
Nvidia's RTX 5080 has dethroned AMD's RX 7900 XTX at the same price – but good luck finding one


  • Nvidia's new RTX 5080 GPU performs better than AMD's Radeon RX 7900 XTX at raw rasterization and ray tracing
  • DLSS 4 performance takes it further, while the RX 7900 XTX only has FSR 3 for now
  • Scalping could leave many people buying AMD GPUs instead

The best of Nvidia's RTX 5000 series GPUs are finally here, with the RTX 5090 ($1,999 / £1,939 / AU$4,039) and the RTX 5080 ($999 / £939 / AU$2,019) launching yesterday. With comparisons now out in the wild, it's clear that the RTX 5080 beats AMD's Radeon RX 7900 XTX while sitting at the same listed price - although the chances of finding one at that price are slim.

In both raw rasterization and (unsurprisingly) ray tracing performance, Team Green's RTX 5080 comes out on top against its rival's flagship RX 7000 series GPU in several games, as is evident in Gamer Meld's comparison on YouTube (available below). Whilst the margin isn't huge (at least in raw rasterization), it completes the job the previous generation's RTX 4080 Super set out to do.

The gap is notable in Black Myth: Wukong, where the RTX 5080 scored an average frame rate of 42fps versus the RX 7900 XTX's 32fps at 4K max graphics settings with no upscaling or ray tracing - a 27% performance difference. With RT Overdrive enabled in Cyberpunk 2077 at 4K max graphics settings and upscaling on (performance mode for both), the RTX 5080 averaged 59.84fps versus the 7900 XTX's 30.02fps.

It's worth noting that this is while Team Green's powerhouse GPU was using DLSS 4 and the RX 7900 XTX was using FSR 3 - you could call it an unfair comparison, but Team Red's FSR 4 will only be available for RDNA 4 GPUs (at least for now), and the GPU in question isn't one of them. We will have to wait just a little longer for more information on what the new RX 9070 series offers (especially while using FSR 4), and whether this could stack up to Nvidia's offerings.

Unless you're lucky enough to grab an RTX 5080 FE before scalpers, you likely won't get it at its listed price

Now, this may be a circumstance where I'd recommend sticking with AMD's RX 7900 XTX if you already own one - the RTX 5080 FE would likely be the better option going forward (especially if DLSS 4 proves better than FSR 4), but scalpers will likely be the main obstacle to purchasing it at a reasonable price.

We've seen this happen on numerous occasions with Nvidia's GPUs and other PC hardware, so expect the same here. It'll likely be much worse for those chasing the RTX 5090 with its $1,999 / £1,939 / AU$4,039 price (which I frankly don't think is worth it if you already own an RTX 4090).

While Nvidia's RTX 5080 is the stronger GPU, the RX 7900 XTX doesn't fall far behind in raw rasterization - ray tracing and upscaling are great, don't get me wrong, but I've already argued that they shouldn't be the deciding factor in a GPU purchase.

Both GPUs share the same listed price, and third-party RX 7900 XTX cards are available for less (the card has been out for two years), making it the easy, most affordable option here. But once more buyers catch wind of these performance comparisons, you'll likely see the RX 7900 XTX disappear from online retailers too as stock runs low. Whether to buy the RTX 5080 comes down to whether you already own AMD's GPU, or a weaker card from either Team Red or Team Green.


DeepSeek and the race to surpass human intelligence

Back in October, I met with a young German start-up CEO who had integrated the open-source approach by DeepSeek into his Mind-Verse platform and made it comply with German data privacy (DSGVO) standards. Since then, many rumors have been circulating that China has chosen a different architectural structure for its foundation model—one that relies not only on open source, but is also much more efficient, requiring neither the same level of training data nor the same compute resources.

When it comes to DeepSeek, this is not a singular “breakthrough moment.” Rather, AI development continues on an exponential trajectory: progress is becoming faster, its impact broader, and with increasing investment and more engineers involved, fundamental breakthroughs in engineering and architecture are just beginning. Contrary to some market spokespeople, investors, and even certain foundation model pioneers, this is not solely about throwing infinite compute at the problem; we are still far from understanding core aspects of reasoning, consciousness, and the “operating model” (or software layers) of the human mind.

Additionally, DeepSeek is (or was) not a government-sponsored initiative; supposedly, even China's premier was surprised and visited Hangzhou to understand what was happening. Scale AI founder Alexandr Wang claims that China already has a significant number of powerful H100 GPUs (about 50,000), though, given U.S. export laws, this is not publicly acknowledged. DeepSeek is reported to have only about 150 engineers, each earning in the range of $70–100k, eight to ten times less than top engineering salaries in Silicon Valley.

So, regardless of whether they have powerful GPUs or whether $6 million or $150 million was invested, it is nowhere near the billions—or tens of billions—poured into other major AI competitors. This example shows that different engineering and architectural approaches do exist and may be waiting to be uncovered. Most likely, this is not the ultimate approach, but it also challenges the current VC narrative that “it’s all about compute and scale.” Moreover, the open-source mindset behind DeepSeek challenges the typical approach to LLMs and highlights both the advantages and the potential risks.

Sam Altman is rumored to be hosting a “behind-closed-doors” meeting with the Trump administration on January 30th, where he plans to present so-called “PhD-level” AI agents—or super agentic AI. How “super” this will be remains unclear, and it is unlikely there will be any public declaration of achieving AGI. Still, when Mark Zuckerberg suggests Meta will soon publish substantial progress, and Elon Musk hints at new breakthroughs with Grok, DeepSeek is just another “breakthrough” that illustrates how fast the market is moving.

Once agentic AIs come online, they introduce a structural shift: agentic AI is not about merely responding to a prompt, but about pursuing a goal. Through a network of super agents, massive amounts of data are gathered and analyzed, while real products and tasks are delivered autonomously. Tellingly, Sam Altman is not making a public appearance or release; his meeting with the U.S. government hints at potential risks and consequences.

We Are on the Verge of Hyper-Efficiency and Hyper-Innovation

What we are seeing is the compound effect of investment and ever-growing teams working on these models, with few signs of a slowdown. Needless to say, any quantum breakthroughs would be the next frontier—essentially “AI on steroids”—where the magnitude of change could increase exponentially. On the positive side, this can unleash innovations in health and medicine like never before in human history.

In the near future, broader access to AI tools will probably benefit infrastructure providers and hyperscalers such as AWS. It is unclear if this will put NVIDIA at a disadvantage or actually benefit it: as “everyone” joins the AI race, there could be more demand for compute, not just from big U.S. tech players like OpenAI. Meanwhile, Anthropic and OpenAI run closed ecosystems, but DeepSeek’s public paper shares many of its core methods.

The greatest risk to the U.S. and its current AI dominance is that China does have talent and the strong work ethic to keep pushing forward. Trade sanctions won’t stop that. As more engineers come together and keep working, the odds of major breakthroughs increase.

The Battle of Distrust

Globally, the U.S. is losing trust. The “don’t trust China” narrative is fading in many parts of the world. While Donald Trump gains respect on the surface, global leaders are quietly looking for alternatives in the background to mitigate their dependence. Europe and other Asian nations don’t want to be “hostage” to U.S. technology and will open up to new options.

Technology doesn’t evolve overnight, and we’ve only seen the start of the breakthroughs to be announced by Grok, Meta, and OpenAI. Simultaneously, new capital will continue pouring in, and other regions will join the race, now that it’s clear money alone isn’t everything. The future might not necessarily be bad for NVIDIA, either, since data centers could appear everywhere, enabling a more global roll-out of AI and creating opportunities for many.

From Prompting to Action

There are still numerous smaller AI companies that have received massive funding purely on hope and hype. Yet new approaches to foundation models—via architectural and engineering innovation—can continue to drive progress. And once we “hack” biology or chemistry with AI, we may see entirely new levels of breakthroughs.

Looking toward the rest of 2025, we can expect more “super-agent” breakthroughs, as agentic AI and LQMs (Large Quantitative Models) push generative AI beyond fun language-based tools to genuine human worker replacements. Not only will financial modeling and analysis be optimized, but also execution—the entire cycle of booking, planning, and organizing—could shift to autonomous agents. Over time, these integrated, adaptive agents will replace more and more use cases where humans currently remain in the loop. This might also be one of the biggest threats to society: coping with extreme pressures on market economies under hyper-efficiency and hyper-innovation. In 2025, we are likely to see breakthroughs in education, science, health, consulting, and finance. With multiple compounding effects in play, we’ll likely experience hyper-efficiency and widespread growth.

However, the looming threats are real. Agentic, at-scale AI can still fall victim to hallucinations, and now anyone with a few million dollars can build their own model—potentially for malicious use. While a global, open approach to AI can be positive, many engineering and research challenges remain unsolved, leaving high risks. With the U.S. laser-focused on AI, the race to surpass human-level intelligence is on.

We list the best Large Language Models (LLMs) for coding.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




The Samsung Galaxy S25's best software feature just got Google Maps support


  • The Now Bar in One UI 7 displays live information from apps
  • As promised, Google Maps support has now been added
  • The first Galaxy S25 preorders are being shipped out to customers

As our Samsung Galaxy S25 Ultra review will tell you, Samsung's latest flagship phones come packed with AI features, such as the useful Now Bar that gives you live updates from apps in real time – and that feature has just got Google Maps support.

The new Google Maps support was spotted by early adopters including @theonecid (via Neowin), and means you'll be able to see how far you are from your destination and the directions you need to take, even if you're using another app.

Samsung has only just started sending out the first preordered Galaxy S25 handsets to customers, but if and when you get your hands on one, make sure you're running the latest version of Android and the latest version of Google Maps to see the live activities.

We had previously heard that Google Maps support would be on the way at the Samsung Galaxy S25 launch event, so it's pleasing to see that Google and Samsung have rolled out a compatibility update so quickly – and let's hope many more apps follow suit, to make full use of the Now Bar's potential.

Looks familiar

The Now Bar is part of the One UI 7 update that Samsung has been working on, which is based on Android 15. Having launched alongside the Galaxy S25 phones, the software should be made available for older handsets in the coming months (it's already available in beta form for other Galaxy phones).

It's quite an obvious copy of the Live Activities feature added to iPhones back in 2022 with iOS 16, and in some respects the Now Bar mimics the same functionality you get from the Dynamic Island on the latest Apple smartphones.

Even if it's not the most original of features, it's still helpful to have. Other apps the Now Bar works with right now include Bixby, Clock, Emergency Sharing, Google (for sports scores), Interpreter, Samsung Health, Samsung Notes, and Voice Recorder.

The Now Bar also works on the lock screen and the always-on display, so you can get constant updates from Google Maps – updates that currently appear in a persistent Android notification that's not quite as easy to access.


Cooling high-density data centers with coolant distribution units

The rise of Generative AI, from tools like ChatGPT to a new wave of AI tools, is reshaping technology demands, driving a surge in the need for advanced CPUs and GPUs. These powerful processors, with ever-increasing thermal design power (TDP)—like Nvidia’s GB200 Grace Blackwell superchip consuming 1,200W per GPU, with racks drawing 120kW—are pushing power and cooling requirements to unprecedented levels, making traditional air cooling solutions insufficient.

Enter liquid cooling. With adoption projected to grow at a 45% compound annual growth rate (CAGR) from 2023 to 2028, direct-to-chip liquid cooling solutions, powered by coolant distribution units (CDUs), are transforming how data centers handle the heat generated by high-performance AI workloads. Cooling efficiency isn’t just a consideration—it’s a necessity as AI processors continue to push power and density boundaries.

This article examines how CDUs support thermal management, enabling operators to meet the relentless cooling demands of AI and high-performance computing while paving the way for the next generation of data center efficiency.

What is a Coolant Distribution Unit (CDU)?

A CDU is the heartbeat of a liquid cooling system, carefully regulating coolant temperature and flow rates to maintain optimal cooling efficiency. By managing the coolant flow to IT equipment and returning the IT heat to the facility’s water for re-cooling, CDUs help stabilize temperatures and minimize the risk of overheating. In direct liquid cooling, the CDU plays a vital role by allowing for temperature conditioning to prevent condensation and by isolating the IT equipment from harsher facility water, which may contain mineral deposits, particulates, and other impurities that could damage cooling systems or reduce efficiency.
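
To make the flow-regulation job concrete, here is a rough back-of-the-envelope sizing sketch in Python. It applies the standard heat-transfer relation Q = m_dot x cp x dT; the water-like coolant and the 10 K allowable temperature rise are our assumptions for illustration, not figures from this article:

SPECIFIC_HEAT_WATER = 4186  # J/(kg*K), approximate for water near 30 C

def required_flow_kg_per_s(heat_load_w: float, delta_t_k: float) -> float:
    # Mass flow needed to absorb heat_load_w with a delta_t_k temperature rise:
    # Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)
    return heat_load_w / (SPECIFIC_HEAT_WATER * delta_t_k)

rack_load_w = 120_000  # the 120kW rack figure cited above
flow = required_flow_kg_per_s(rack_load_w, delta_t_k=10.0)  # assumed 10 K rise
print(f"~{flow:.2f} kg/s, roughly {flow * 60:.0f} L/min of water")

Holding that flow (and the corresponding supply temperature) steady as rack load swings is precisely the control problem a CDU's pumps and valves solve.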

CDUs are also pivotal for increasing system longevity. According to a recent Uptime Institute study, over 70% of unplanned downtime in data centers is linked to power and cooling system failures. With advanced CDUs, operators can reduce such risks, ensuring reliable operations even under heavy workloads.

Different Types of CDUs: Liquid-to-Liquid vs Liquid-to-Air

Liquid-to-Liquid and Liquid-to-Air CDUs both serve the essential function of cooling IT equipment but are suited for different environments based on their cooling mechanisms and efficiency levels. Liquid-to-Liquid CDUs are ideal for facilities with access to facility water, offering high cooling capacity and efficiency due to the superior thermal conductivity of water.

These systems transfer heat from the IT equipment’s coolant loop to the facility’s water loop using a heat exchanger, which is well-suited for high-density environments. In contrast, Liquid-to-Air CDUs use air-cooled radiators and fan systems to dissipate heat into the surrounding air, making them a better choice for locations without access to facility water, though they typically offer lower overall cooling capacity and efficiency.

While both systems include similar core components (such as pumps, temperature control systems, and filtration), the key difference lies in how they transfer heat. Liquid-to-Liquid CDUs rely on primary and secondary pumps to circulate coolant and water, while Liquid-to-Air CDUs depend on fans to move air over radiators. Maintenance for both systems is essential, though Liquid-to-Air CDUs may require more frequent attention to air filters and fan components, whereas Liquid-to-Liquid systems require monitoring of the water loop and its cleanliness.

Ultimately, the choice between a Liquid-to-Liquid or Liquid-to-Air CDU depends on the available infrastructure and specific cooling and efficiency goals, with Liquid-to-Liquid systems excelling in high-density environments and Liquid-to-Air systems offering flexibility where water access is limited.

The adoption of liquid cooling is on a sharp upward trajectory. In 2023, only 10% of data centers used liquid cooling, but this figure is expected to reach 50% by 2030. Driving this shift are the increasing thermal demands of AI and HPC and environmental considerations. Traditional air cooling can consume up to 40% of a data center’s energy, with goals for advanced cooling to reduce energy expenditures to as little as 5% of total IT load. This is a significant factor in reducing overall Power Usage Effectiveness (PUE).

CDUs are also addressing water conservation challenges. Records obtained by the Financial Times reveal that data center water consumption has spiked by nearly two-thirds since 2019, with over 1.85 billion gallons consumed in 2023 compared to 1.13 billion gallons in 2019. Technologies that eliminate reliance on evaporative cooling are becoming indispensable. Eco-friendly coolants and closed-loop systems are gaining traction, helping operators reduce their environmental footprint.

Conclusion

As AI and HPC workloads intensify, data centers must adopt cutting-edge solutions like CDUs to stay ahead. By enabling efficient, scalable cooling, CDUs not only support operational demands but also align with sustainability goals. With liquid cooling expected to dominate the market by 2030 and the data center cooling market set to exceed $20 billion by 2028, the time to embrace these technologies is now. CDUs are not just a tool for cooling—they are the foundation for a sustainable, high-performance future.

We list the best web hosting services.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




It’s time to catch up with cyber attackers

The cyber threat landscape has reached a critical tipping point.

According to the UK government's 2024 Cyber Security Breaches Survey, a staggering 50% of businesses experienced some form of cyber breach in the last 12 months, with this figure rising dramatically to 70% for medium businesses and 74% for large businesses.

Phishing attacks dominate the threat landscape, accounting for 84% of business breaches, followed by email impersonation (35%) and malware (17%).

The statistics, while alarming, reveal only part of the challenge facing organizations today. The most pressing issue isn't just the increasing frequency of attacks, but also the growing disparity between how quickly attackers can breach systems and how long organizations take to respond.

Contemporary security technologies can detect threats within minutes, yet the average time for organizations to fully identify and contain an incident stretches to about 20 days – with full system recovery taking far longer. This extended vulnerability window gives cybercriminals ample time to infiltrate networks, compromise sensitive data and even establish backdoors for future attacks.

Recent headlines have highlighted the devastating impact of delayed response times across various sectors. From the UK Air Traffic Control's miscommunicated cyber incident last year to UnitedHealth's delayed response to a massive data leak in April this year, as well as ongoing challenges faced by British ambulance services and the Sellafield nuclear site, the impact of inadequate response times continues to be felt.

These incidents underscore a troubling reality. When organizations cannot respond swiftly to cyber threats, the consequences ripple far beyond immediate operational disruption. The financial toll is substantial – IBM reports a 10% increase in the average cost of a data breach in 2024, rising to $4.8 million.

The evolution of cybersecurity tools

That said, the cybersecurity industry has made remarkable strides in developing defensive technologies, yet many organizations struggle to maximize their potential.

Modern Extended Detection and Response (XDR) platforms represent a significant advancement, offering sophisticated threat detection and automated response capabilities that can identify and neutralize threats across an organization's entire IT infrastructure.

The latest generation of security tools also incorporates predictive capabilities, leveraging vast databases of threat intelligence to anticipate and prevent attacks before they materialize. These systems can link seemingly unrelated events across different parts of the network, identifying subtle patterns that might indicate an emerging threat – a key part of taking detection timelines from days to hours.
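
As a loose illustration of that linking step, the toy Python sketch below flags a user whose events span several telemetry sources within a short window. The event shapes and the three-source threshold are invented for the example; real XDR platforms do this at scale with far richer models:

from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"source": "email",    "user": "alice", "time": datetime(2024, 5, 1, 9, 0),  "kind": "phish_click"},
    {"source": "endpoint", "user": "alice", "time": datetime(2024, 5, 1, 9, 4),  "kind": "new_process"},
    {"source": "network",  "user": "alice", "time": datetime(2024, 5, 1, 9, 7),  "kind": "unusual_egress"},
    {"source": "endpoint", "user": "bob",   "time": datetime(2024, 5, 1, 11, 0), "kind": "new_process"},
]

def correlate(events, window=timedelta(minutes=15), min_sources=3):
    # Flag users whose events span several telemetry sources inside one window.
    by_user = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_user[e["user"]].append(e)
    alerts = []
    for user, evs in by_user.items():
        for i, first in enumerate(evs):
            burst = [e for e in evs[i:] if e["time"] - first["time"] <= window]
            if len({e["source"] for e in burst}) >= min_sources:
                alerts.append((user, [e["kind"] for e in burst]))
                break
    return alerts

print(correlate(events))  # [('alice', ['phish_click', 'new_process', 'unusual_egress'])]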

This evolution from reactive to proactive defense represents a crucial step forward in closing the response time gap. However, the form of the data remains crucial. Too often, we see organizations dealing with theoretical data as opposed to actual, real-time information. Relying on the former may prove effective in theory; in practice, it’s a different story altogether. No two organizations' defenses are the same.

Building a cyber safe culture

Indeed, creating an effective cyber defense requires more than deploying the latest security tools – it demands a fundamental shift in organizational culture.

Security posture assessments need to become an ongoing process rather than a periodic checkbox exercise. By continuously evaluating and adjusting defenses, organizations can identify and address vulnerabilities before attackers have the chance to exploit them. The integration of artificial intelligence and machine learning capabilities has become key to this effort, not least because it reduces the time needed to spot and investigate potential threats, but also because it brings contextual data into play, allowing a more informed response.

Best practices for rapid response

Indeed, a robust cybersecurity strategy must seamlessly integrate people, processes and technology.

Security teams require immediate access to clear and actionable threat intelligence through intuitive interfaces that support rapid decision-making. Protection must extend across the entire attack surface, from cloud infrastructure to remote work endpoints, to create a unified defense against increasingly sophisticated threats.

Modern security platforms can automate initial containment measures, which will buy precious time for security analysts to investigate and respond to incidents. However, technology must be supported by clear protocols for incident communication and stakeholder coordination. While building these defenses requires significant investment, the potential costs of a serious breach can be markedly higher – both in immediate financial terms and long-term reputational damage.

The most effective rapid response strategies now incorporate real-time monitoring of the complete environment. In the best cases, this monitoring is bolstered by strong detection and response processes, which provide the right level of insight into each individual risk and the damage it could cause. From there, cyber teams can quickly understand the scope and nature of any security incident, enabling faster and more targeted responses.

Looking ahead

A proactive security posture, supported by continuous adaptation and improvement, has become essential for survival. This means not only keeping pace with emerging threats but anticipating and preparing for tomorrow's challenges. It means being cyber safe – not just cyber secure. The current gap between attacker capabilities and defender response times represents one of the most pressing challenges in modern cybersecurity. However, organizations that combine a cutting-edge mindset, the right technology, robust processes and a cyber safety-conscious culture can work to close this gap. The objective isn't merely to catch up with cyber attackers – it is to stay ahead of them.

Check out our list of the best identity management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




“This is a wake-up call” - the DeepSeek disruption: 10 experts weigh in

New AI chatbot DeepSeek is taking the technology world by storm, and has even dethroned ChatGPT from top spot on the iOS App Store.

There's certainly a lot of speculation about the potential DeepSeek holds, as well as what this could mean for some of the biggest players in tech right now. Plenty of both new and existing firms have invested billions of dollars into Large Language Models in the last few months alone - but DeepSeek has the opportunity to cause chaos in the venture capital world.

Having triggered the biggest single-day loss of company market value in US stock market history, the chatbot is understandably a huge talking point - but what are the experts’ predictions?

Will the hype last?

There’s no denying that DeepSeek arriving on the scene has been disruptive, and although the expert opinion isn’t unanimous, the significantly lower development cost is definitely turning heads.

If you're confused about what DeepSeek is, or why it's causing so much of a stir, check out our article here explaining all you need to know.

The pressure is certainly on for US tech firms to respond, but it may not be as destructive as it seems.

We heard from 10 experts in the technology industry:

Steve Povolny, Senior Director of Security Research & Competitive Intelligence at Exabeam:

"The release of Chinese-developed DeepSeek has thrown US tech markets into turmoil; this is both justifiable and also perhaps, a bit overblown. The emergence of a technology that ultimately optimizes chip usage and efficiency is likely to apply pressure on existing large chip vendors, which is a very good thing. As the adage goes: "Pressure yields diamonds" and in this case, I believe competition in this market will drive global optimization, lower costs, and sustain the tailwinds AI needs to drive profitable solutions in the short and longer term."

Mike Follett, CEO and Founder of Lumen Research:

“Last week, Sam Altman and Elon Musk were AI monopolists. Now, it is clear that competition will reduce costs and loosen the grip of the biggest players. This represents an opportunity for marketers, enabling them to build agents designed for specific problems at a speed, scale and price that drives ROI.”

Nigel Green, CEO of deVere Group:

“This is a wake-up call for markets. The assumption that tariffs could contain China’s technological ambitions is being dismantled in real time. DeepSeek’s breakthrough is proof that innovation will always find a way forward, regardless of economic barriers.”

“By restricting China’s access to high-end semiconductors, Washington sought to slow its progress in AI. Instead, it has fueled an acceleration in domestic innovation, forcing Chinese firms to find alternatives. DeepSeek’s achievement is a direct result of this shift.

“Rather than being crippled by US sanctions, Beijing has cultivated AI models that require significantly less computing power, diminishing its reliance on American technology and eroding US leverage over global supply chains.”

The value

Aleksandr Yampolskiy, CEO of SecurityScorecard:

“DeepSeek is trained on 14.8 trillion diverse tokens whereas OpenAI is trained only on 13 trillion. It also costs radically less to train DeepSeek at $6M while OpenAI costs allegedly $100M, making DeepSeek 16.6X more efficient.”

“We are living in fascinating times. While "constraints in capital" may seem like a challenge, history has shown us (and DeepSeek has demonstrated) that these constraints often spark innovation and creativity. Security for AI will only become more critical. In a world where the lines between deepfake and human-generated content blur, and where biased information can shape our opinions, the need for robust security and ethical practices will grow exponentially.”

Dr. Kjell Carlsson, Head of AI strategy at Domino Data Lab:

“Deepseek’s success serves as a powerful reminder that the value of AI lies not in the size of your infrastructure or the exclusivity of your models, but in how effectively they are leveraged to deliver impact. By developing cutting-edge generative AI models without relying on the latest, most expensive hardware, Deepseek has demonstrated that agility and strategy can outpace raw computational power. Their achievements also highlight the vulnerability of incumbents in the generative AI space—proving that open-source innovation continues to be a powerful equalizer, enabling challengers to match and even surpass established players years into the revolution.”

“For companies seeking to maximise value from AI, the lesson is clear: success hinges on flexibility and capability, not exclusive partnerships or infrastructure scale. Rather than locking into specific LLM providers or focusing solely on hardware access, organisations should prioritise building the end-to-end capabilities to source AI innovations, design solutions tailored to their unique needs, and operationalise them effectively. This approach ensures that businesses remain agile, competitive, and prepared to harness the next wave of AI advancements, wherever they emerge.”

Bradford Levy, Assistant Professor of Accounting, University of Chicago Booth School of Business:

“DeepSeek has sent shock waves through the tech industry – directly challenging tech giants like Meta, Microsoft and OpenAI."

“Until now, it’s been assumed their expertise in designing and operating large-scale distributed systems is essential for training state-of-the-art models. But the development of R1 suggests otherwise – if these models can be trained using 90% fewer chips, the implications for valuation models are massive."

“This opens the door for smaller, more agile players to compete, potentially driving more innovation. With limited resources, they proved that scrappy, innovative teams can shake up the industry, even on a shoestring budget."

“While impressive, we should remain skeptical of any claims made by those with a vested interest in their own success. Before jumping to conclusions about the broader AI landscape, we need more time to test these models and understand how they achieved these numbers.”

The revolution

Professor Geoff Webb, Department of Data Science & AI, Faculty of Information Technology, Monash University:

“The emergence of DeepSeek is a significant moment in the AI revolution. Until now it has seemed that billion-dollar investments and access to the latest generation of specialised NVIDIA processors were prerequisites for developing state-of-the-art systems.”

“This effectively limited control to a small number of leading US-based tech corporations. Due to US embargoes on exporting the latest generation of NVIDIA processors, it also locked out China.”

“DeepSeek claims to have developed a new Large Language Model, similar to ChatGPT or Llama, that rivals the state-of-the-art for a fraction of the cost using the less advanced NVIDIA processors that are currently available to China. If this is true, it means that the US tech sector no longer has exclusive control of the AI technologies, opening them to wider competition and reducing the prices they can charge for access to and use of their systems.”

“Looking beyond the implications for the stock market, current AI technologies are US-centric and embody US values and culture. This new development has the potential to create more diversity through the development of new AI systems.”

“It also has the potential to make AI more accessible for researchers around the world both for developing new technologies and for applying them in diverse areas including healthcare.”

The risks

Aditya Sood, VP of Security Engineering and AI Strategy at Aryaka:

"Open-source AI models like DeepSeek, while offering accessibility and innovation, are increasingly vulnerable to supply chain attacks triggered during large-scale cyberattacks. These attacks, where adversaries exploit the reliance on third-party dependencies, pre-trained models, or public repositories, can have severe consequences. Adversaries may tamper with pre-trained models by embedding malicious code, backdoors, or poisoned data, which can compromise downstream applications. Additionally, attackers may target the software supply chain by manipulating dependencies, libraries, or scripts used during model training or deployment. This can lead to systemic AI functionality corruption."

Renuka Nadkarni, CPO at Aryaka:

"The sudden popularity of DeepSeek comes at a price. There are two dimensions of this. First, threat actors are likely to adopt this new tool now that it's widely available. Second, DeepSeek was a victim of a large-scale malicious attack. This means that their system could be compromised and subject to several of the known AI model attacks. Known AI model vulnerabilities, data risks, and infrastructure threats come into play here."

“While the unavailability of the service is an easy and visible attack on its infrastructure, the bigger concern lies in the undetected attacks on its model and data. These hidden threats could compromise benign users and enable other malicious activities."

The sceptics

Dan Goman, CEO, Ateliere Creative Technologies:

"The market’s reaction to the latest news surrounding DeepSeek is nothing short of an overcorrection. While the enthusiasm around breakthroughs in AI often drives headlines and market speculation, this feels like yet another case where excitement has outpaced evidence. Investors should be cautious about blindly jumping on the hype train without asking the tough questions."

"In summary, while Deepseek’s story is intriguing, it’s imperative to separate fact from speculation. The market needs to temper its enthusiasm and demand more transparency before awarding DeepSeek the crown of AI innovation. Until then, skepticism remains a healthy and necessary stance."


From smart cities to streaming: 2025 wireless tech predictions

As 2025 arrives, it’s clear that the internet’s vast capabilities – spanning cloud services to emerging technologies like AI – depend on robust infrastructure. From smart cities to streaming services, while data might be the lifeblood of modern organizations, connectivity is the beating heart. No single technology can meet the world’s growing and diverse connectivity demands – nor should it. Instead, an intricate digital tapestry will emerge, reshaping industries and transforming the global economy.

To paint a picture of this future, here are my predictions for connectivity in 2025.

1. More people will move beyond fiber – embracing the mmWave spectrum

The notion that fiber will reach every corner of the world will be exposed as unrealistic and undesirable. While fiber makes sense in many areas, it’s not a silver bullet; financial and logistical challenges make it impractical in low-density regions. As such, mmWave will gain more recognition as a practical and desirable complement to fiber. Offering gigabit speeds wirelessly in hard-to-reach areas, mmWave delivers reliable, high-speed internet without the extensive groundwork required for fiber installation – and the wheels are already in motion.

Recently, Verizon unveiled an ambitious plan to double its fixed wireless access (FWA) subscribers to 8-9 million by 2028. By deploying mmWave radio frequency (RF) technology, the operator is targeting coverage of 90 million households, having reached its goal of 4-5 million subscribers 15 months early. This momentum reflects the growing importance of mmWave in delivering high-speed broadband services.

In 2025, expect to see a surge in the adoption of mmWave technology, particularly in markets where traditional fiber rollouts are impractical or costly. For example, the UK is preparing for the mmWave spectrum auction, unlocking high-frequency bands that promise blazing-fast 5G and transformative services across industries. From consumer electronics to smart cities, 2025 will reveal just how critical mmWave bands are for transforming our digital economy.

2. Smart cities will adopt wireless infrastructure as a key complement to fiber

Well-chosen technology will make cities safer while improving accessibility and sustainability. Local government leaders that invest in high-capacity, low-latency technologies will be able to support top-tier CCTV cameras, sensors, autonomous systems, and smart grids – strengthening security and improving outcomes for citizens. Those that don’t will struggle with inefficiencies, limited scalability, and safety risks.

For example, 4K CCTV cameras will resolve the technology's most common problem: that footage is too low quality to prevent crime effectively or to be used convincingly as evidence. Wireless outdoor infrastructure will enable widespread 4K CCTV and leave this problem in the past where it belongs.

Also on the rise are smart poles with integrated connectivity options, which provide urban areas with efficient and scalable networking solutions. According to ABI Research, more than 10.8 million smart poles will be installed by 2030, and it’s expected that 20% will need wireless connectivity. Every connected pole will need to deliver gigabits per second; as a critical connectivity option for capacity-hungry applications, mmWave will prove a key enabler. With leading players working actively on new solutions, next year will see this shift continue.

And let’s not forget autonomous vehicles: this might seem like a faraway future, but 2024 saw Waymo take driverless technology to new heights with its robotaxis, so it’s only a matter of time before this becomes a reality for public transport. Once this happens, smart vehicles like buses and trams will generate vast amounts of data that need to be transferred to and from the cloud. This won’t be possible without wireless connectivity; city leaders who prioritize this when planning IT infrastructure will unlock not only greater efficiency but also adaptability, enabling them to keep up with constantly evolving demands.

3. mmWave-based fixed wireless access will help bridge the digital divide

In developed regions and emerging countries alike, the digital divide remains one of the most pressing connectivity challenges worldwide. In established regions like North America, underserved rural areas often lack reliable high-speed internet, despite significant infrastructure investments elsewhere in the country. Meanwhile, in emerging markets like Africa, even cities face limited internet access due to a lack of wireline connectivity and congestion of traditional spectrum options.

2025 will see mmWave-based FWA materialise as a powerful solution to plug these gaps. This is because mmWave technologies operating in the 60GHz band offer a cost-effective way to deliver ultrafast, low-latency connectivity. By bypassing the need for extensive physical infrastructure, mmWave will help democratize internet access and unlock new economic potential on a global scale.

4. 60GHz will gain traction in wireless video and enterprise applications

In 2025, the 60GHz spectrum will come into its own, transforming wireless video applications across sectors. From immersive gaming and entertainment experiences to enterprise-grade video conferencing, ultrafast and low-latency connectivity will underpin the rise of professional applications.

Virtual reality (VR), wireless HDMI, and ultra-wide screens will increasingly rely on the speed and quality of 60GHz connectivity. These developments will also play a crucial role in facilitating impactful AI and big data analysis, alongside cloudification and network function virtualization (NFV).

Additionally, as consumer and enterprise demands grow, it will become increasingly understood that business success depends not just on talent and innovation but on the speed and quality of supporting connectivity infrastructure. As 2025 will prove, time will only become a more precious commodity, so anything that speeds up progress reliably will be most welcome.

So what does this mean for infrastructure suppliers? Pushing the boundaries of connectivity — from power efficiency to manufacturability — means tackling competing demands for high performance, reliability, and energy efficiency, while striving to make products accessible and affordable. It's a constant balancing act between advancing innovation and meeting the real needs of global connectivity; striking that balance will shape 2025 as much as any technical milestone. In practice, this means adopting an adaptable approach that doesn’t just solve today’s problems but also anticipates those of tomorrow. On behalf of all digital citizens, current and future: bring on 2025.

We've compiled a list of the best cloud storage.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




AI in 2025: Moving beyond code generation to intelligent development platforms

The software development landscape of 2024 has revealed both the potential and limitations of current AI coding tools. While 74% of developers have embraced these tools, a revealing Stack Overflow study shows 36% struggling with code reliability. This isn't a failure of AI – it's a clear indication of where AI tools need to evolve.

The emergence of "AI Debt" – the hidden costs arising from hastily deployed AI-generated code – in industry discussions highlights a crucial challenge: AI-generated code often requires extensive review and optimization before it's production-ready, diminishing the efficiency gains made in the initial code generation process.

However, studies showing tools like Copilot introducing up to 41% more bugs point not to AI's limitations, but to the need for more sophisticated approaches to AI-assisted development. What are these approaches?

Looking ahead to 2025, here are five key developments that will transform how we use AI in software development.

1. Intelligent Context Modelling Will Transform Code Generation

Next-generation AI will move beyond simple pattern matching to true contextual understanding. These systems will build comprehensive models of your codebase, architecture, and development patterns, ensuring every suggestion fits seamlessly into your existing ecosystem.

So instead of producing isolated snippets of code, they will provide suggestions that align with the broader software design and help to predict issues in performance, security and scalability.

This deep context awareness will dramatically reduce the current effort required to adapt AI-generated code to production environments. And not only will this save on developer time, but contribute to a new standard in quality for creating performance-ready AI-generated code.
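
As a very rough sketch of the retrieval half of this idea, the Python below picks the most relevant code chunks for a request using naive token overlap. This is only a stand-in for the much richer semantic models of a codebase described above, and the snippets are invented for the example:

import re

def tokenize(text: str) -> set[str]:
    # Crude lexer: lowercase identifiers and numbers only.
    return set(re.findall(r"[a-z0-9_]+", text.lower()))

def rank_chunks(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    # Return the top_k chunks sharing the most tokens with the query.
    q = tokenize(query)
    return sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)[:top_k]

codebase = [
    "def save_user(db, user): db.insert('users', user)",
    "def render_header(title): return f'<h1>{title}</h1>'",
    "def load_user(db, user_id): return db.get('users', user_id)",
]
print(rank_chunks("add a function to update a user in the db", codebase))
# Prints the two user/db helpers, not the unrelated HTML helper.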

2. Multi-Large Language Model (LLM) Architectures Will Replace Single-Model Dependencies

The limitations of single-model approaches will give way to sophisticated multi-LLM architectures. These systems will treat coding LLMs as modular infrastructure components, using advanced prompt engineering and model orchestration to leverage the strengths of different models. For example, one model may be optimized for code syntax and another for code refactoring.

This means companies can access parallel processing and use different LLMs best suited to process different tasks, while also enhancing their reliability by being less dependent on one model. Such adaptability will also allow companies to scale more effectively and cost-efficiently.

This architectural shift will free developers from vendor lock-in while enabling more sophisticated code generation and optimization capabilities. We’re already seeing this start to happen as the benefit of multi-modal becomes more apparent.
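
A minimal sketch of such an orchestration layer in Python follows; the model names and the generate interface are placeholders we invented, not any real vendor API:

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelClient:
    name: str
    generate: Callable[[str], str]  # prompt in, completion out

def route(task_kind: str, prompt: str, registry: dict[str, ModelClient]) -> str:
    # Dispatch to the specialist registered for this task kind,
    # falling back to a general-purpose model if none exists.
    client = registry.get(task_kind, registry["general"])
    return client.generate(prompt)

registry = {
    "general":  ModelClient("general-llm",  lambda p: f"[general] {p}"),
    "syntax":   ModelClient("codegen-llm",  lambda p: f"[codegen] {p}"),
    "refactor": ModelClient("refactor-llm", lambda p: f"[refactor] {p}"),
}
print(route("refactor", "Extract this loop into a function.", registry))

Because every specialist sits behind the same interface, swapping a model in or out is a registry change rather than a rewrite, which is where the relief from vendor lock-in comes from.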

3. AI-Driven Code Evolution Through Genetic Algorithms

Static code generation will evolve into dynamic code optimization through genetic algorithms. Based on Darwinian principles, these systems will continuously generate, test, and refine code variations, automatically selecting the best performers based on specific metrics. For instance, the first batch of AI-generated code is the gene pool, which then, through the use of genetic algorithms, undergoes evolutionary processes, with code tested against metrics such as processing efficiency and memory usage. It’s survival of the fittest but for code optimization.

This evolutionary approach ensures code continuously improves as system and business requirements change rather than remaining static after initial generation.
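
The selection loop itself is easy to sketch. In the toy Python below, a numeric vector stands in for a code variant and an invented sum-of-squares score stands in for real metrics such as processing efficiency or memory usage; scoring actual generated code is the hard part this example deliberately skips:

import random

def fitness(variant: list[float]) -> float:
    # Stand-in metric: pretend a smaller sum of squares means "better code".
    return -sum(x * x for x in variant)

def mutate(variant: list[float], rate: float = 0.3) -> list[float]:
    # Randomly perturb some genes to create a new candidate.
    return [x + random.gauss(0, 0.5) if random.random() < rate else x for x in variant]

def evolve(pop_size: int = 20, genes: int = 4, generations: int = 50) -> list[float]:
    population = [[random.uniform(-5, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]                            # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]  # variation
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # drifts toward the zero vector, the "fittest" variant here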

4. Automated Validation Will Shift Left in Development

Real-time validation will become an embedded part of the development process. AI systems will automatically verify security, performance, and compatibility as code is written, not after, ensuring low-quality code is filtered out during the process.

This shift-left approach will integrate comprehensive testing and validation directly into the development workflow, significantly reducing post-generation review time. Ultimately, this will accelerate the overall software development cycle while also improving quality.
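
For a flavor of what an in-workflow gate can look like, the small Python check below parses a generated snippet and rejects it on a toy rule. The banned-call list is an invented stand-in for the much broader security, performance and compatibility checks described above:

import ast

BANNED_CALLS = {"eval", "exec"}  # toy "security" rule for illustration

def validate(code: str) -> list[str]:
    # Return a list of problems; an empty list means the snippet passes.
    try:
        tree = ast.parse(code)
    except SyntaxError as e:
        return [f"syntax error: {e}"]
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                problems.append(f"banned call '{node.func.id}' at line {node.lineno}")
    return problems

print(validate("result = eval(user_input)"))  # ["banned call 'eval' at line 1"]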

5. Next-Gen Intelligence Platforms Will Revolutionize Developer Workflows

The culmination of these advances will be intelligent platforms that fundamentally transform development workflows. These platforms will orchestrate multiple AI technologies while continuously learning from every interaction, code review, and deployment outcome.

What makes these platforms revolutionary is their ability to evolve alongside your development practices. By learning from successful implementations, failed attempts, and developer feedback, they'll become increasingly sophisticated in their understanding of what makes code not just functional, but optimal for specific contexts and requirements. They will evolve and improve with each iteration.

For developers, this evolution means moving beyond simple code completion to truly intelligent development assistance that understands your unique technical environment and objectives. These platforms won't just suggest code – they'll help create better, more reliable software while reducing the manual overhead that currently limits AI's potential in development.

Making the move to intelligent development platforms

The rapid development and hype around AI has led to a majority of software developers adopting AI tools for coding. But with this widespread adoption, the next step in the evolution of these tools is to significantly improve the reliability, quality and performance of AI-generated code. With current processes, much skill, time and effort are required to adapt and maintain code after it is generated. This is where the next generation of tools will start to make their impact.

Intelligent context modelling and multi-LLM architectures will be a new breed of tools significantly reducing the effort involved in code generation and enhancing optimization capabilities. When code is being generated, genetic algorithms will use natural selection principles to ensure the best lines of code remain, while real-time validation will play its part in enhancing quality as the code is being written.

These advances will culminate in the next generation of intelligent platforms which continuously learn and evolve alongside a developer’s specific practices. Ultimately, in 2025, rather than simply using code generation tools, developers will begin to transform their processes with truly intelligent AI assistants.

We've compiled a list of the best laptops for programming.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Securely working with AI-generated code

Generative AI (GenAI) has had a huge impact on app development over the past two years, with tools such as ChatGPT and GitHub Copilot changing the face of the industry. The gains to be had are potentially enormous, helping developers to work faster and more efficiently. In fact, according to Snyk’s 2023 AI Code Security Report, 71.7% of respondents said that AI code suggestions were making them and their teams somewhat or much more productive.

Recognizing the security risks of AI-generated code

The use of GenAI can pose security challenges for those writing code, however. 56.4% of respondents in Snyk’s report say that insecure AI suggestions are common, but few have changed processes to improve AI security. A Stanford University research paper revealed that participants with access to an AI assistant are more likely to believe they’ve written secure code than those without. The risk of increased and misplaced confidence in the security of AI-assisted code heightens as development speed outpaces traditional security practices.

Worse still, developers using AI assistants have been shown to produce code with potential flaws that are hard to detect without adequate safeguards. Stanford’s research found that not only are AI-assisted developers more likely to produce less secure code, they’re likely to place higher trust in code generated by AI, assuming correctness and security without proper validation.

The dangers are compounded when AI tools draw from outdated or flawed data, potentially replicating known vulnerabilities. With the rapid pace at which developers work, and the speed at which new vulnerabilities are discovered, the required security checks may be overlooked or delayed, creating a window of exposure for cyber risks.

Applying the right policies and practices

Generative AI shouldn’t be overlooked or avoided, however. After all, the productivity benefits are clear, and risks can be mitigated by comprehensive policies and tooling that guide the secure and effective use of AI-generated code. Businesses should treat GenAI as they would a junior developer, for example, and implement continuous code reviews. Just as junior coders require constant review and mentorship, AI-generated code requires rigorous checks before signing off.

The shift-left approach, which focuses on security testing earlier in the software development lifecycle, is also key. Organizations can’t afford for security checks to be relegated to later stages of development as this causes costly rework and delays. Real-time security testing should be integrated directly into the developer’s workflow. Modern security tools and platforms enable developers to identify vulnerabilities immediately as both human and AI-generated code is produced, enabling remediation before flaws become embedded in the production application.

Education is also essential for the adoption of secure AI code assistance. Developers need to understand the limitations of AI models, including their inability to reason and their reliance on historical data that may not account for the latest vulnerabilities. Developers should actively scrutinize AI-generated suggestions to ensure code is both secure and relevant.

While GenAI itself can introduce risks, it can also help mitigate them when paired with security-focused tools. AI-powered security platforms can act as a security companion for AI-generated code. These tools analyze code for vulnerabilities in real-time, flagging issues such as insecure coding practices and outdated dependencies.

By promoting a collaborative DevSecOps culture where development, security and operations are combined within project management workflows and automated tools, organizations can align functions and ensure security becomes an enabler rather than an obstacle to innovation. Security champions within dev teams can play a vital role in advocating for best practices, reinforcing the importance of secure coding without slowing down the pace of innovation.

Tools and processes to safeguard AI-generated code

AI-generated code should always be used with appropriate safeguards to minimize security risks. The adoption of secure practices can go a long way, but organizations will also benefit from integrating security platforms and tools alongside them.

Developers using GenAI should run code through security tools that integrate with popular development environments like IDEs, Git and CI/CD pipelines. This ensures that security checks run in the background as work progresses, complementing and empowering the shift-left mentality. Real-time feedback loops enable developers to fix vulnerabilities immediately, and security tools can monitor AI-generated code continuously, performing static analysis and highlighting security issues as the code evolves.
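
To illustrate the Git end of that integration, here is a sketch of a pre-commit hook in Python that scans staged files before they can land. The security-scanner command is a placeholder we made up; substitute whatever CLI your security platform actually provides:

#!/usr/bin/env python3
import subprocess
import sys

def staged_python_files() -> list[str]:
    # List files that are staged for commit (added, copied or modified).
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main() -> int:
    files = staged_python_files()
    if not files:
        return 0  # nothing to scan
    # Placeholder command: swap in your real scanner's CLI here.
    result = subprocess.run(["security-scanner", *files])
    if result.returncode != 0:
        print("Security scan failed; fix the findings before committing.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())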

Organizations can also save time and reduce risks by leveraging tools that not only flag vulnerabilities but also suggest or automatically apply fixes. The best tools offer one-click remediation capabilities, enabling developers to secure their code rapidly without needing to rewrite large sections.

Embracing a holistic security approach

As GenAI continues to gain traction in software development, the need for robust security practices becomes all the more pressing. GenAI tools like ChatGPT and GitHub Copilot can dramatically improve development speed and efficiency, but they should never be assumed to generate secure code out of the box.

Instead, organizations should combine the power of GenAI with comprehensive security frameworks to maximize the benefits of AI-generated code while minimizing associated risks. Ultimately, success requires a balance between AI-driven innovation and stringent security protocols, with the right policies, practices and tools empowering developers and organizations to use AI responsibly while avoiding the pitfalls of insecure software.

We've featured the best laptops for programming.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Data Privacy Week 2025 has begun – check out our latest expert advice on how to stay safe online

You have the power to take charge of your data. This is the theme of this year's Data Privacy Week, an annual event that aims to raise awareness about online privacy.

Data Privacy Week 2025, which takes place between January 27 and 31, is all about helping you take control of your data—whether that data is the websites you visit, the apps you use, or any other identifying information you may willingly (or not) share when using the internet.

Here at TechRadar, we proudly advocate online privacy and want to help you regain agency over your online data. Over the week, we'll be giving you everything you need to support your journey. This page will be home to all of our latest advice, experiences, and privacy tool recommendations. Check back regularly for more from Data Privacy Week 2025 on TechRadar.

The best cheap VPN: Surfshark
The first rule of taking back control over your data is minimizing the information you share online in the first place. Surfshark is one of the best VPN tools on the market for doing exactly that. For the equivalent of as little as $2.19 a month, you'll get premium privacy protection across unlimited devices. Take Surfshark for a test drive with its 30-day money-back guarantee.

▶ Read more in our full Surfshark review

Data Privacy Week: the privacy risks of being online

Huge US data broker hack compromises privacy for millions worldwide
The Gravy Analytics hack, a breach of a US location data broker that holds data from millions of iPhone and Android users worldwide, was another reminder of the great dangers of data collection.

Is 10,000 steps a day worth your personal data? How 80% of fitness apps are selling your privacy
Researchers found alarming data collection rates among today's top fitness apps. Strava and Fitbit came out as the most data-hungry, collecting 84% of all potential data points.

Data Privacy Week: how to take your privacy back

How to clean up your digital footprint: 3 privacy-boosting tips
Minimizing your online data trail is key to protecting your digital privacy. Here are three easy steps to help you do that today.

How to protect yourself and your data online
While you can’t become 100% invisible online unless you’ve never logged into a website before, you can still get very close to removing your information from the internet. Here's how.

Data Privacy Week: all the tools you need to know

What are the benefits of using a VPN in 2025?
Whether you're new to VPNs or a seasoned pro, there's always more to learn about the benefits of virtual private networks. Here's everything a VPN can help you with this year and going forward.

Best secure VPN 2025
Looking to secure yourself online without any doubts? Here are our top-ranking secure VPNs you can get your hands on this year. We've used our proprietary testing and expert experiences to ensure you're not left wondering if your VPN will do the job.




AI’s role in revolutionizing compliance training

Policies and procedures are the backbone of any organization. Too often, however, they don’t stick with employees in critical moments. Now, innovations built on artificial intelligence (AI) are poised to change that for the better. Here's how.

Despite occasional red tape, policies and procedures are essential for maintaining staff and customer safety and minimizing liability. When employees fail to follow them, the stakes can be high: hefty fines and sometimes legal or reputational damage.

But many businesses still treat compliance training as a one-size-fits-all "tick-box" exercise. Rather than promoting real understanding, the focus remains on running uninspired, generic programs to tick regulatory boxes. This not only disengages employees but heightens risk – and misses the training’s true purpose.

If organizations want to succeed, it’s time to leave tick-box compliance behind and embrace meaningful, modern training strategies that empower the workforce.

Why ‘Tick-Box’ Compliance Is Failing

Tick-box compliance exercises simply don’t work as well as employers want them to. Too often they merely demonstrate that a company has completed its mandated obligation – and miss opportunities to engage and promote true understanding. As a result, training is often reduced to a dull formality that fails to leave a lasting impact, leading to predictable and costly problems later.

A tragic example of the failure of this approach can be seen in a 2022 incident at a Costa Coffee store in Barking. Despite having received tick-box-style allergy training, an employee serving a 13-year-old girl with a severe dairy allergy failed to follow mandated procedures. The result was the child’s death from anaphylactic shock. This devastating incident is a stark reminder of the human cost of inadequate training. And such lapses are reported in numerous industries.

At the heart of the problem is the disconnect between training and employees’ day-to-day working lives. Today, 73% of employees admit to lacking the knowledge needed to follow policies, with confusion about the rules being the leading cause of regulatory violations. HR leaders also share this concern: 68% view compliance training as a major challenge, and 66% acknowledge it’s often an afterthought. The outcome is poor compliance, elevating risks for both organizations and their customers.

Time for a Training Refresh

Employees need training that connects with their daily responsibilities and keeps them engaged. The outdated one-size-fits-all approach – relying on generic materials and occasional sessions – simply doesn’t cut it. Nor do newsletters and email updates as sole communicators of policies and procedures. These delivery methods rarely drive meaningful behavioral change.

Personalized, interactive training tailored to specific roles is far more effective. When employees see how compliance relates directly to their work and understand the reasons behind it, they’re more likely to embrace it. Regular updates and follow-ups can then reinforce these lessons, embedding compliance behaviors over time.

Still, fewer than half of companies conduct online training, and even fewer use explainer videos or interactive sessions. And gamified experiences, which significantly boost engagement and retention, are used by just 26% of organizations. The hesitation to adopt these tailored approaches often stems from cost and logistical concerns, especially for large companies with thousands of employees. As a result, many stick to more affordable, generic methods – despite clear evidence of their shortcomings.

Smarter Compliance Training

Compliance training is undergoing a transformation, thanks to advancements in artificial intelligence (AI). AI now eliminates many of the historical economic and logistical barriers to personalized training – delivering role-specific, cost-effective solutions tailored to individual needs. Scalable and accessible, this technology empowers organizations to offer personalized training that truly resonates with employees – whether they have two members of staff or thousands.

One key advantage is adaptability. AI-driven training platforms continuously evaluate and refine modules to help promote maximum effectiveness. Just-in-time learning delivers content when employees need it most – right before crucial tasks or decisions. By incorporating interactive elements, quizzes, and gamification, AI can also boost engagement and retention, making learning not just effective but enjoyable.

The benefits are undeniable: 97% of HR and business leaders believe better engagement in policy and procedure training would increase compliance and reduce company risk. AI brings this closer to reality, helping organizations leave behind tick-box training for modern, impactful solutions.

End of an Era

Tick-box compliance has run its course. It is ineffective, disengaging, and in the worst cases can be dangerous. It's time for businesses to prioritize meaningful, modern training methods to reduce risks, protect employees, and create safer workplaces.

AI technology offers businesses an unprecedented opportunity. By making personalized, engaging learning scalable and cost-effective, AI learning tools empower businesses to deliver training that genuinely supports compliance and safety.

Investing in modern compliance training is not just a regulatory necessity – it’s vital for business success. By doing so, businesses can foster a culture of greater safety, productivity, and understanding, creating a better future for employees and customers alike.

We've listed the best HR outsourcing services.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Is it a water bottle? A pepper spray? A battery charger? No, it's a portable SSD designed like a hand grenade, courtesy of Lenovo


  • Savior Tactical Mobile SSD is a marketing product designed to support a Chinese blockbuster war movie
  • But it's not the brightest of product designs, as it might cause chaos and confusion
  • As one colleague puts it, it is likely to be an explosive success

If you've ever looked at your external hard drive and thought, "I wish someone would make one that looked like an explosive device," you'll be pleased to know Lenovo has had the same idea. The company’s latest creation, which was put up on its crowdfunding page, is a grenade-inspired external SSD - yes, seriously.

Designed as a marketing tie-in for the Chinese blockbuster war movie Operation Dragon (also known as Operation Leviathan or Operation Hadal), it’s called the "Savior Tactical Mobile SSD" and has the tagline "Officially authorized hardcore aluminum alloy grenade shape."

While its tactical appearance might make a statement, it’s also likely to spark alarm in places like coffee shops, airports, or any public location where a grenade-shaped gadget might not scream "harmless data storage."

Putting the pin back in

Lenovo’s latest marketing stunt boasts a rugged aluminum alloy design, complete with USB 3.2 support.

Details on the device’s specifications remain somewhat scarce, but Tom's Hardware suggests, based on Lenovo’s existing Legion (or Savior in China) SSD lineup, it will feature 1TB of storage, a USB Type-C interface, and deliver transfer speeds of up to 1,050MB/s.

Sadly, we may never know for sure. While it’s still listed on Lenovo’s crowdfunding page, priced at ¥599 (approximately $82), and has achieved 105% funding from 314 backers with weeks of the campaign remaining, it seems to no longer be available.

If you click the button to back it, a page appears stating that the product “has gone on vacation to Mars” and advising you to “take a look at something else!” There’s no word on why it’s been pulled or if it will return, but it’s likely that Lenovo simply had a rethink, and decided that pulling out what looks like a grenade in a bustling café to back up your files likely won’t end well.

A colleague described the SSD as likely to be an explosive success, but it seems that Lenovo has ultimately decided not to pull the pin on its Savior Tactical Mobile SSD - and we’re not sure whether to be disappointed or relieved.


Presumed Innocent season 2: everything we know so far about the Apple TV Plus show's return

Presumed Innocent 2: key information

- The second season was confirmed by Apple TV Plus on July 12

- Jake Gyllenhaal will return as executive producer, but he’s not been confirmed to be starring in it again as Rusty Sabich

- Peter Sarsgaard is unlikely to be returning; he says he’s “not really that interested in sequels” and describes himself as “a one-season person”

- Season two “will unfold around a suspenseful, brand-new case”, although it might take inspiration from the two Scott Turow follow-up novels, The Burden of Proof and Innocent, or his new novel, Presumed Guilty, which comes out in January 2025

- Executive producer J.J. Abrams – who is also onboard for season 2 – said: “We’re very excited about the possibilities we are discussing”

Such was the success of Presumed Innocent, Apple TV Plus’ dark and twisting adaptation of the Scott Turow book of the same name, that the first season hadn’t even ended before it landed a recommission from the streamer.

And while viewers had yet to find out whether Rusty Sabich (Jake Gyllenhaal) did or didn’t murder his mistress, the announcement on July 12 was cheering news for fans of the gripping crime procedural drama.

Now that the story has officially wrapped up – and Forbes claims it’s been the streamer’s most-watched drama of all time – let’s look forward to the second season. This article will explain everything you need to know, from what to expect from the plot (including which Turow book it might be based on) and new cast lists to when it’s likely to be out and, when it drops, the trailer.

Presumed Innocent 2: does it have a release date?


Will season two see Rusty (Gyllenhaal) back in court? (Image credit: Apple TV+)

With the first season of the drama only finishing at the end of July 2024, we wouldn’t expect to see anything until the end of 2025 at the very earliest.

Presumed Innocent 2: is there a trailer yet?

Again, it’s not going to take a district attorney to work out that a trailer won’t be expected until a few weeks before the launch of season two.

Presumed Innocent 2: has a cast been confirmed?

Could Peter Sarsgaard (pictured right) return in Presumed Innocent season 2? (Image credit: Apple TV Plus)

Here’s where things get interesting. Despite Gyllenhaal playing the lead in the first season, it’s not yet been confirmed whether he’ll star in the follow-up, although this may change. He is, however, staying on as executive producer, alongside J.J. Abrams, Rachel Rusch Rich, Dustin Thomason, Matthew Tinker, David E. Kelley and Scott Turow.

One person who may not be returning is Gyllenhaal’s real-life brother-in-law, Peter Sarsgaard, who played Tommy Molto. He told IndieWire in August 2024: “I’m not really that interested in sequels. I think I’ve only ever done one season of anything… I think I’m a one-season person.”

Which would be a shame, as the director Greg Yaitanes told Variety that being brothers-in-law gave the actors a special connection on set: “What it did provide was that there was a safety and trust and love there, which I think is important to go as far as they did — and into nuance as much as they did.”

J.J. Abrams added to Deadline: “I will say that watching Peter Sarsgaard bring that character to life was such a joy. He’s such a remarkable actor.”

Presumed Innocent 2: what do we know about the plot?

Noma Dumezweni in Presumed Innocent (Image credit: Apple TV Plus)

Apple TV Plus' official – but very vague – synopsis is that season two “will unfold around a suspenseful, brand-new case”.

Even one of the exec producers, J.J. Abrams, said the team didn’t factor in seeding storylines for a second season while making the original series, telling Deadline in July 2024: “We discussed the possibility of a second season during production, but then Apple brought it up to us in post. Nothing was ever shot to set up a second season. Our focus was telling the story of Carolyn’s murder and Rusty’s trial, and wrapping that up at the end of the first season.”

He added: “It’s too early to talk about what might happen in season 2, but we’re very excited about the possibilities we are discussing.”

We know that Turow wrote two follow-up novels to Presumed Innocent: The Burden of Proof (1990), which follows Sandy Stern after the events of the first book, and Innocent (2010), which sees Sabich having (another!) affair and standing accused of murdering his wife.

Then, in August 2024, it was announced that Turow would be releasing another book from his hit literary multiverse, Presumed Guilty, which will hit bookshops on January 14, 2025. The story returns to Rusty Sabich, now a retired judge, as he steps back into the courtroom one last time in hopes of keeping his partner’s son out of jail when the young man becomes a suspect in a missing-person case.

But we’re yet to find out whether any of these will form the basis for the next season, or whether the writers will be given full creative licence to go off the literary piste.


For more Apple TV Plus-related coverage, read our guides on Severance season 2, Ted Lasso season 4, and Slow Horses season 5.


