
Got the Sony WH-1000XM4? Here are 3 reasons I’d upgrade to the Sony WH-1000XM6 after testing them side-by-side

The Sony WH-1000XM6 are here, and it’s safe to say that we’re big fans of Sony’s new flagship cans, awarding them our Recommended badge in our Sony WH-1000XM6 review – if you're buying new, they're clearly some of the best headphones you can buy right now. But what if you already own a pair of the legendary Sony WH-1000XM4? Is it really worth the upgrade considering the gap in price between each model?

Well, I’ll give it to you straight: yes, yes it is. I’ve spent weeks using the brand new Sony WH-1000XM6, during which time I’ve been able to test them side-by-side with the Sony WH-1000XM4. And Sony really has got it spot on this time, levelling up every part of the package without making any design decisions that feel like a step back (*ahem* Sony WH-1000XM5, looking at you).

The improvements are clear and make a big difference throughout the Sony WH-1000XM6, but I’ve selected the three most impactful factors that may convince you XM4 owners out there to upgrade. So, let’s dive straight into things.

Sony WH-1000XM6 and Sony WH-1000XM4, both leaning against a cylindrical post (Image credit: Future)

1. Sony has taken noise cancelling to new heights

It’s undeniable. These are the best Sony headphones ever for noise cancelling. When I pitted the XM6 against the XM4 in an ANC test, it wasn’t even a contest. And that says a lot, given that the XM4 still put up a strong performance as far as noise cancellation is concerned.

Look, the XM4 do well to dispel low-end sounds, and are great if you want to dull harsh noise. When using them, I rarely found it difficult to remain focused on my favorite tracks or TV shows. But the XM6 are a real cut above.

It doesn’t matter where I am or what I’m doing, these headphones almost always deliver silence, or at the least, near-silence. In busy, traffic-filled areas, I was totally detached from the world around me. When vacuuming my apartment, disruptive whirring sounds were utterly crushed. And on public transport, it was as if I was traveling alone.

The XM6 are kitted out to supply this class-leading ANC. That’s thanks to a new, drastically stronger QN3 HD noise-cancelling processor that channels the input of 12 microphones to cancel out ambient sound.

The XM4, meanwhile, have seven fewer mics and a significantly weaker processor. Sure, they had wonderful ANC at launch, but they’ve since been outclassed by a wide array of rivals – the XM6, of course, but also Sony’s own XM5 and the Bose QuietComfort Ultra Headphones, to name just a couple. We rate these as the best noise cancelling headphones when it comes to pure noise-stopping power, and you absolutely will hear the difference if you choose to upgrade.

The Sony WH-1000XM6's class-leading ANC provides a true escape from the outside world (Image credit: Future)

2. You get the sleekness of the XM5 with the XM4’s foldability

As I mentioned in my Sony WH-1000XM6 review, these headphones marry the very best design elements of their two predecessors to perfection. I’m sure a lot of XM4 owners love the foldability of their cans, which is ideal for throwing them in a bag when you’re on the go.

And I’m also sure that Sony is under no illusions that removing that feature from the XM5 model put off a lot of XM4 owners who might've considered an upgrade.

Thankfully for those people, Sony has listened to its critics and brought folding back from the dead – and improved it. The Sony WH-1000XM6 use fortified metal hinges that are more durable and longer-lasting than those on the XM4.

That means you’ll have less to worry about when it comes to breakage or damage over the coming years. On top of that, there’s a new magnetic case for the folded-up XM6 – in my opinion, more practical than the XM4’s zip-up one.

But not only do the XM6 fold, they look pretty cool while doing so. Sony’s new cans have largely maintained the slimmer, seamless, rounded design of the XM5 – which already had a more premium look than the XM4 in my view.

The headband is greatly improved over the XM4’s, with a less plasticky appearance, a smoother feel, and luxurious comfort thanks to its wider profile. There’s also a neat paper-like matte finish to the XM6’s exterior, which looks extremely clean. They feel like a real physical upgrade.

Foldability is back and better than ever (Image credit: Future)

3. Upgraded audio, forged alongside leading mastering engineers

Let’s conclude by discussing what matters most on any pair of the best wireless headphones: sound quality. When we reviewed the Sony WH-1000XM4, we lauded their clean, expressive sound, punchy bass and impressive attention to detail.

Their 40mm dynamic drivers – similar to those used in the Sony WH-1000XM3 before them – are more than capable of handling heavy beats or delicate vocal ballads, with overall audio quality still satisfying me half a decade after launch.

But, as you’d hope, the Sony WH-1000XM6 do all of this and then some. In my comparison testing, I found the XM6 produced a more balanced sound right out of the box. Sony’s new cans were developed alongside a number of renowned mastering engineers, and perhaps unsurprisingly, that means there’s a more even sound across all frequencies.

The end result is a combination of pumping yet disciplined bass, rich mids, and energetic highs. That new and improved sonic cocktail really makes the XM6 worth the step up in my view.

There’s also a ‘noise shaper’ in the XM6, which enhances digital-to-analog conversion and removes distortion from sudden sound changes, capping off a fantastically controlled listening experience.
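
Sony doesn’t publish the internals of its noise shaper, but the underlying idea is a staple of digital audio: requantize each sample with an error-feedback loop so the quantization noise gets pushed toward high frequencies, where it’s easier to filter and harder to hear. Here’s a minimal, generic sketch of first-order noise shaping in Python; it illustrates the general technique, not Sony’s implementation, and the bit depth and test tone are arbitrary.

```python
import numpy as np

def noise_shape(samples, bits=8):
    """Requantize a signal with first-order error feedback, pushing
    quantization noise toward high frequencies where it is less audible."""
    step = 2.0 / (2 ** bits)        # quantizer step for signals in [-1, 1]
    error = 0.0                     # quantization error from the previous sample
    out = np.empty_like(samples)
    for i, x in enumerate(samples):
        target = x - error          # subtract the previous error (feedback)
        quantized = np.round(target / step) * step
        error = quantized - target  # error to feed back on the next sample
        out[i] = quantized
    return out

# A 1kHz tone at 48kHz: after noise shaping, the in-band (audible) error is
# smaller than with plain rounding, because the error energy is shifted upward.
t = np.arange(48_000) / 48_000
tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)
shaped = noise_shape(tone)
```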

Another crucial improvement over the XM4 is the XM6’s more nuanced, wider soundstage. Every instrumental element is given plenty of room to breathe, forging open, layered, and hypnotic listening experiences. Yes, the XM6 really are great at immersing you in music – and in movies too.

That’s thanks to a new feature called 360 Reality Audio Upmix. Just flick Cinema mode on in the Sony Sound Connect app, and the headphones will convert a basic stereo signal into a more expansive format, helping to create a three-dimensional spatial impression.

Finally, you’ve still got all the great sound-related features from the XM4. Whether that’s DSEE Extreme upscaling for lower-quality music files, LDAC for ‘hi-res’ Bluetooth listening, or EQ adjustment, you’ll be in for an absolute treat.

The XM6 do still have an advantage here, with a 10-band equalizer giving you elevated levels of control, and that improved ANC I mentioned will keep you even more engrossed in the music.

You get the picture – the XM6 really do have all bases covered, and you will immediately feel a clear difference moving from the old model to the new.

The WH-1000XM6 have all the tech required for a premium listening experience (Image credit: Future)


ICYMI: the 8 biggest tech stories of the week, from Google's new AI video magic to WhatsApp on the iPad

The rate of progress in the tech world has shown no signs of slowing down over the last seven days. Whether it's advances in the capabilities of AI video generators or a long-serving messaging app finally appearing on Apple's tablets, it's been quite a week.

As good as we try to make our journalism here on TechRadar, we know that not everyone has time to sit down and digest every story we put up across the week – so we present this In Case You Missed It recap to get you caught up.

We'll be back with another ICYMI for you at the same time next week, but for now let's dive into some of the stories that have been causing the biggest ripples in the tech pond over the last few days – there's a lot to get through, and a lot of topics to cover.

If you need more reading material, check out the best new movies and shows to stream this weekend.

8. Garmin’s Whoop-style ‘sleep band’ edged closer to reality

Could another Garmin product be on the way? (Image credit: Mike Sawh)

Garmin is already one of the biggest and most well-respected names in fitness wearables, and it seems a brand-new device is on the way: well-placed sources say they've seen signs of a screenless, sleep-tracking band that Garmin is planning to introduce soon.

It sounds a bit like a Whoop band, from the few details that have emerged so far, which would undoubtedly make it a more comfortable option for wearing in bed than a chunky smartwatch. As yet, however, we haven't heard anything official from Garmin.

7. A streaming login leak got us updating our Netflix passwords

You might want to change your Netflix password (Image credit: Shutterstock)

If you're signed up for a Netflix account, be sure to reset your password at your earliest opportunity, as millions of login credentials have been leaked online. It's not just Netflix, either: accounts across Prime Video, Disney+, and other services are also affected.

The silver lining is that financial information related to these accounts seems to be safe, but there's no room for complacency, especially if you're using your streaming logins for other accounts as well.

It's best to assume you've been exposed and change your details.

6. Anker gave us earbuds that double as a phone battery pack

Meet the Anker Soundcore P41i (Image credit: Soundcore)

We're always keen to see tech that's a little bit different here at TechRadar, and that's the case with the Anker Soundcore P41i wireless earbuds. These little buds last up to 12 hours, which rises to a huge 192 hours if you include the charging case.

That's because the compact charging case doubles up as a general-purpose power bank that'll charge your phone too, if needed – it features a 3,000mAh capacity battery inside, so it may mean there's one less gadget or charging plug you need to carry around with you.

5. Sony handed indie filmmakers a new compact video camera

The new Sony FX2 camera (Image credit: Sony)

The Sony FX2 video camera was officially announced this week, bringing with it a tiltable EVF and a 33MP full-frame sensor. It's capable of filming in 4K at up to 60fps, and it will be available from July 2025, priced at $2,700 / £2,700 / AU$5,299 for the body only.

We still need to get our hands on the Sony FX2 and put it through some tests, but from what we can see, it looks ideal for anyone making movies on a small, low-budget scale. There are some limitations, though, including a lack of 32-bit float audio recording.

4. We tested Google’s mind-blowing AI video maker

Veo 3 can create all kinds of weird and wonderful clips (Image credit: Future)

AI video making has made a huge step forward with the arrival of the Veo 3 model from Google. Clips made by Veo 3 have been flooding onto the web and across social media, and it's now just about impossible to tell what's real and what's fake with these videos.

We've been able to run a few prompts through the Google Veo 3 engine, creating clips of dinosaurs painting and dramatic set pieces on the surface of Mars. It can take some work to get a result you'll be happy with, though, and we've also got some Veo 3 tips to share.

3. Samsung launched the One UI 8 beta for early adopters

The Galaxy S25 Ultra is one of the phones first in line for One UI 8 (Image credit: Future / Lance Ulanoff)

Samsung's next big update is One UI 8, based on Android 16, and you can test it out now if you have a Galaxy S25 phone and live in the US, the UK, Germany, or South Korea. Find out how you can sign up now, and the headline features you can expect from the update.

More upgrades and tweaks will no doubt be added as the beta progresses. Samsung also told us that One UI 8 will launch in full in the coming months, alongside some brand-new foldables, which we expect to be the Galaxy Z Fold 7 and the Galaxy Z Flip 7.

2. The first Dolby Atmos FlexConnect speaker landed

The TCL Z100 offers some clever surround sound tricks (Image credit: Dolby / TCL)

The TCL Z100 has the distinction of being the first speaker to be announced that works with Dolby Atmos FlexConnect. That's the audio tech that can create dynamic surround sound in a room, no matter how many speakers you've got or how they're arranged.

Up to four TCL Z100 speakers can be combined in a single configuration, and while we've yet to hear pricing and release date details on this unit, it's great to see the dynamic technology making its way into speakers, two years after it was announced.

1. WhatsApp finally got an iPad app

WhatsApp, now on the iPad (Image credit: Shutterstock)

It's been a long time coming, but WhatsApp is finally available on the Apple iPad – so your chats can spread themselves out across a bigger screen. As well as sending and receiving messages, you'll be able to share your screen and video chat with up to 32 people at once.

You'll be able to sync conversations across from your other devices in just a few seconds, and WhatsApp promises there's more to come with WhatsApp on Apple's tablets. Is it too much to ask Meta to get around to making an iPad app for Instagram next?




Data centers are at the heart of the AI revolution and here's how they are changing

As demand for AI and cloud computing soars, pundits are suggesting that the world is teetering on the edge of a potential data center crunch, where capacity can't keep up with the digital load. Surging demand has sent vacancy rates plummeting: in Northern Virginia, the world's largest data center market, for example, vacancy rates have fallen below 1%.

Echoing past fears of "peak oil" and "peak food," the spotlight now turns to "peak data." But rather than stall, the industry is evolving—adopting modular builds, renewable energy, and AI-optimized systems to redefine how tomorrow’s data centers will power an increasingly digital world.

1. Shift Toward Modular and Edge Data Centers

Future data centers will increasingly move away from massive centralized facilities alone, embracing smaller, modular, and edge-based data centers. The sector is already splitting into hyperscale data centers on one end and smaller, edge-oriented facilities on the other.

Smaller, modular, and edge data centers can be built in a few months. Unlike huge hyperscale campuses, whose facilities often cover millions of square feet, these smaller data centers are sometimes built into repurposed buildings such as abandoned shopping malls, empty office towers, and disused factories, helping to redevelop former industrial brownfield areas.

Leaner centers can be rapidly deployed, located closer to end users for reduced latency, and tailored to specific workloads such as autonomous vehicles and AR.

2. Integration with Renewable and On-Site Power Sources

To address energy demands and grid constraints, future data centers will increasingly be co-located with power generation facilities, such as nuclear or renewable plants. This reduces reliance on strained grid infrastructure and improves energy stability. Some companies are investing in nuclear power, which provides massive, always-on electricity that is also free of carbon emissions. Modular reactors are being considered to overcome grid bottlenecks, long wait times for power delivery, and local utility limits.

Similarly, they will increasingly be built in areas where the climate reduces operational strain. Lower cooling costs and access to water enable the use of energy-efficient liquid-cooling systems instead of air-cooling. Expect more data centers to pop up in places like Scandinavia and the Pacific Northwest.

3. Operational Optimization

Artificial intelligence will play a major role in managing and optimizing data center operations, particularly for cooling and energy use. For instance, reinforcement learning algorithms are being used to cut energy use by optimizing cooling systems, achieving up to 21% energy savings.
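
The article doesn't detail the algorithms involved, and production systems (such as DeepMind's work on Google's data centers) use far richer models, but a minimal sketch conveys the learning loop: try a cooling setpoint, observe the energy cost, and shift toward the cheapest safe option. Everything below is hypothetical, the plant model especially; it's meant only to illustrate the shape of such an optimizer.

```python
import random

SETPOINTS = [18, 20, 22, 24]   # candidate cold-aisle temperatures (Celsius)

def energy_cost(setpoint, it_load_kw):
    """Toy plant model: warmer setpoints need less chiller energy, but
    running too warm incurs a throttling penalty. Numbers are invented."""
    chiller = it_load_kw * (0.5 - 0.04 * (setpoint - 18))
    throttle_penalty = max(0.0, setpoint - 23) * it_load_kw * 0.2
    return chiller + throttle_penalty + random.gauss(0, 0.5)  # sensor noise

# Epsilon-greedy learning loop: estimate the average cost of each setpoint
# and drift toward the cheapest one while still exploring occasionally.
avg_cost = {s: 0.0 for s in SETPOINTS}
visits = {s: 0 for s in SETPOINTS}
for _ in range(10_000):
    load = random.uniform(80, 120)  # IT load varies from interval to interval
    if random.random() < 0.1:       # explore 10% of the time
        s = random.choice(SETPOINTS)
    else:                           # otherwise exploit the best estimate
        s = min(avg_cost, key=avg_cost.get)
    cost = energy_cost(s, load)
    visits[s] += 1
    avg_cost[s] += (cost - avg_cost[s]) / visits[s]  # incremental mean

print("learned setpoint:", min(avg_cost, key=avg_cost.get))
```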

Similarly, fixes like replacing legacy servers with more energy-efficient machines, featuring newer chips or better thermal design, can significantly expand compute capacity without requiring new premises.

4. Hardware Density and Efficiency Improvements

Instead of only building new facilities, future capacity will be expanded by refreshing hardware with newer, denser, and more energy-efficient servers. This allows for more compute power in the same footprint, enabling quick scaling to meet surges in demand, particularly for AI workloads. These power-hungry centers are also putting a strain on electricity grids.

Future data centers will leverage new solutions such as load shifting to optimize energy efficiency. Google is already partnering with PJM Interconnection, the largest electrical grid operator in North America, to leverage AI to automate tasks such as viability assessments of connection applications, thus enhancing grid efficiency.

Issues are typically due not to a lack of energy but to insufficient transmission capacity.

In addition, data centers fortunately tend to run well below full capacity, specifically to accommodate future growth. This headroom will prove useful as facilities absorb unexpected traffic spikes and rapid scaling needs without requiring new construction.

5. Geographically and Politically Responsive Siting

Future data center locations will be chosen based on climate efficiency, grid access, and political zoning policies, but also on the availability of an AI-skilled workforce. Data centers aren’t server rooms – they’re among the most complex IT infrastructure projects in existence, requiring seamless power, cooling, high-speed networking, and top-tier security.

Building them involves a wide range of experts, from engineers to logistics teams, coordinating everything from semiconductors to industrial HVAC systems. Data centers will thus drive up demand for engineers specializing in high-performance networking, thermal management, power redundancy, and advanced cooling.

Looking to the future

It’s clear that the recent surge in demand for infrastructure to power GPUs and high-performance computing is being driven primarily by AI. Training massive models like OpenAI’s GPT-4 or Google’s Gemini requires immense computational resources, consuming GPU cycles at an astonishing rate. These training runs often last weeks and involve thousands of specialized chips, drawing heavily on power and cooling infrastructure.

But the story doesn’t end there: even when a model is trained, running these models in real-time to generate responses, make predictions, or process user inputs (so-called AI inference) adds a new layer of energy demand. While not as intense as training, inference must happen at scale and with low latency, which means it’s placing a steady, ongoing load on cloud infrastructure.

However, here’s a nuance that’s frequently glossed over in much of the hype: AI workloads don’t scale in a straightforward, linear fashion. Doubling the number of GPUs or increasing the size of a model will not always lead to proportionally better results. Experience has shown that as models grow, performance gains may taper off or introduce new challenges, such as brittleness, hallucination, or the need for more careful fine-tuning.
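
A worked example makes the nonlinearity concrete. Empirical scaling studies often find that loss falls roughly as a power of compute; under any such curve, each doubling of compute buys a smaller absolute improvement than the last. The exponent below is purely illustrative, not a measured value for any real model.

```python
# Hypothetical power-law scaling: loss falls as compute ** -alpha.
# Real exponents vary by model family and dataset; 0.05 is illustrative only.
alpha = 0.05

def loss(compute):
    return compute ** -alpha

compute, previous = 1.0, loss(1.0)
for doubling in range(1, 6):
    compute *= 2
    current = loss(compute)
    print(f"doubling #{doubling}: loss {current:.4f}, "
          f"absolute improvement {previous - current:.4f}")
    previous = current
# Each doubling buys a smaller absolute improvement than the one before it,
# even though the spend (GPUs, energy, time) doubles every step.
```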

In short

The current AI boom is real, but it may not be boundless. Understanding the limitations of scale and the nonlinear nature of progress is crucial for policymakers, investors, and businesses alike as they plan for data center demand shaped by AI's growth.

The data center industry therefore stands at a pivotal crossroads. Far from buckling under the weight of AI tools and cloud-driven demand, it’s adapting at speed through smarter design, greener power, and more efficient hardware.

From modular builds in repurposed buildings to AI-optimized cooling systems and co-location with power plants, the future of data infrastructure will be leaner, more distributed, and strategically sited. As data becomes the world’s most valuable resource, the facilities that store, process, and protect it are becoming smarter, greener, and more essential than ever.

We list the best colocation providers.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Revolutionizing web hosting: how AI will reshape the industry in the next 12 months

Artificial Intelligence (AI)’s role in web hosting is set to redefine the tech industry over the next twelve months, but not in the way you might expect. Current AI web hosting tools are largely focused on front-end solutions like marketing features, but such advancements only scratch the surface of what AI can really offer the industry. These tools automate basic tasks and provide quick fixes, making them popular despite failing to address the real challenges.

The real potential of AI in web hosting lies in solving deeper, backend challenges that directly impact a website’s stability, performance and security. In this article, I’ll explore AI’s shift in web hosting from front-end features to backend solutions that drive real improvements in these areas.

Why AI in web hosting is missing the mark

The race is on to adopt AI – the problem is, in a bid to win, some companies have taken shortcuts to the finish line, adopting only surface-level integrations. Take an AI customer chatbot, for example. Yes, AI chatbots provide customers with instant service and can improve customer satisfaction, but they’re rendered useless if your website takes fifteen seconds to load.

According to Cloudways’ research, 60% of UK online shoppers will abandon a purchase if a website takes too long to load or has minor errors, with 27% of shoppers facing such problems in the last 12 months. Customers are facing real problems with slow loading times and complicated checkout processes.

By focusing only on flashy, front-end solutions, small and medium-sized businesses (SMBs) – which make up 99.8% of the UK business population – are missing a huge opportunity to improve the overall website experience that is fundamental to customers.

How AI can be used to solve real challenges for SMBs

The majority of SMBs don’t have dedicated website experts; websites are instead managed by a member of the team as a bolt-on to their day job. This setup works fine—until something goes wrong. When issues arise, troubleshooting can take days or even weeks for complex problems, potentially impacting sales. On the other hand, AI can proactively detect, diagnose, resolve, and even prevent website issues from occurring with minimal human intervention.

Modern hosting environments must navigate complex challenges like database optimization, traffic spikes, security threats and bot activity. By leveraging AI-driven analytics and machine learning models, hosting providers can predict potential failures, automatically allocate resources, and mitigate risks before they impact performance. Integrating AI into the process can enable predictive maintenance, reducing server downtime by 30% or more and enhancing the reliability of web hosting services.

AI can also be integrated into intelligent monitoring systems, proactively detecting and resolving technical issues before they impact users. By analysing real-time data, AI can predict server strain, automatically scale resources, and optimize database performance, all without manual intervention. This ensures websites remain fast and reliable, even during high-demand events like product launches or ticket sales. AI tackles the behind-the-scenes challenges customers don’t see, but businesses spend hours trying to fix.
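
Hosting providers don't disclose their exact models, and real systems weigh many more signals, but the core pattern described here, forecasting strain and scaling ahead of it, can be sketched in a few lines. The window size, thresholds, and naive trend forecast below are all hypothetical placeholders.

```python
from collections import deque

WINDOW = 12           # one hour of 5-minute CPU utilization samples
SCALE_UP_AT = 0.75    # forecast utilization that triggers adding capacity

history = deque(maxlen=WINDOW)

def forecast(samples):
    """Naive trend forecast: last reading plus the average recent slope."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    values = list(samples)
    slopes = [b - a for a, b in zip(values, values[1:])]
    return values[-1] + sum(slopes) / len(slopes)

def on_sample(cpu_utilization, instances):
    """Called every interval with mean CPU across the fleet."""
    history.append(cpu_utilization)
    predicted = forecast(history)
    if predicted > SCALE_UP_AT:
        instances += 1                        # scale up before the spike lands
    elif predicted < SCALE_UP_AT / 2 and instances > 1:
        instances -= 1                        # shed capacity once strain eases
    return instances

# Simulated ramp toward a product launch: capacity is added proactively.
instances = 2
for cpu in [0.40, 0.45, 0.52, 0.60, 0.68, 0.74, 0.81]:
    instances = on_sample(cpu, instances)
print("instances after ramp:", instances)
```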

These integrations aren’t flashy front-end solutions; they solve real business challenges and use AI to deliver a seamless, reliable experience to customers.

The human-AI partnership: the key to unlocking the future of web hosting

The future of successful web hosting will be shaped by the integration of AI and human expertise. In my opinion, AI should be used to enhance, not replace, human skills, with technical professionals still playing a key role in strategic decision-making and solving complex problems.

The human touch will always be essential for building relationships and understanding unique business needs.

With 80% of businesses in Europe expected to integrate AI-powered software by next year, SMBs will increasingly demand hosting solutions that simplify complexity through intelligent automation. The most successful businesses in the coming months will blend AI efficiency with human insight, allowing time and money to focus on growth and innovation while AI handles day-to-day operations. The industry is evolving from reactive problem-solving to predictive, proactive hosting management.

AI’s true potential in web hosting over the next twelve months lies in tackling fundamental technical challenges rather than just offering surface-level features. Businesses should focus on AI hosting capabilities that deliver measurable improvements in performance, security and reliability. The future of web hosting will be defined by providers who successfully blend AI automation with human expertise, ensuring both the technical side and customer relationships are prioritized.

Organizations should seek hosting partners who use AI to solve real business challenges, rather than simply following technology trends, ensuring they stay ahead in a competitive digital landscape.

We've compiled a list of the best website monitoring software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Costco is the latest US retailer to confirm Nintendo Switch 2 stock


  • A Nintendo Switch 2 bundle listing has appeared on the Costco website
  • It's the Mario Kart World bundle, with 12 months of Nintendo Switch Online + Expansion Pack subscription time
  • No price as of yet, but we expect it to be around $549.99

I've got some potentially great news if you're a Costco member and still on the hunt for a Nintendo Switch 2, as the retailer is expected to have launch day (June 5) stock.

The US retail giant now has a Nintendo Switch 2 bundle listed on its website. It includes the console itself, a digital copy of Mario Kart World, and a 12-month Nintendo Switch Online + Expansion Pack subscription.

Stock could obviously vary depending on your location, but the webpage states that the Switch 2 "is expected to be in stock and available for purchase soon" and urges shoppers to "check back again later."

There is, however, still no sign that Nintendo Switch 2 pre-orders will be available at Costco. Furthermore, the retailer is limiting purchases to one unit every seven days. Even so, it's realistic to expect Switch 2 stock to sell through entirely at Costco, given how difficult it has already been to secure a pre-order.

As for price, Costco hasn't disclosed this quite yet. But if I had to guess, given the addition of a 12-month NSO + Expansion Pack membership ($49.99 a year on its own) on top of the $499.99 Mario Kart World bundle, I'd expect this bundle to go on sale for around $549.99.

Likely not the most affordable bundle out there, then, but certainly one worth considering if you'd like a Switch 2 console on day one, and are running out of options elsewhere.


The original Dyson bladeless fan just got a long-awaited revamp, and it looks better than ever


  • Dyson Cool CF1 is an upgrade of the original Dyson Cool tabletop fan
  • It has the same powerful, blade-less design and Air Multiplier technology
  • It adds a Night mode, more setting options, and a useful LCD screen

The original Dyson Cool was the world's first blade-less fan, and the catalyst for many of the brand's air-care innovations that followed. Since its launch 16 years ago, the brand has released many more cooling, heating and air-purifying products with increasingly advanced features, but the Dyson Cool hasn't seen an upgrade – until now.

The next-gen model, called the Dyson Cool CF1, has the same powerful airflow and sleek, blade-less design, but adds some helpful features and usability tweaks to bring it in line with the rest of the best fans on the market. It's not as big or splashy as some of the brand's other launches, but it delivers exactly what I want from a tabletop fan.

Dyson Cool CF1 fan (Image credit: Dyson)

So what's new? Firstly, you have more customization options. The oscillation settings have been extended – you can now choose between 15, 40 and 70 degrees of oscillation (or none at all) – and you've got 10 fan speeds to play with.

Dyson has added a Night mode, which automatically dims the displays and adjusts fan speed for cooling that'll help you drop off, rather than being distracting. This is combined with a new sleep timer that shuts off the fan when you're (hopefully) happily away in the land of nod. There's also an LCD screen, which shows at a glance what mode you're in and which airflow setting you're using.


Elsewhere, there's still an energy-efficient brushless DC motor, and it uses Dyson's patented 'Air Multiplier' technology, which can apparently amplify the air around it by up to 13 times. As with all Dyson's fans, there are no blades, which means smooth and even airflow, and a design that's safer and far easier to keep clean.

There are no air purification (or heating, or humidifying) functions, as appear on other Dyson fans in the wider range. The Dyson Cool CF1 is focused entirely on efficient personal cooling.


"The original bladeless fan revolutionized the way we think about airflow, combining cutting-edge engineering with sleek, safe, and efficient design," says Logan Thomson, Dyson Design Engineer. "This latest iteration builds on those core benefits by introducing modern upgrades like intelligent features, including sleep mode, to meet the demands of today’s customers.”

The Dyson Cool CF1 goes on sale in the UK on May 28 priced at £249.99, at Dyson Demo stores or online at Dyson.co.uk. We're waiting for Dyson to confirm pricing and launch date information for the US and Australia.


Data sovereignty is now a strategic priority

Data sovereignty has rapidly become a critical consideration for organizations evaluating and selecting data center solutions.

At its core, data sovereignty is the principle that data is subject to the laws and governance structures of the country in which it is physically stored or collected. This principle is embedded deeply into two foundational legislative instruments: the Data Protection Act 2018 (DPA 2018) and the UK General Data Protection Regulation (UK GDPR).

While organizations have always been concerned about the safety and security of their information, the concept of sovereignty introduces an added layer of complexity. It is not just about protecting data from breaches, but about ensuring the correct jurisdictional authority over it at all times.

Mandatory standards

Both the DPA 2018 and the UK GDPR establish mandatory standards for how personal data must be handled, but they go beyond that. These laws define the standard of sovereignty and shape the processes surrounding the collection, storage, access, and processing of personal data. Consequently, the selection of a data center provider is no longer just a matter of performance metrics or operational efficiency.

Instead, it’s a decision heavily influenced by regulatory compliance and the ability of the provider to support the broader digital transformation goals of a business. Choosing the wrong partner could mean costly delays in projects, added legal scrutiny, and potential breaches of customer trust, making the decision-making process far more strategic than it has been in the past.

This consideration becomes especially important when organizations seek to harness the potential of emerging technologies, particularly artificial intelligence (AI) and machine learning (ML). These technologies are data-intensive and require vast amounts of computing power. They demand a digital infrastructure that can handle complex processing workloads in real time. UK-based high-performance data centers are emerging as essential to this transformation.

These facilities offer powerful computing capabilities combined with localized data handling, resulting in significantly reduced latency and faster processing speeds. For AI and ML, where split-second decision-making and continuous data training are essential, any delay or disruption can severely impact the effectiveness of models and applications.

Being able to process information securely and locally gives businesses a critical edge in fields that are becoming increasingly competitive and innovation-driven. By ensuring that data remains close to its point of origin, these centers support more agile, secure, and compliant technological innovation.

Cloud platforms

In parallel with this trend, UK-based private cloud computing platforms are gaining traction as a strategic enabler for organizations looking to maintain data sovereignty while remaining agile in a competitive digital environment. These platforms are built on networks of data centers that are not only physically located within the UK but also owned and operated by domestic entities.

This domestic control provides peace of mind, particularly when combined with access to secure partner ecosystems and direct, high-speed interconnections to public cloud providers. For organizations, this translates into more control, better predictability around data transfer costs, and simpler compliance with increasingly complex data protection regulations.

It also eliminates the uncertainties associated with cross-border legal disputes, particularly in a climate where international data transfer rules are under constant review and renegotiation. Businesses no longer have to wonder whether a change in global politics will suddenly make their infrastructure non-compliant or expose them to new liabilities.

Put simply, the ‘stick’ element of data sovereignty lies in the serious consequences for non-compliance. The UK’s Information Commissioner’s Office (ICO) has made it clear that failing to properly manage the transfer of personal data, particularly to jurisdictions outside the UK that do not have adequate data protection frameworks, can result in heavy penalties. These fines can reach up to £17.5 million or 4% of a company’s global annual turnover, whichever is greater.

These are not hypothetical threats; they are actively enforced, and they highlight the very real financial and reputational risks associated with poor data governance. The reputational damage alone can be devastating, especially in sectors where customer trust is fundamental. Companies that suffer breaches or compliance failures often see long-term declines in customer confidence, partner relationships, and market value, compounding the original financial penalties.

Data in transit

What many businesses may not fully realize is that these risks don’t just apply to where data is stored, but also to how it moves. Data in transit, as it migrates between servers, between data centers, or even across international boundaries, falls under the same stringent scrutiny. And with the UK’s upcoming Data Protection Bill set to introduce even tighter restrictions on data flows and increased responsibilities for data controllers and processors, the pressure to adopt robust sovereignty practices is only going to intensify.

This means that simply having strong cybersecurity policies is no longer enough. Organizations must now monitor and manage the full lifecycle of their data with far greater precision, including every transfer, replication, and access point.

However, the ‘carrot’ on offer is equally compelling and far more constructive. Organizations that invest in sovereignty-conscious infrastructure and best practices aren’t just ticking a compliance box; they are unlocking the ability to innovate more quickly and confidently. Keeping data processing geographically close to its source not only meets regulatory requirements but also reduces reliance on distant infrastructure that may be slower or less secure.

The result is improved performance, lower latency, reduced operational risk, and stronger overall resilience. These gains can be transformational for businesses, particularly those operating in sectors where agility, speed, and security are essential to competitiveness.

Additional assurance

In addition to these benefits, privacy-preserving computing (PPC) models are providing organizations with additional assurance. These models ensure that data remains within UK borders and is handled in environments specifically designed for high security, maximum uptime, and seamless interconnectivity.

The IT infrastructure behind these models is increasingly being viewed not just as a support system, but as a vital part of a company’s core value proposition. In sectors like finance, retail, and public services, where milliseconds can matter, localized and resilient infrastructure is no longer a luxury but a necessity for delivering outstanding user experiences and meeting rising customer expectations.

In this evolving landscape, data sovereignty is no longer just a compliance requirement or a legal consideration. It is becoming a strategic differentiator, an asset that enables businesses to manage risk more effectively, embrace innovation with confidence, and build a more robust, future-ready digital foundation.

As such, those making data center purchasing decisions must consider sovereignty not merely as a legal obligation, but as a pathway to enhanced performance, better control, and sustained competitive advantage.

We've compiled a list of the best data recovery software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Opera rethinks the role of the browser with Neon, the first AI agentic web browser that will do tasks for you


  • Opera Neon is a new fully agentic browser capable of performing tasks
  • The new premium browser is subscription-only and coming soon
  • You can join the waitlist today

Opera Neon is a new premium subscription web browser that can understand your commands in natural language thanks to AI while also performing a variety of tasks for you.

For instance, you could ask Opera Neon to produce a detailed report, make a website, or even code projects like games, all in the browser.

“We’re at a point where AI can fundamentally change the way we use the internet and perform all sorts of tasks in the browser. Opera Neon brings this to our users’ fingertips,” said Henrik Lexow, Senior AI Product Director at Opera.

“We see it as a collaborative platform to shape the next chapter of agentic browsing together with our community.”

Opera Neon keeps its complexity hidden by offering you a simple choice between Chat, Do and Make. (Image credit: Opera)

Fully agentic on the web

Of course, you can currently chat with AI in the standard Opera browser, which has access to Aria AI and ChatGPT in the sidebar, but Opera Neon is a fully agentic browser, which means you can ask it to perform tasks for you as well as chat or search with AI.

That could include filling out a form that appears in the website you’re viewing, making a hotel reservation, or even going shopping. Best of all, it does all this locally in the browser, without risking your privacy or security.

The AI agent inside Opera Neon has previously been showcased by Opera as Browser Operator. You can give it tasks with simple prompts like “Keep me updated on the latest breakthroughs in artificial intelligence,” and it will regularly collect and summarize the most relevant articles.

So, instead of wading through an endless news feed, you’d get just what matters to you the most, neatly packaged.

You can also chat with Opera Neon as if it were an AI chatbot, just like ChatGPT, and it can also search the web for you to find answers.

Opera Neon (Image credit: Opera)

Chat, Do and Make

Opera Neon boils its core functionality down to three main options: Chat, Do and Make.

Chat is the chatbot function. Here you can ask the AI contextual questions about the web page you are viewing and search the web.

Do is where Opera Neon can interact with the website you are viewing. We're talking about things like filling in forms, booking reservations and shopping. This is the technology we previously knew as Browser Operator.

Make is the truly new part of Opera Neon. Here you can ask the browser to make you something, and it will interpret what you mean, then go away and do it for you. Once you've tasked it with making something, you're free to go off and do something else.

Opera Neon looks like one of the most exciting uses of AI I’ve seen in a while. The prospect of asking the AI questions about the website you’re currently viewing and getting reliable answers back isn’t new, but the agentic qualities of the browser sound incredibly valuable.

Opera Neon isn’t out yet, but Opera says you can join the waitlist today. In the meantime, Opera has made a video explaining what an AI agent is.


Unifying communications supervision across channels

If you and your compliance team are jumping between separate systems just to track Slack messages, email threads, mobile chats and collaboration tools, you're not alone. The digital-first workplace has made communication faster, but supervision more fragmented and riskier than ever.

Regulators like the SEC, FINRA, and the CFTC have made it clear that all business-related communications must be captured, supervised, and auditable, no matter the channel. Yet, many organizations struggle with siloed monitoring approaches that slow down compliance efforts and leave vulnerabilities unchecked.

The answer is a single, unified view of employee communications in one system that captures everything - what I like to call a Single Pane of Glass approach.

What is a Single Pane of Glass?

A Single Pane of Glass approach consolidates communications data from multiple channels into a single, real-time dashboard. The term doesn’t just refer to all the information being visible in one place, but also to its transparency: nothing is hidden or obscured. Instead of managing supervision separately across email, chat, social media and mobile, compliance teams can monitor everything through one interface. This means a holistic view of all employee interactions, ensuring greater efficiency, clarity, and regulatory adherence.

This unified approach helps compliance teams:

1. Provide real-time monitoring across multiple channels.

2. Automatically flag noncompliant language and behavior across all sources.

3. Streamline audits and reporting for regulatory examinations.

4. Align compliance, IT, and risk teams to work together with a shared source of truth.

With a centralized system, teams can spot issues faster, reduce false positives and strengthen their response to potential violations.
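
As a concrete (and deliberately simplified) illustration of the idea, the sketch below normalizes messages from two hypothetical channel feeds into one record type, orders them into a single review stream, and flags terms a policy might care about. Field names and flagged phrases are invented for the example; a real platform would add archiving, case management, and far more sophisticated detection.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Message:
    """One normalized record, whatever channel the message came from."""
    channel: str
    sender: str
    sent_at: datetime
    body: str

# Adapters from per-channel payloads to the common record.
# The raw field names here are invented for the example.
def from_email(raw):
    return Message("email", raw["from"], raw["date"],
                   raw["subject"] + "\n" + raw["text"])

def from_slack(raw):
    ts = datetime.fromtimestamp(float(raw["ts"]), tz=timezone.utc)
    return Message("slack", raw["user"], ts, raw["text"])

FLAGGED_TERMS = ("guarantee", "off the record", "delete this")

def supervise(messages):
    """One ordered review stream: every channel lands in the same queue."""
    for m in sorted(messages, key=lambda m: m.sent_at):
        hits = [t for t in FLAGGED_TERMS if t in m.body.lower()]
        if hits:
            print(f"[{m.channel}] {m.sender}: flagged {hits}")

supervise([
    from_slack({"ts": "1716890000", "user": "trader1",
                "text": "let's keep this off the record"}),
    from_email({"from": "rep@example.com", "date": datetime.now(timezone.utc),
                "subject": "Returns", "text": "I can guarantee 12% returns"}),
])
```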

It’s also important to distinguish what a true Single Pane of Glass is not, and that’s a dashboard packed with sub-modules. Everything might be technically accessible, but if you’re forced to click around to find what matters, it's missing the point. Key insights get buried, messages get missed, and the whole value of a unified view falls apart.

Why Fragmented Monitoring Falls Short

Regulators have made it clear: failing to supervise digital communications properly will lead to fines. In recent years, financial institutions have faced millions in fines due to gaps in their monitoring capabilities, particularly around unauthorized messaging apps and personal devices.

Here’s what’s at stake:

Regulatory non-compliance: Disconnected systems make it harder to capture, search, and audit all relevant communications.

Operational inefficiencies: It’s not uncommon for compliance teams to spend up to 12 hours a week navigating between different monitoring systems, as reported by Smarsh. That kind of manual effort adds up—and pulls focus away from higher-value tasks.

Exposures to fines and reputational damage: Missed violations due to fragmented oversight can lead to financial penalties and unwanted attention.

Without a unified system, firms are left reacting to issues after they occur, often under intense scrutiny.

What to Look for in an Effective Unified Supervision Platform

Not all Single Pane of Glass solutions are created equal. To be effective, a platform needs to deliver more than just aggregation. It should be purpose-built for capturing complex communications channels, flexible enough to adapt, and easy to use across teams.

Key features to prioritize:

1. Comprehensive Channel Coverage – Email, instant messaging, social media, collaboration tools (Teams, Slack, Zoom), SMS, and more.

2. Automated Surveillance – Automate keyword tracking, sentiment analysis, and anomaly detection to proactively flag risks and regulatory violations.

3. Audit & Reporting Tools – Easy access to communication history in the event of regulatory requests and internal reviews.

4. Scalability & Integration – A solution that integrates with the existing compliance infrastructure and evolves with regulatory developments.

Choosing a platform where you can leverage these capabilities can improve efficiency, reduce costs, and enhance risk mitigation.

How to Start Implementing a Single Pane of Glass Strategy

Transitioning to a unified supervision approach requires careful planning, but it’s worthwhile: done right, the move delivers long-term value across compliance, risk, IT and overall business performance.

Here’s how firms can get started:

Assess Current Gaps – Identify where compliance monitoring is fragmented, incomplete and where risks exist.

Define Key Compliance Goals – Ensure alignment with relevant requirements, from the SEC to FINRA, the FCA, ASIC etc, depending on your location.

Select the Right Technology Partner – Look for platforms that integrate easily, offer strong customer support, and specialize in regulated industries.

Secure Cross-Functional Buy-In – Engage IT, compliance, and risk teams to ensure a smooth rollout and long-term adoption.

Monitor & Adapt – Continuously refine supervision policies as regulations and communications trends evolve.

A successful Single Pane of Glass strategy not only enhances compliance but also improves operational efficiency and agility in an increasingly complex regulatory environment.

The Future of Communications Supervision is Unified

With regulators tightening oversight on digital communications, compliance leaders must adopt a proactive approach. Now is the time to evaluate your current compliance framework and explore unified supervision solutions. A Single Pane of Glass approach gives compliance leaders a more efficient and scalable way to manage risk.

By consolidating communications oversight, compliance leaders can focus on strategic risk management rather than reactive firefighting, future-proofing their communications compliance programs and ensuring they’re always one step ahead of evolving regulations.

We've compiled a list of the best call center software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Another top employment website found exposing recruiter email addresses


  • A major Indian job site was leaking recruiter emails
  • The problem stemmed from a bug in the Naukri API
  • The hole was quickly plugged, but users should be aware of scams

One of the most popular and widely used job portals in India has reportedly been found leaking recruiter email addresses.

A security researcher named Lohith Gowda recently discovered a vulnerability in Naukri’s API for Android and iOS apps, which exposed the recruiters’ email addresses when they were viewing profiles of potential candidates.

Speaking to TechCrunch, Gowda explained what the dangers of this vulnerability were: “The exposed recruiter email IDs can be used for targeted phishing attacks, and recruiters may receive excessive unsolicited emails and spam."


2FA codes and session tokens

Gowda further stressed that the email IDs can be added to different spam lists and public breach databases, which are sometimes picked up by scraping bots. This, in turn, can lead to automated bot abuse and various scams.

Relevancy and a sense of urgency are key to a successful phishing email.

An attacker might reference an ongoing hiring campaign, a candidate's resume, or a job platform the recruiter uses, to make the email feel timely and legitimate.

Urgency, on the other hand, is how threat actors force the victims into making rash decisions that they later regret.

In this case, these could be claims of a top candidate being about to accept another offer or interview access links that are expiring.

After discovering the flaw, Gowda reached out to Naukri, which then plugged the leak. “All identified enhancements are implemented, ensuring our systems remain updated and resilient,” Alok Vij, IT infrastructure head at Naukri’s parent company InfoEdge, confirmed to TechCrunch. “Our teams have not detected any unusual activity that affects the integrity of user data.”

Naukri.com is one of the most popular Indian job sites. According to SimilarWeb, it had more than 28 million unique monthly visits in April 2025, and ranks as the number one job and employment website in the country.


Secure by design: the MOD's honest take

The Ministry of Defence (MOD) recently published a document on 'Secure by Design' challenges that represents something we rarely see in government cybersecurity: a transparent acknowledgment of the complexities involved in implementing security from first principles.

Secure by design is a fundamental approach that embeds security into systems from the very beginning of the design process as opposed to treating it as a bolt-on feature later in development.

Having spent years advocating for the human element in security, it's refreshing to see an official recognition that technical controls are only as effective as the people implementing them.

Addressing the Security Skills Challenge

The MOD's first identified problem is "How do we up-skill UK defense in 'Secure by Design'?"

Their acknowledgment that effective implementation requires a "one team" approach across UK defense reflects the reality that security cannot be siloed within technical teams.

This aligns perfectly with what I've observed in organizations with mature security cultures—security becomes everyone's responsibility, not just the security department's concern.

The Knowledge Distribution Problem

Perhaps most intriguing is problem two: "How does 'Secure by Design' account for unevenly distributed information and knowledge?"

The MOD correctly identifies that information asymmetry exists for various legitimate reasons. What makes this assessment valuable is the recognition that not all information-sharing barriers stem from poor security culture; some exist by design and necessity.

Imagine a family planning a surprise birthday party for their grandmother. Different family members have different pieces of information that they intentionally don't share with everyone:

The daughter knows the guest list and has sent invitations directly to each person, asking them not to discuss it openly on family group chats,

The son has arranged the venue and catering, with specific dietary requirements for certain guests,

The grandchildren are handling decorations and have a theme they're working on,

And most importantly—nobody tells grandmother anything about any of this.

This isn't because the family has poor communication skills or doesn't trust each other. These information barriers exist by design and necessity to achieve the goal of surprising grandmother. If everyone shared everything with everyone else, the surprise would be ruined.

The MOD's approach

In the MOD's security context, this is similar to how:

Certain threat intelligence can't be shared with all suppliers because doing so might reveal intelligence-gathering capabilities,

Suppliers can't share all their proprietary technology details even with clients like the MOD, as they need to protect their competitive advantage,

Specific security controls might be kept confidential from general staff to prevent those controls from being circumvented.

These aren't failures of security culture; they're intentional compartmentalization that sometimes makes security work possible in the first place. The challenge isn't eliminating these barriers but designing systems that can function effectively despite them.

This reflects the nuanced reality of human behavior in security contexts: people don't withhold security information solely out of territoriality or negligence; often, legitimate constraints prevent the ideal level of transparency.
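As a loose illustration of designing around deliberate information barriers, the sketch below tags each piece of information with an audience and gives every party only its cleared view. The parties, items, and labels are entirely hypothetical, not drawn from the MOD document.

# Compartmentalization by design: each item carries an audience, and
# every party sees only what it is cleared for. All labels hypothetical.

from dataclasses import dataclass

@dataclass
class Item:
    content: str
    audience: set  # parties allowed to see this item

items = [
    Item("threat indicator feed", {"mod", "vetted_supplier"}),
    Item("supplier proprietary design detail", {"supplier"}),
    Item("internal control: badge readers rotate weekly", {"security_team"}),
]

def view_for(party: str) -> list:
    # A party's view is the subset it is cleared to see; the overall
    # system must still function on these partial views.
    return [i.content for i in items if party in i.audience]

print(view_for("vetted_supplier"))  # ['threat indicator feed']
print(view_for("security_team"))   # ['internal control: badge readers rotate weekly']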

The Early Design Challenge

The third problem addresses a familiar paradox: how to implement security at the earliest stages of capability acquisition when the capability itself is barely defined.

In other words, it's like trying to build a high-tech security system for a house when you only have a rough sketch of what the house might eventually look like: you know you need protection, but it's difficult to plan specific security measures while you're still deciding how many doors and windows there will be, what valuables will be stored inside, or even where the house will be located. As the MOD puts it, at this stage a capability might be "little more than a single statement of user need."

This connects directly to how humans approach risk management. When primary objectives (delivering military capability) compete with secondary concerns (security), practical compromises inevitably emerge. The MOD's candid acknowledgment that "cyber security will always be a secondary goal" reflects a pragmatic understanding of how priorities function in complex organizations.

Through-Life Security

Problem four addresses perhaps the most demanding human aspect of security: maintaining security rationale and practice across decades of a capability's lifespan. With defense platforms potentially remaining operational for 30+ years, today's security decisions must make sense to tomorrow's engineers.

The question of continuous risk management becomes particularly relevant as organizations encounter new threats over their extended lifespans. How human operators interpret and respond to evolving risk landscapes determines the long-term security posture of these systems.

Building a Collaborative Security Culture

The MOD recognizes that 'Secure by Design' implementation isn't merely a technical challenge but fundamentally about collaboration among people across organizational, disciplinary, and national boundaries.

The MOD's approach suggests a shift toward a more mature security culture — one that acknowledges limitations, seeks external expertise, and recognizes the complex interplay between human factors and technical controls. Their openness about needing help from academia and industry demonstrates a collaborative mindset essential for addressing complex security challenges.

This collaborative approach to security culture stands in stark contrast to the traditional government tendency toward self-sufficiency. By explicitly inviting external perspectives, the MOD demonstrates an understanding that diverse viewpoints strengthen security posture rather than compromising it.

Security isn't about having all the answers—it's about creating the conditions where people can collaboratively develop appropriate responses to ever-changing threats.

We've compiled a list of the best identity management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Unlocking intelligent agents through connected data

Agentic AI is one of the latest concepts in artificial intelligence, now gaining real traction beyond its early buzz. Ongoing advancements in Agentic AI are accelerating the development of autonomous business systems, building on the achievements of machine learning.

Operating as an independent ‘agent’, this technology is equipped to make informed decisions based on multimodal data and algorithmic logic, and can then ‘learn’ and evolve through experience.

Even more exciting is its capacity to act independently. It’s this unique ability to adapt, plan, and carry out complex tasks without human oversight that distinguishes Agentic AI from earlier generations of AI tools.

In supply chains, for instance, AI agents can track market activity and historical demand trends to forecast inventory needs and implement measures to avoid shortages, such as by automating parts of the restocking process. These agents shift their behavior in response to changing market conditions, boosting efficiency and performance. It's therefore no surprise that 26% of business leaders report their organizations are beginning to shape strategic approaches around Agentic AI.
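To make that concrete, here is a deliberately simplified sketch of the kind of restocking rule an agent might automate. The demand history, thresholds, and figures are invented for illustration and not drawn from any real system.

# Simplified restocking logic of the kind an agent might automate:
# forecast demand from recent history and reorder when projected stock
# dips below a safety threshold. All figures are hypothetical.

recent_daily_sales = [42, 38, 51, 47, 45]   # invented demand history
stock_on_hand = 120
lead_time_days = 3                          # days until a new order arrives
safety_stock = 40

forecast_per_day = sum(recent_daily_sales) / len(recent_daily_sales)
projected_at_delivery = stock_on_hand - forecast_per_day * lead_time_days

if projected_at_delivery < safety_stock:
    order_qty = round(safety_stock + forecast_per_day * lead_time_days - stock_on_hand)
    print(f"Reorder triggered: {order_qty} units")
else:
    print("Stock sufficient; no action needed")

A production agent would swap the fixed list for live market and sales feeds and route the order through procurement systems, but the decision loop has the same shape.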

However, as great as it sounds to outsource such tasks to Agentic AI, we also need to err on the side of caution. For all its autonomous power, how can the actions and outputs of AI agents be fully trusted? If we rely on Agentic AI to complete sophisticated tasks on its own, how do we ensure its decisions are truly grounded in what’s happening in the real world, or in the enterprise’s view of the world?

In the same way our brains use observation and additional inputs to draw conclusions, AI agents need to rely on a wide range of external sources and signals to enhance their reasoning capabilities.

This need can be met by solutions and platforms that collect and present data in a way that’s accessible and retrievable. Here’s how:

The trust challenge in autonomous AI systems

As discussed, what sets Agentic AI apart from other AI systems is its ability to act autonomously, not just engage in a linear conversation. The complexity of the tasks agents complete typically requires them to refer to multiple, dynamic external sources. As a result, the risk of something going wrong increases accordingly. For example, you might trust a chatbot to provide you with an update on the status of a claim or refund, but would you feel as trusting when giving an AI agent your credit card details to book a flight for you?

Away from conversational AI, task-based agents plan and change actions depending on the context they’re given. They delegate subtasks to the various tools available through a process often referred to as “chaining” (the output of one action becomes the input for the next). This means that queries (or tasks) can be broken down into smaller tasks, with each requiring access to data in real-time, processed iteratively to mimic human problem-solving.
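As a rough sketch of that chaining pattern, consider the toy pipeline below. The "tools" are hypothetical stand-ins rather than any real agent framework's API.

# Minimal sketch of "chaining": the output of one step becomes the
# input of the next. The tool functions are hypothetical stand-ins.

def parse_request(request: str) -> dict:
    # Step 1: turn a free-text task into structured parameters.
    return {"destination": "Lisbon", "budget": 300}

def find_options(params: dict) -> list:
    # Step 2: query an (imaginary) inventory using those parameters.
    return [{"flight": "TR123", "price": 250}, {"flight": "TR456", "price": 290}]

def choose_option(options: list) -> dict:
    # Step 3: apply decision logic to the previous step's output.
    return min(options, key=lambda o: o["price"])

def run_chain(request: str) -> dict:
    # Each step consumes the previous step's output; a real agent would
    # also record the data source behind each hop for explainability.
    params = parse_request(request)
    options = find_options(params)
    return choose_option(options)

print(run_chain("Book me a cheap flight to Lisbon"))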

The chain of decisions is informed by the environment being monitored, i.e., the sources of data. As a result, explainable and accurate data retrieval is required at each step of the chain, for two reasons. Firstly, users need to know why the AI agent has landed on a particular decision and have visibility of the data source it's based on.

They need to be able to trust that the action is, in fact, the most effective and efficient. Secondly, they need to be able to optimize the process to get the best possible result each time, analyzing each stage of the output and learning from any unsatisfactory results.

When an agent is trusted to complete sophisticated tasks based on multiple retrieval steps, the value of the data needed to support the decision-making process multiplies significantly.

Making reliable enterprise data available to agents is therefore key. This is why businesses are increasingly recognizing the power of graph database technology for the broad range of retrieval strategies it offers, which in turn multiplies the value of the data.

How graph technology strengthens AI reasoning

As Agentic AI drives decisions from data, the insights underpinning those decisions must be accurate, transparent, and explainable – benefits that graph databases are uniquely optimized to deliver. Gartner already identifies knowledge graphs as an essential capability for GenAI applications, as GraphRAG (Retrieval-Augmented Generation in which the retrieval path includes a knowledge graph) can vastly improve the accuracy of outputs.

The unique structure of knowledge graphs, made up of ‘nodes’ and ‘edges’, is where higher-quality responses can be derived. Nodes represent entities in a graph (like a person or place), and edges represent the relationships between those entities – i.e., how they connect to one another. In this type of structure, the bigger and more complex the data, the more previously hidden insights can be revealed. These characteristics are invaluable in presenting data in a way that makes it easier for AI agents to complete tasks reliably and usefully.

Users have been finding that GraphRAG answers are not only more accurate but also richer, speedier, more complete, and consequently more useful. For example, an AI agent addressing customer service queries could offer a particular discounted broadband package based on a complete understanding of the customer, as a result of using GraphRAG to connect disparate information about said customer. How long has the customer been with the company? What services are they currently using? Have they filed complaints before?

To answer these questions, nodes can be created to represent each aspect of the customer's experience with the company (including previous interactions, service usage, and location), with edges capturing how those aspects relate, so the agent can identify the cheapest or best service for them. A fragmented and dispersed view of the data could lead the agent to offer a discounted package when it was not due, with cost implications for the business.
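As a toy illustration of that structure, the snippet below models the customer example with plain Python structures; a real deployment would use a graph database, and the entities, relationships, and figures here are all invented.

# Toy knowledge graph for the customer example. Entities, relationships,
# and figures are hypothetical.

nodes = {
    "alice": {"type": "customer", "tenure_years": 6},
    "broadband_basic": {"type": "service", "price": 30},
    "broadband_fast": {"type": "service", "price": 45},
    "complaint_42": {"type": "complaint", "topic": "speed"},
}

edges = [
    ("alice", "SUBSCRIBES_TO", "broadband_basic"),
    ("alice", "FILED", "complaint_42"),
    ("complaint_42", "ABOUT", "broadband_basic"),
]

def neighbors(node, relation):
    # Follow edges of one type outward from a node.
    return [dst for src, rel, dst in edges if src == node and rel == relation]

# Connecting disparate facts: a long-tenured customer who has complained
# about their current service is a candidate for an upgrade offer.
current = neighbors("alice", "SUBSCRIBES_TO")
complaints = neighbors("alice", "FILED")
if nodes["alice"]["tenure_years"] > 5 and complaints:
    print(f"Offer upgrade from {current[0]} to broadband_fast")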

As mentioned by the CEO of Klarna, “Feeding an LLM the fractioned, fragmented, and dispersed world of corporate data will result in a very confused LLM”. But the outcome is very different when data is connected in a graph: positive results have been reported by the likes of LinkedIn’s customer service team, which has reduced median per-issue resolution time by 28.6% since implementing GraphRAG.

Why connected data is key to Agentic AI readiness

With every iteration, the LLMs behind AI agents are advancing quickly, and agentic frameworks are making it easier to build complex, multi-step applications. The next vital move is to make enterprise data as rich, connected, and contextually aware as possible, so it's fully accessible to these powerful agents.

Taking this step allows businesses to unlock the full value of their data, enabling agents that are not only more accurate and efficient but also easier to understand and explain. This is where the integration of Agentic AI and knowledge graphs proves transformational. Connected data gives agents the context they need to think more clearly, generate smarter outputs, and have a greater impact.

We've compiled a list of the best survey tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




How to scale at the speed of success

In today's digital economy, the ability to handle explosive growth without performance ramifications isn't just a technical consideration; it's a business imperative. So when success arrives, systems must be ready.

Throughout my career advising technology and business leaders, I've witnessed a recurring scenario: a company experiences unexpected success (perhaps a viral marketing campaign, a sudden wave of market interest, or a rapid uptick in customer adoption) only to have that triumph turn into a technical crisis as systems falter under the load.

What should be a celebratory moment instead becomes an emergency. Performance levels dip considerably. Customer experience suffers. And the very success that should propel the business forward becomes its biggest operational challenge.

This phenomenon isn't limited to startups, and it isn't necessarily new. Established enterprises frequently encounter these issues during product launches, seasonal peaks, or when entering new markets. Black Friday becomes a nightmare for newer retailers. The root cause is rarely insufficient hardware or a lack of technical talent. More often than not, it is that the architectural foundations weren't designed for rapid, unpredictable scaling.

Why traditional approaches can fail

Conventional technology stacks typically perform well under predictable, linear growth conditions. However, real-world business expansion is rarely so simple. Life and business come in surges and spikes, sometimes overnight.

Traditional databases particularly struggle with these dynamics. When transaction volumes multiply, these systems often hit performance bottlenecks that can't be resolved by simply adding more hardware; their scalability is limited by the biggest available box. Connection limits are reached, query performance deteriorates, and infrastructure costs climb without delivering proportional benefits.

This is a particular headache for many players in the cryptocurrency space, where market volatility can trigger 5x increases in transaction volume within minutes. Platforms built on rigid architectures simply cannot adapt quickly enough, leading to trading halts or degraded functionality precisely when users need reliability most.

Similarly problematic are monolithic architectures, which are geared for initial speed-to-market rather than long-term flexibility. These approaches might launch quickly, but they rarely support sustainable hypergrowth.

Building from the ground up

Forward-thinking companies are increasingly adopting architectures specifically designed for unpredictable scaling patterns. At the core of this approach is the need for horizontal scalability: the ability to expand capacity by adding instances rather than continuously upgrading to larger, more expensive IT infrastructure. In short, flexibility and adaptability are prioritized.
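As a loose illustration of scaling out rather than up, the sketch below uses consistent hashing to spread keys across instances; adding a node grows capacity while only a fraction of keys move. This is a simplified teaching example, not any particular database's implementation.

# Scaling out with consistent hashing: adding a node increases capacity
# while relocating only a fraction of keys. Simplified illustration only.

import bisect
import hashlib

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Route each key to the first node clockwise from its hash.
        hashes = [h for h, _ in self.ring]
        i = bisect.bisect(hashes, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

keys = [f"user:{i}" for i in range(1000)]
before = Ring(["node-a", "node-b", "node-c"])
after = Ring(["node-a", "node-b", "node-c", "node-d"])  # scale out by one

moved = sum(before.node_for(k) != after.node_for(k) for k in keys)
print(f"Keys relocated after adding a node: {moved} of {len(keys)}")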

One cryptocurrency exchange that we've worked with demonstrates this principle effectively. By implementing a distributed database architecture, they maintain sub-millisecond response times even during market volatility. So if a run on a certain coin leads to dramatic swings in trading volume and customer demand, their platform can automatically scale to absorb them without any impact on the overall service offering.

Equally important is the adoption of cloud-native design patterns, whether deployed on a public cloud, a private cloud, or on premises. Microservices, containerization, and orchestration tools all allow businesses to scale components independently and recover quickly from failures. This modularity supports innovation without compromising stability.

Data model flexibility also plays a crucial role. When another trading platform needed to quickly add new cryptocurrencies to their exchange, their flexible schema approach allowed them to introduce new assets without database migrations or downtime. Understandably, this is a critical advantage in the fast-moving digital asset space.
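A flexible, document-style model makes this easy to picture. The sketch below uses plain dictionaries as stand-ins for documents; the assets and fields are invented, not taken from the platform described above.

# Flexible-schema sketch: new asset types are added as documents with
# whatever fields they need, with no migration step. All data invented.

listings = []  # stands in for a document-style collection

listings.append({"symbol": "BTC", "precision": 8})
# A new asset carrying an extra field can be introduced without altering
# existing records or running a schema migration:
listings.append({"symbol": "NEWCOIN", "precision": 6, "launch_date": "2025-06-01"})

for asset in listings:
    # Consumers read optional fields defensively rather than assuming
    # a fixed schema.
    print(asset["symbol"], asset.get("launch_date", "n/a"))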

What does this mean for technology leaders?

For executives preparing their organizations for potential hypergrowth, four priorities consistently make the difference. Firstly, they must design for horizontal scaling from day one. Systems should be built to scale out, not just up. This approach provides long-term resilience and cost efficiency, which becomes increasingly valuable as a business grows.

Secondly, leaders should embrace automation. The past two decades have shown that manual processes rarely scale well. Investing in automated provisioning, deployment, and monitoring not only reduces errors; it also frees engineering talent to focus on innovation rather than firefighting.

On top of this, they have to stress test beyond their expected peaks. Many systems fail because they're only tested to their current limits. Rigorous testing at 5-10x anticipated peak loads helps identify bottlenecks before they have the chance to impact customers.
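A simple way to start is a script that drives traffic at a multiple of the expected peak, as in the sketch below. The endpoint and numbers are placeholders, and a serious test would use a dedicated load-testing tool rather than a one-off script.

# Minimal load-test sketch: fire concurrent requests at a multiple of the
# expected peak and report failures and worst-case latency. The endpoint
# and figures are placeholders.

import concurrent.futures
import time
import urllib.request

TARGET = "http://localhost:8000/health"  # hypothetical endpoint
EXPECTED_PEAK = 50                       # expected peak request count
MULTIPLIER = 5                           # test at 5x that peak

def probe(_):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET, timeout=5)
        return time.perf_counter() - start
    except Exception:
        return None  # treat any error as a failure

with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(probe, range(EXPECTED_PEAK * MULTIPLIER)))

failures = results.count(None)
latencies = [r for r in results if r is not None]
if latencies:
    print(f"Failures: {failures}, worst latency: {max(latencies):.3f}s")
else:
    print(f"All {len(results)} requests failed")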

Lastly, every leader seeking hypergrowth must make architectural efficiency a key boardroom focus. Scaling isn't purely about performance; it's as much about financial sustainability, meaning everything from granular resource management to efficient data architecture can help maintain steady growth.

The competitive edge of scalability

In markets where digital experience defines success, scalability is no longer just a technical consideration; it has to be a strategic business capability.

The most successful organizations recognize this: technology foundations either enable or constrain their ability to capitalize on opportunities. Thinking back to recent conversations I've had with cryptocurrency executives, when markets surge with interest, exchanges with truly scalable architectures will be the ones that welcome new customers seamlessly. Competitors will be forced into emergency registration freezes, or they risk crumbling altogether.

Ultimately, scaling isn't just about handling growth. It's about being prepared for success, whenever and however that arrives. The question isn't whether your business will face a scaling challenge, but whether you'll be ready when opportunity presents itself.

We've compiled a list of the best cloud databases.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


