Latest Technology News

Things aren’t looking good if you were hoping to play Call of Duty: Black Ops 6 on Xbox or PC for just $1 at launch. It seems as though Microsoft has removed the option to pick up a 14-day subscription for a reduced price just nine days away from the Black Ops 6 launch.

The news comes from a recent post to the X / Twitter account of Call of Duty blog CharlieIntel that claims the offer has vanished. I headed over to the Xbox Game Pass website and wasn’t able to find the deal anywhere, which would lend some extra credibility to this assertion.

Black Ops 6 is the next entry in the hugely popular Call of Duty franchise and is set to launch on October 25. It will be the first Call of Duty title to be added to the Xbox Game Pass service at launch following the completion of Microsoft’s Activision Blizzard acquisition last year.

It will be available to those with an Xbox Game Pass Ultimate or PC Game Pass subscription, which costs $19.99 / £14.99 and $11.99 / £9.99 per month respectively. Many players had planned to take advantage of the $1 subscription offer to get two weeks of access to the game for less.

Although it’s incredibly disappointing to see this option disappear just days away from release, I can’t say that it’s particularly surprising. Call of Duty games usually generate a huge amount of sales revenue over their launch periods. The idea of Microsoft sacrificing a large chunk of this by providing consumers with an almost unbelievably cheap way to try out the game seems a little silly in retrospect.

There’s currently no official word on whether the $1 Xbox Game Pass deal will return in the future, but I suspect that it will once Black Ops 6's launch period is out of the way. While the timing is enough to raise some eyebrows, it’s also worth noting that this is not the first time that the offer has disappeared so it might be due to something else entirely.



from TechRadar - All the latest technology news https://ift.tt/cMJhDb3

The growing adoption of generative AI tools across various industries signifies a significant shift in business operations and innovation. This technology, capable of creating content, analyzing data and optimizing processes, is becoming an invaluable asset for enhancing productivity and creativity.

From automating customer support to generating marketing content, AI offers unprecedented efficiencies and personalization. As organizations seek to stay competitive, the integration of AI-driven solutions is rapidly evolving from a cutting-edge novelty to a business necessity, reshaping how products are developed, decisions are made, and customer experiences are crafted.

However, new data from a recent Smart Communications survey suggests that simply implementing generative AI isn’t enough – there’s a clear trend toward growing skepticism in AI use, especially when adopted for communications. Good communication is critical in industries such as healthcare, finance and insurance, where poorly communicated messages or mistakes in the relay of information can have a life-changing impact. Businesses can look to prioritize transparency by making it clear when they're using AI and ensuring human oversight throughout the communication process.

Clear, accurate and timely: what do customers want?

Some 2,000 customers worldwide were surveyed on their opinions regarding customer communications from financial services, insurance and healthcare companies. The responses revealed that four in five (81%) want businesses to employ human oversight, and over three-quarters (77%) feel it’s important for companies to explicitly call out when generative AI is used in their communications.

This skepticism stems mainly from ethical (63%) and security concerns (66%) about using generative AI in customer communications. And fewer than half (47%) agree that generative AI actually has the potential to improve the communications they are receiving from businesses. This tells us that many customers don’t see generative AI as worth the risk when it comes to their communications.

The survey also asked customers what they value most when it comes to communications. A majority (71%) said that communications need to be straightforward and easy to understand, while over half (55%) valued accuracy and over a third (38%) valued timeliness. This isn't too surprising, but it reaffirms that the need for clear, accurate communication is stronger than ever in today's digital world, and that businesses need to embrace digital tools to avoid human error and keep customers happy.

Older consumers prioritized 'clear and easy to understand' communications in significantly greater numbers than their younger counterparts: 88% of the Silent Generation and 79% of Baby Boomers valued this aspect, compared with 65% of Generation Z and 63% of Millennials. For younger customers, personalization and delivery via a preferred channel were significantly more important. Over a quarter of Generation Z (28%) ranked personalization as the most important factor, compared to 23% of all respondents. It’s clear that different generations have different values when it comes to their communications. For businesses to communicate successfully across the board, especially in key industries such as healthcare, they need to accommodate the needs of each demographic.

So, with growing concerns around its use, what is the value of generative AI in communications?

The use of generative AI in customer communications

The reality is that, despite customers' concerns about generative AI in customer communications, it can provide great value when used correctly by improving operational efficiency and effectiveness. The latest customer communications management (CCM) tools and technologies can help companies leverage generative AI in a responsible and impactful way to enhance customer engagement.

In fact, generative AI is shaping communications by automating content creation, enhancing personalization and streamlining workflows. It enables businesses to quickly generate high-quality text, images and audio, making crafting tailored messages for diverse audiences easier. By reducing the time and effort required for content production, generative AI allows organizations the time to focus on strategy and innovation. Its ability to learn and adapt ensures that communications are not only efficient but also relevant and engaging.

What can businesses do to reassure customers?

It’s clear that generative AI can improve communications and build relationships. However, businesses should look to implement it carefully and thoughtfully, especially amid growing concerns about the new technology. There is a clear need to be open, honest and respectful of these attitudes – here, transparency is key and will help to generate customer trust.

Furthermore, businesses need to ensure human oversight throughout the communication process, never leaving the technology to communicate independently. This helps prevent biases, unintended consequences and the dissemination of incorrect or inappropriate content. Additionally, human input can refine AI outputs, making them more relevant and culturally sensitive, thus enhancing overall communication quality. This is particularly crucial for regulatory compliant communications, which cannot be compromised by AI hallucinations. Such hallucinations pose a serious risk to both customers and businesses and can have significant consequences.

Generative AI can be a fantastic tool for businesses to implement and enhance their customer communications. It helps to personalize, streamline, speed-up and strengthen relationships. However, customer concerns must be considered, and with growing skepticism businesses should prioritize transparency by clearly indicating when AI is being used and ensure human oversight is maintained throughout the communication process.

We've featured the best AI chatbots for business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



A whopping 69% of organizations have reported paying ransoms this year, according to research by Cohesity, with 46% handing over a quarter of a million dollars or more to cybercriminals. It is hardly the picture of resiliency that is often painted by industry. Clearly, there is a disconnect between cyber resiliency policy and operational capability that urgently needs addressing. 

With the advent of Ransomware-as-a-Service platforms and the current global geopolitical situation, organizations face a huge existential threat through destructive cyber attacks that could put them out of business. This gap between confidence and capability needs to be addressed, but in order to do so, those organizations need to recognize there is a problem in the first place.

According to the Global cyber resilience report 2024, which surveyed 3,139 IT and Security Operations (SecOps) decision-makers, despite 77% of companies having a 'do not pay' policy, many have found themselves unable to respond to and recover from attacks without caving in to ransom demands. In addition, only 2% of organizations can recover their data and restore business operations within 24 hours of a cyberattack – despite 98% of organizations claiming their recovery target was one day.

This clearly indicates that current cyber resilience strategies are failing to deliver when it matters most. Companies have set ambitious recovery time objectives (RTOs), but are nowhere close to building the effective and efficient investigation and threat mitigation capability needed to rebuild and recover securely. Most organizations treat a destructive cyberattack like a traditional business continuity incident such as a flood, fire or power loss: they recover from the last backup, bringing back all the vulnerabilities, gaps in prevention and detection, and persistence mechanisms that caused the incident in the first place. The gap between these goals and actual capabilities is a ticking time bomb, leaving businesses vulnerable to prolonged downtime and severe financial losses.

Equally alarming is the widespread neglect of Zero-Trust Security principles. While many companies tout their commitment to securing sensitive data, less than half have implemented multi-factor authentication (MFA) or role-based access controls (RBAC). These are not just best practices; they are essential safeguards in today’s threat landscape. Without them, organizations are leaving the door wide open to both external and internal threats.

As cyber threats continue to evolve, with 80% of companies now facing the threat of AI-enabled attacks, the need for a robust, modern approach to data resiliency is more urgent than ever. Yet the continued reliance on outdated strategies and the failure to adapt to new threats sets the stage for even greater risks. This is not simply a question of complacency; it is a failure to adapt.

Building confidence or creating false hope?

With 78% of organizations claiming confidence in their cyber resilience capability, this implies that a lot of work has already been done on the processes and technology needed not just to isolate attacks, but also to mount a trusted response: investigating, mitigating threats and recovering. This would be great if it were true, but we are seeing a real disconnect between perception and reality when it comes to cyber resilience.

That’s a big concern. The financial impact of these failures is not limited to ransom payments alone. The true cost of inadequate cyber resilience extends far beyond the immediate outlay. Prolonged downtime, loss of customer trust, criminal prosecutions for false attestations around the quality of security controls or paying ransoms to sanctioned entities, brand damage, and skyrocketing cyber insurance premiums are just a few consequences that can damage an organization. It’s a sobering reminder that investing in and testing robust cyber resiliency measures upfront is far more cost-effective than dealing with the fallout of a successful attack.

Moreover, the report reveals that only 42% of organizations have the IT and Security capabilities to identify sensitive data and comply with their regulatory requirements. This deficiency exposes companies to significant fines and undermines their ability to prioritize protecting the very data that is the lifeblood of their organization and is subject to regulatory obligations.

With the expected rise of AI-enhanced cyberattacks adding another layer of capability to cyber adversaries, organizations with traditional defenses will have their work cut out. They are no match for these effective and highly efficient threats, which can adapt and evolve faster than most organizations can respond. Organizations need AI-driven tools to counter these emerging AI-driven threats.

Identify a problem to fix a problem

The report ultimately reveals opportunities for improvement. People, processes, and tools do exist to reverse these trends and close gaps to shore up cyber resilience. Still, organizations need to understand where they currently sit regarding resiliency and be honest with themselves.

The right workflow collaboration and platform integration between IT and Security needs to be developed before an incident. Organizations must engage in more realistic and rigorous threat modelling, attack simulations, drills and tests to understand their strengths and weaknesses. This can ensure that the response and recovery process is effective and that all stakeholders are familiar with their roles during an incident or can identify shortcomings and areas for improvement.

In addition, automated testing of backup data can verify the integrity and recoverability of backups without manual intervention. This automation helps ensure that backups are reliable and can be restored quickly when needed.
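The automated backup testing described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea – comparing checksums of source files against their backup copies to flag anything missing or corrupted – not the implementation any particular vendor uses:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: Path, backup_dir: Path) -> list[str]:
    """Compare every file under source_dir against its copy in backup_dir.

    Returns the relative paths of files that are missing from the
    backup or whose contents no longer match the source."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        copy = backup_dir / rel
        if not copy.is_file() or checksum(copy) != checksum(src):
            problems.append(str(rel))
    return problems
```

In practice a scheduler would run a check like this (or a full test restore) after every backup window, so corruption is discovered long before an incident forces a restore.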

Finally, maintaining detailed documentation and recovery playbooks helps ensure everyone knows their responsibilities and what steps to take during an incident. These playbooks should be regularly updated based on changes in adversary behavior and the results of testing and drills.

And this is just a start. To fully reduce operational risk, a transition to modern data security and management processes, tools, and practices is required. Perhaps then, we will see a reduction in ransom payments and a cyber resilience confidence built on reality.

We've rated the best identity management software.



It's Cybersecurity Awareness month and one of the best VPN services on the market has just unveiled an easy way to monitor the safety of your personal information. Even better, you won't need to spend a penny to use it.

Surfshark launched its free Data Leak Checker as a standalone website. All you need to do is head to the page and enter your email address. Then, the system will examine multiple sources across the web for potential database and malware-related leaks.

The new tool is powered by Surfshark Alert, a data leak detection system included with the Surfshark One security suite.

The growing need to defend against data leaks

You might've noticed a surge of data leaks recently, with headlines about new instances hitting international news outlets on an almost daily basis.

Let's look at some data. 2024 kicked off with the "Mother of all data breaches" that saw 26 billion records compromised – the biggest ever recorded, at that time. New incidents occurred regularly throughout the year and around the world. In August, the medical data of almost 400,000 American patients was stolen in a massive supply-chain cyberattack.

According to findings from Surfshark’s Global Data Breach Statistics, approximately 18 billion user accounts have been leaked globally over the last 20 years.

"As we launch the Data Leak Checker, we stress the importance of knowing exactly where and how your data may have been compromised," said Kornelija Vanage, Alert Product Owner at Surfshark. "Understanding breach details can empower individuals to take informed actions to protect their personal information and prevent further damage."

Vanage explains that the Surfshark Data Leak Checker is a simple and accessible tool that'll ensure that everyone, regardless of their technical expertise, can secure their personal information against data breaches.

Once you enter your email address, the tool will monitor every corner of the web on the lookout for compromised data.

After the scan, you'll receive a report covering database and malware attacks. The former will show large breached domains and compromised databases that may have included your account; the latter will flag any potential vulnerabilities linked to your email address caused by malware on your device.

The provider explains that, for security reasons, some data may be hidden. To view the complete and detailed information, you'll need to subscribe to its premium service Surfshark Alert.

As mentioned earlier, Surfshark Alert is included in the Surfshark One security bundle alongside its virtual private network, a private search engine, and antivirus software.

Surfshark's paid data leak detection system goes a step further as it gives you real-time alerts, notifying you of any breaches around your personal information. Not only does it promise to guard your email accounts, but it also checks your password vulnerability, prevents identity theft, and protects your credit cards.

What to do if your email has been leaked

If your Surfshark report indicates that your email has somehow been compromised, it's crucial to act quickly to mitigate any potential harm.

Experts at Surfshark recommend immediately changing the passwords for all affected accounts. Remember to use complex passwords composed of special characters and non-dictionary terms – and don't use the same password for multiple accounts. A good password manager tool can come in handy, here, to generate strong passwords and remember them on your behalf.
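If you want to generate a strong password yourself rather than rely on a manager, a short script will do it. This is a generic sketch using Python's secrets module, not a tool Surfshark provides; the character set chosen here is an assumption and can be adjusted to a site's rules:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password mixing letters, digits and symbols.

    Uses the secrets module, which draws from a cryptographically
    secure source, rather than the predictable random module."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because each character is drawn independently from a 72-symbol alphabet, a 16-character password from this sketch has far more entropy than any dictionary-based phrase of similar length.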

If your account allows it, you should also enable two-factor authentication. This adds an extra layer of security to your accounts by requiring you to verify your identity with an additional method (like a single-use code) before gaining access to the account in question. Two-factor authentication also mitigates credential-stuffing attacks.
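The single-use codes mentioned above are typically time-based one-time passwords (TOTP, RFC 6238), the scheme behind most authenticator apps. As a rough sketch of how a code is derived from a shared secret using only the standard library (the parameter defaults mirror the common 30-second, six-digit configuration):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1).

    The shared secret and the current time window are hashed together,
    then dynamically truncated to a short numeric code."""
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a secret the attacker doesn't hold, a password stolen in a breach is not enough on its own to log in.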

Reviewing the activities of your account is another important step, allowing you to check for unauthorized transactions or suspicious login attempts, and report anomalies to your service provider.

Surfshark also suggests keeping an eye out for phishing attempts as attackers might be using your leaked email address to conduct even more scams in the future.



As applications have migrated to the cloud and employees have demanded the flexibility to work from anywhere, maintaining a secure, efficient, and scalable network infrastructure, while providing a consistent user experience, has become a top priority for IT leaders. 

However, many organizations find themselves facing a patchwork of security tools and struggling with the complexity and vulnerabilities of legacy Virtual Private Networks (VPNs), which were designed for a very different era of remote access computing. Network leaders are looking for clarity on how to support their businesses as they scale and grow, while reducing the attack surface that exposes their data to risk.

Complexity brings vulnerability

VPNs have long been the backbone of secure remote access, allowing employees to connect to corporate networks from outside the physical security perimeter. In the past, when most applications were hosted on-premises within the company's data center, this approach made sense. However, as businesses increasingly rely on cloud-based applications and services, the traditional VPN model has begun to show its design limitations.

One of the biggest challenges with legacy VPNs is that they can overcomplicate infrastructure. Modern enterprises are no longer confined to a single data center or geographic location. Employees access applications and data from multiple devices and locations, creating a web of connectivity that legacy VPNs struggle to manage. The traditional model of routing all traffic through a central VPN concentrator adds unnecessary complexity, slowing down network performance due to inefficient routing and creating bottlenecks that frustrate users.

This is compounded by the fact that many CIOs are forced to maintain existing legacy technology due to budget constraints or resistance to sweeping changes. As a result, IT leaders often find themselves relying on expensive point products to address specific issues, rather than implementing a more holistic platform solution. This patchwork approach can be costly and inefficient, leading to a fragmented infrastructure that is difficult to manage and prone to security vulnerabilities.

Many IT leaders’ careers can hinge on their ability to maintain network performance while keeping pace with the demands of the modern enterprise. Balancing these often competing priorities is no small task. To remain competitive and secure in today's digital landscape, organizations must be willing to rethink their approach to network security and infrastructure.

From patchwork to platforms

IT leaders are aware that they need to remove dependency on outdated hardware. This shift involves adopting cloud computing platforms that integrate networking and security into a single, cohesive solution, rather than relying on disparate single-purpose solutions to patch up legacy systems.

By embracing a platform approach, IT leaders can streamline their infrastructure and improve overall performance. This shift not only alleviates the burden of maintaining legacy hardware but also positions the organization to better adapt to the evolving needs of the business. Cloud-native platforms are designed with modern networking in mind, offering features like dynamic routing, load balancing, and traffic optimization that are critical to support today’s distributed workforce.

Moreover, these platforms are built to scale with the organization, allowing IT teams to easily accommodate growth without the need for constant hardware upgrades. This agility is particularly important in a world where the pace of business is accelerating, and the ability to quickly respond to new challenges can be a key differentiator.

A key advantage of switching to a cloud-native platform is simplifying cloud access for the end user. In the traditional VPN model, all traffic is routed through a central concentrator, which can lead to inefficient traffic patterns and latency. By contrast, a cloud-native approach allows traffic to be routed more directly, improving performance and providing a better user experience by moving the cloud on-ramp closer to the user. This is especially important in a hybrid work-from-anywhere environment.

Visibility brings trust

One of the most compelling advantages of a cloud-native platform is the enhanced visibility and control it provides to IT leaders. In a legacy VPN environment, it can be difficult to gain a clear understanding of network traffic, or to diagnose issues and identify potential security threats, when the ultimate destination is somewhere outside the corporate network. The advanced analytics and reporting tools available through cloud-native platforms help monitor all traffic, not just the traffic passing through the VPN, and this visibility plays a critical role in security.

A zero trust security approach operates on the principle that no user or device should be trusted by default, even if they are inside the network perimeter. Instead, access is granted based on a verification process that considers contextual factors, including the user's location, device, role and behavior. By giving continuous visibility, cloud-native platforms can provide unmatched contextual awareness, enforce dynamic security policies and allow for adaptive access to users, devices, applications, and data, minimizing the risk of unauthorized access or data breaches. It delivers on the principle of only giving the right amount of access, to the right people, under the right conditions through a continuous validation model.
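The contextual, per-request decisions described above can be illustrated with a toy policy function. This is a hypothetical sketch of the idea, not any vendor's engine; the signal names and policy tiers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Contextual signals gathered for a single access attempt."""
    user: str
    role: str
    device_managed: bool      # is the device enrolled and compliant?
    location_trusted: bool    # is the network location a known one?
    resource_sensitivity: str # "low", "medium" or "high"

def evaluate(request: AccessRequest) -> str:
    """Return an access decision based on contextual signals.

    Toy policy: high-sensitivity data requires a managed device on a
    trusted network; medium sensitivity on an unmanaged device triggers
    step-up authentication; everything else is allowed."""
    if request.resource_sensitivity == "high":
        if request.device_managed and request.location_trusted:
            return "allow"
        return "deny"
    if request.resource_sensitivity == "medium":
        return "allow" if request.device_managed else "step-up-auth"
    return "allow"
```

A real platform evaluates signals like these continuously, on every request, rather than once at login – which is what makes the access "adaptive".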

As businesses bump into the limitations of legacy VPNs and outdated infrastructure, IT leaders must be willing to embrace a transformative platform approach that brings cloud access closer to the end user, enhances visibility and control, and supports a zero-trust security model. By doing so they future-proof their digital infrastructure and create a platform that enables their business to thrive.

We've featured the best network monitoring tool.



For a long time, email phishing scams have often been a poorly worded, typo-ridden, desperate plea for funds that will, of course, be paid back tenfold. Well, now that our guard is down, AI is here to make sure we don’t get too comfortable.

A new, hyper-realistic scam is hitting Gmail users, and the AI-powered deceptions are capable of fooling even the most tech-savvy amongst us. In this new wave of fraud, the classic ‘Gmail account recovery’ phishing attack is paired with an ultra-realistic voice-call to trick users into a panic.

In a recent blog post, Microsoft solutions consultant Sam Mitrovic explained how he almost fell victim to the elaborate scam, recounting an account recovery notification that was followed by a very real-sounding phone call from ‘Google Assistant’.

Don’t get caught out

Mitrovic revealed that the repeated emails and calls came from seemingly legitimate addresses and numbers, and that the way he cottoned on to the scam was by manually checking his recent account activity in Gmail.

This is part of a worrying larger trend of ‘deepfakes’, which are already targeting businesses and consumers more than ever. Criminals can use ultra-realistic video or audio footage to trick unsuspecting users into handing over funds or information.

Almost half of businesses have reported encountering deepfake fraud already in 2024, and the trend looks set to continue.

The key to staying safe from this type of scam is to stay vigilant and take your time - criminals will almost always try to rush you into a decision or into handing over money or details. By taking a step back to evaluate, you can gain perspective and even get an outside assessment from someone you trust.

Via Forbes



Every now and then there's a Wordle that's so difficult it ends thousands of streaks in one go. Today's Wordle is one of them.

According to WordleBot, the New York Times' in-game helper tool, puzzle #1,214 has an average score of 5.7. That makes it the hardest since October 2022 and indeed the third hardest ever. Inevitably, avid Wordlers took to Twitter (or X, whatever) to share their stories of woe with the world. 'Wordle 1,214 X' is trending top of the social platform right now as the complaints pour in.

So, why is it so difficult, why are people so angry, and how could you have avoided failure today?

To answer those questions I'll need to reveal the solution, so don't read past this point if you haven't played yet, because SPOILERS FOR TODAY'S WORDLE, GAME #1,214, ON TUESDAY, 15 OCTOBER 2024 will follow.

Third hardest ever

Okay, solution coming up, so really do stop reading if you don't want to know what it is.

Or, if you haven't played yet, head to my NYT Wordle today page for some last-minute hints.

**FINAL SPOILER ALERT**

Today's answer is CORER. Yes, CORER.

At the time of writing, WordleBot has analyzed around 42,000 games, of which around 10,000 have been failures; that's around 25%. Another 30%, meanwhile, only solved it on the final guess.

That explains the super-high average score of 5.7; yes, some people did solve it in three, four or five guesses (well done if you're one of them!), but the majority either needed six or failed entirely.

I've recorded the WordleBot average scores each day since it launched in April 2022, meaning I now have a list of 926 games ranked by difficulty. By that measure, CORER is the third hardest ever, behind only PARER (game #454, average score 6.3) and MUMMY (#491, 5.8), both in late 2022.

The common theme in the failures today is evident from a glance at Twitter, where those familiar rows of green squares have a tell-tale gap from top to bottom in the center:

A perfect Wordle storm

This pattern indicates that the game is one of the too-many-answers variety, where the solution only differs by one letter from several other words. Today, COVER, COWER and CODER were all arguably more likely answers, while COPER, COMER and COYER were also possibilities.

It's a common scenario that plays out with ER-ending words, simply because there are so many of them. My analysis of every Wordle answer shows that 141 of the game's 2,309 original solutions end in ER, making it by far the most likely ending. It's therefore easy to identify the pattern – but incredibly difficult to then narrow down the correct start of the word.
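The suffix tally behind that 141-out-of-2,309 figure is easy to reproduce. A minimal sketch, using a tiny invented word list as a stand-in for the real answer list (which isn't reproduced here):

```python
from collections import Counter

def suffix_counts(words, length=2):
    """Count how often each word ending of the given length appears."""
    return Counter(w[-length:].upper() for w in words)

# A tiny, hypothetical stand-in for the full list of 2,309 answers.
sample = ["corer", "cover", "cower", "coder", "crane", "stare", "mummy", "parer"]
counts = suffix_counts(sample)
```

Run against the real answer list, a tally like this is what reveals ER as by far the most common ending – and, by extension, which endings are safe to rule out early.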

That leads to the pattern above, where people who had CO-ER kept adding the wrong middle letter.

Not only is it a too-many-answers word and an ER word, but CORER also contains a repeated letter, R. Repeated letters make the game more difficult by default, because most people don't like playing a repeat; it feels like throwing away a letter.

Finally, there's the fact that CORER really isn't a common word, to the extent that judging by the Twitter response, some people haven't even heard of it at all.

Put all that together and you have all the ingredients for a nightmare Wordle – so it's no surprise that so many people are failing.

So, what could you have done differently if you lost your streak today?

The best way to solve this kind of Wordle

I've played every Wordle ever and only lost once, and my streak now stands at over 1,000 – so I know a thing or two about avoiding defeat in this game. It's not exactly a superpower, but it's the closest thing I have to one!

I scored a five today, and it would have been a four if not for a silly mistake. But I was never in danger of losing my streak, despite a terrible opening guess that left me with 967 possible solutions. That's because I've learned what to do and what not to do on days like this.

I'll start by pointing out that if you play Wordle on hard mode, I can't help you. On hard mode, you're not allowed to leave out letters that are already green, meaning if you found yourself with that CO-ER pattern early on, you'd have had no option but to blindly guess letters in search of the right one. There are good strategies out there for avoiding defeat on hard mode, but you'll need to look elsewhere for what they are.

If you played in standard mode, however, then I do have some advice.

Firstly, you'll want to identify as early as possible whether or not you're dealing with an ER word. The obvious trick here is to make your start word one that includes both of those letters. STARE was my opener of choice for years, until I switched to random start words for the sake of variety, and it's one of WordleBot's favorite words too. CRANE is the 'bot's first choice, or you could try CRATE, TRACE, CARET, CARTE, TASER, PARSE, SNARE or many others.

Any of these will point the way to an ER answer early on. If your start word doesn't include one or both of those letters, fix that on the second guess – unless you know for sure that an ER ending isn't a possibility (because one of the letters has already been ruled out, or a different letter is green at the end, for instance).

That's what I did today. My random start word was VINYL, which was useless, but I followed up right away with STARE and uncovered the yellow R and E.

NYT Wordle answer for game 1214 on a green background

(Image credit: New York Times)

That was enough info for me to know that it was probably an ER word, so the next step was to confirm that while ruling in/out as many words as I could.

The best approach at this stage is to list as many words as you can, to see which letters might be in play. There were apparently 68 still open to me at this stage (WordleBot told me that afterwards), and I reckon I came up with about half of those. That was enough to give me a good steer as to the best letters to play next.

For instance, I could see that O was the most likely missing vowel to appear, featuring in the likes of BOXER, JOKER, GOFER and MOWER. P was also a common letter – POWER, POKER, DOPER, MOPER, PURER. And then there was the repeated R…

It might seem silly to play a repeated letter at this stage, but there are loads of answers that fit that pattern, including RIDER, RUDER, ROWER, ROGER, CURER and indeed CORER – plus plenty of non-ER words that would still have fit (RECUR, ERROR, FREER).

So I put all that together and played ROPER, and that did the trick in that it reduced my options to only one. I then messed up by playing BORER rather than CORER, which was a shame, but ultimately finished with my streak intact.

The key thing, above everything I've said before, is to never just blindly guess letters in search of the missing one.

Let's say you had CO-ER on the third guess – it would be tempting to guess COVER next, right? Don't do it. Instead, look at which words it could be: COVER, COWER, CORER and CODER, for instance. Your next guess should then be something like VOWED, as that would rule in/out COVER, COWER and CODER depending on which one of V, W or D turned yellow/green. And if none of them did, the answer would be CORER.
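This is essentially what WordleBot-style solvers do: a good probe word is one that splits the remaining candidates into as many distinct feedback patterns as possible, so the next guess settles the answer. A minimal Python sketch of that idea (the word lists here are just illustrative; this is not WordleBot's actual code):

```python
from collections import Counter

def feedback(guess, answer):
    # Wordle-style feedback: 'g' = green, 'y' = yellow, '.' = gray,
    # with standard handling of duplicate letters.
    result = ['.'] * 5
    remaining = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = 'g'
        else:
            remaining[a] += 1
    for i, g in enumerate(guess):
        if result[i] == '.' and remaining[g] > 0:
            result[i] = 'y'
            remaining[g] -= 1
    return ''.join(result)

def best_probe(probes, candidates):
    # The best probe is the one that produces the most distinct feedback
    # patterns across the candidates, ruling words in/out in one go.
    def groups(probe):
        return len({feedback(probe, c) for c in candidates})
    return max(probes, key=groups)

candidates = ["COVER", "COWER", "CODER", "CORER"]
print(best_probe(["VOWED", "COVER", "WIDEN"], candidates))  # -> VOWED
```

Guessing COVER directly would leave three candidates indistinguishable, while VOWED gives every candidate a unique pattern – which is exactly why the "wasted" guess is the safer play.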

It can go against instinct to do this, because in some ways you're wasting a guess – you know it won't be the answer. But scoring a five when it could have been a four is always preferable to failing entirely, so don't be a hero – play it safe and live to fight another day.

This may not be of any consolation to you today if you lost your streak, but on the plus side it's very unlikely we'll get another word as difficult as this one for a while. Good luck!


from TechRadar - All the latest technology news https://ift.tt/SI0ji7Z

A quick start out of the gate is an enormous advantage for sprinters, swimmers, jockeys and race car drivers alike. It’s also extremely valuable to cybercriminals. By exploiting a zero-day vulnerability before anyone else knows about it, cybercriminals gain an early window to infiltrate systems and achieve goals like stealing data or deploying ransomware while avoiding detection.

Attacks that exploit zero-day vulnerabilities cannot be prevented, but they can be faced with confidence. This article offers practical guidance on containing these threats by building a resilient IT infrastructure with a reduced attack surface, fast detection and effective response.

The Frustration of Zero-Day Vulnerabilities

It is an inescapable fact that every operating system and software application has vulnerabilities that are not yet known to the vendor or the organizations using the product. Another unhappy fact is that cybercriminals are constantly looking for these vulnerabilities, and when they find one, they work hard to find a way to exploit it.

Organizations need to come to terms with the reality that adversaries sometimes succeed in developing an effective zero-day attack and there is little they can do to prevent the initial strike. Instead, they must focus on blocking the escalation of the threat and preventing attackers from gaining access to precious data or establishing control over the whole system.

Essentially, exploitation of a zero-day vulnerability is just the first stage of a longer battle for control over your valuable digital assets. To win that battle, security teams must proactively reduce their exposure to attack, stay on top of vulnerabilities, master threat detection and response, and ensure they can restore operations quickly after an incident.

Reducing the Attack Surface

The first priority in reducing the risk from zero-day vulnerabilities is to minimize the attack surface. Core strategies that will help include disabling unneeded services, implementing a robust patch management process, and segregating your network into distinct segments to isolate critical systems and sensitive data.

Another critical best practice is configuring stringent access controls that adhere to the least privilege principle. Even if an attacker gets into the system, their ability to move laterally will be restricted, since each account has only the access rights necessary for the user to perform their tasks.

For an even more robust approach, highly privileged accounts can be replaced with just-in-time (JiT) elevated privileges that are granted only after additional verification and that last only as long as needed for the task at hand. Such an approach further limits the ability of an adversary to escalate privileges.
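To make the JiT idea concrete, here is a toy sketch of the pattern: elevation requires extra verification, lasts only for a fixed window, and is re-checked on every use. The `PrivilegeBroker` name and API are invented for illustration; a real deployment would use a privileged access management product rather than anything like this.

```python
import time
from dataclasses import dataclass

@dataclass
class JitGrant:
    user: str
    role: str
    expires_at: float

class PrivilegeBroker:
    """Toy just-in-time privilege broker (illustrative only)."""

    def __init__(self):
        self._grants = {}

    def elevate(self, user, role, ttl_seconds, verified):
        # Elevation is refused unless additional verification
        # (e.g. an MFA challenge) has been completed.
        if not verified:
            raise PermissionError("additional verification required")
        self._grants[(user, role)] = JitGrant(user, role, time.time() + ttl_seconds)

    def is_allowed(self, user, role):
        # Every privileged action re-checks the grant; expired
        # grants are revoked automatically.
        grant = self._grants.get((user, role))
        if grant is None or time.time() > grant.expires_at:
            self._grants.pop((user, role), None)
            return False
        return True
```

The key property is that a stolen account holds no standing privilege: an attacker would also need to pass the verification step, and any elevation they did obtain would expire on its own.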

Discovering and Mitigating Vulnerabilities

What makes a vulnerability a zero-day is that it is discovered by adversaries and exploited in attacks before anyone else knows about it. Software vendors usually quickly provide a security patch or mitigation strategy. Unfortunately, many organizations fail to perform the recommended action in good time, so they remain at risk from the vulnerability far longer than necessary.

Accordingly, a robust patch management strategy is another vital element in reducing the attack surface. That strategy should include scanning systems for unpatched vulnerabilities so they can be mitigated promptly. One option is a traditional patch management tool that scans systems regularly. However, as the number of software products in use has grown, this process now takes more time than ever before. Modern solutions use a discovery process known as a scan-less scan, which maintains a real-time inventory of the software installed on each system and flags any vulnerabilities as they appear.
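Stripped to its essentials, the scan-less approach is a continuously maintained inventory joined against a vulnerability advisory feed. A rough sketch of that join follows; the data shapes (`Advisory`, a dict of installed versions) are invented for illustration, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Advisory:
    package: str
    fixed_in: tuple  # first patched version, e.g. (3, 0, 7)

def parse_version(v):
    # "3.0.1" -> (3, 0, 1) so versions compare numerically
    return tuple(int(part) for part in v.split("."))

def flag_vulnerable(inventory, advisories):
    # Cross-reference the live inventory against the advisory feed and
    # flag any package installed at a version below the first fix.
    return [a.package for a in advisories
            if a.package in inventory
            and parse_version(inventory[a.package]) < a.fixed_in]
```

Because the inventory is kept current as software is installed or updated, a newly published advisory can be matched immediately instead of waiting for the next scheduled scan.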

Detecting Threats in Their Early Stages

Attackers don’t advertise the time and place that they are going to attack, but entire websites are devoted to detailing the tactics and techniques that they use. Identity threat detection and response (ITDR) solutions leverage this knowledge, with a focus on detecting threats relating to identity and access control systems. Signs of these threats include unusual login attempts, suspicious access requests and unplanned changes to privileges. Detection of a threat can trigger automated responses like blocking access and resetting credentials.

Organizations also need an endpoint detection and response (EDR) system. EDR complements ITDR by monitoring endpoints for potentially malicious activity and enabling prompt response to those threats.

Of course, if these solutions flag too many events as suspicious, security teams will be overwhelmed with false alerts. Accordingly, file integrity monitoring (FIM) is also crucial, since it can filter out planned system changes and empower IT teams to focus on swift response to real threats.
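At its core, FIM is a hash comparison against a baseline, with an allow-list of expected changes. A simplified sketch of that mechanism (real FIM tools watch the filesystem; here file contents are passed in as bytes purely for illustration):

```python
import hashlib

def snapshot(files):
    # Baseline: one SHA-256 digest per file's contents
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def unplanned_changes(baseline, current_files, planned=frozenset()):
    # Flag files whose digest no longer matches the baseline,
    # filtering out changes that were planned (e.g. a patch window).
    current = snapshot(current_files)
    changed = {p for p in baseline if current.get(p) != baseline[p]}
    return changed - set(planned)
```

The `planned` set is what keeps the alert volume down: a modification made during an approved maintenance window is filtered out, so only unexpected tampering reaches the security team.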

Ensuring Quick Recovery

Organizations must also be prepared for attacks that succeed in taking down key systems and destroying or encrypting valuable data. To minimize disruption to the business in the wake of an incident, they need a documented strategy for data recovery and getting processes back on track as soon as possible.

A robust recovery plan starts with backing up key data and systems, testing those backups carefully and storing them securely. If attackers make malicious changes, IT teams should be able to identify the specific assets involved and granularly reverse the modifications. In a broader disaster, IT pros need to be able to quickly restore key domain controllers, applications and data to reduce downtime and business losses.

Conclusion

While it is not possible to prevent cybercriminals from discovering and exploiting zero-day vulnerabilities, organizations can and should take action to reduce the impact of these attacks. By implementing the practices above, organizations can build a multi-layered security strategy that enhances their resilience against not only zero-day exploits, but other types of cyberattacks and insider threats.

We've rated the best identity management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




The blazing-fast speeds, extremely low latency, and massive connectivity achieved with 5G technology are changing how mobile media is consumed. As such, storage solutions need to keep up with the growing demands of the content supply chain. In 2009, 4G introduced the concept of mobile data as an object that is downloaded to a device and then played back. This innovation led to an increase in the amount of data processed on mobile devices.

For example, more than 500 hours of video content are uploaded every minute on YouTube alone, and 5G is only going to increase how much media can and will be consumed worldwide. In this article, we'll provide insights on how 5G innovation will change storage requirements and the way media is consumed on mobile devices.

Impact of 5G on Media Consumption

In theory, 5G reaches speeds of up to 20 Gbps, far beyond the typical 1 Gbps to 10 Gbps of standard wired Ethernet ports on desktop computers. In practice, 5G is a match for most wired connections, with real-world download speeds of 1 Gbps to 10 Gbps, on par with a desktop machine. Carriers such as Verizon, AT&T, and T-Mobile, which own 5G infrastructure, can also create private 5G cells where higher performance is achievable and may even approach 20 Gbps, since the bandwidth isn't shared with many devices.

UHD Streaming

5G's higher speed and lower latency allow high-resolution videos such as 4K and 8K to be streamed easily with minimal buffering on mobile devices. For reference, a 4K stream requires about 25 Mbps of bandwidth, while an 8K stream calls for 100 Mbps, both of which are demanding, but easily achievable with 5G. As a consequence, consumers can seamlessly stream UHD videos from their phones in almost any setting.
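A quick back-of-the-envelope calculation using the figures above shows why these bitrates are comfortable for 5G: even dividing a 1 Gbps link (the low end of real-world 5G) by the per-stream requirement leaves plenty of headroom.

```python
# Approximate bandwidth needs quoted above: ~25 Mbps per 4K stream,
# ~100 Mbps per 8K stream.
STREAM_MBPS = {"4K": 25, "8K": 100}

def max_concurrent_streams(link_mbps, resolution):
    # Ignores protocol overhead; a rough upper bound only.
    return link_mbps // STREAM_MBPS[resolution]

print(max_concurrent_streams(1000, "8K"))  # 1 Gbps link -> 10 streams
print(max_concurrent_streams(1000, "4K"))  # 1 Gbps link -> 40 streams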

Augmented Reality (AR) and Virtual Reality (VR): AR and VR apps, with their full immersion, rely extensively on high bandwidth and low latency. Although 5G's 1ms latency is significantly higher than Ethernet's 10-30 microseconds latency, it is still suitable for these applications. To meet these fast requirements, storage systems must be capable of real-time data retrieval and processing.

Interactive Live Streaming: High-data volume and speed for 5G empower new interactive live streaming possibilities, such as multiple angles and instant replays. To achieve this, the underlying system and storage infrastructure need to be efficient and quick enough to handle countless requests while maintaining sub-second video latency.

Storage Challenges and Solutions

As more advanced and data-driven media technologies become possible through 5G, older, conventional storage infrastructures face various technical limitations:

1) Scalability: The surge in high-resolution and interactive content calls for storage solutions that are capable of handling increased amounts of data. Conventional storage systems might be unable to keep up with the amount of information created by innovative, 5G-enabled applications due to their sheer size and rapid expansion. Distributed storage systems, such as those using software-defined storage (SDS) architectures, offer the scalability needed to handle these demands efficiently.

2) Bandwidth Management: Compression technologies are used to effectively manage the greater data bandwidth offered by 5G networks, with many of them leveraging artificial intelligence (AI) for more efficient algorithms. These algorithms help reduce the size of the data being stored or transferred, making sure that bandwidth is not wasted while preserving media quality.

3) Security: Given the massive amount of data pushed through 5G networks, security is becoming even more important and challenging. With so much data being sent around at an unprecedented rate, traditional encryption methods may fall short. Strengthening encryption techniques and incorporating blockchain for data integrity, along with Self-Encrypting Drives (SEDs) in storage solutions, can improve security.

Edge Storage: As 5G becomes more widely available, it will be critical to make better use of “edge storage,” which is a type of decentralized storage that keeps data close to where it's needed. This approach has the potential to reduce latency while improving performance and overall user experience in real-time applications such as AR/VR and streaming for Apple Vision Pro and similar devices.

AI-Driven Storage Optimization: The use of AI in storage management is likely to become more widespread in the 5G era. AI tools can analyze usage patterns and dynamically optimize storage allocation, making sure that resources are used to their fullest potential. For example, AI can predict which content will be accessed frequently and then cache it in high-performance storage tiers, while less frequently accessed data is moved to lower-cost storage.
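A production system would use learned access predictions, but a simple frequency-based sketch conveys the tiering idea: rank assets by how often they are requested, cache the hot fraction on fast storage, and demote the long tail. The function and field names here are invented for illustration.

```python
from collections import Counter

def assign_tiers(access_log, hot_fraction=0.2):
    # Rank assets by access count; the most-requested fraction is cached
    # on the high-performance tier, the long tail on cheaper storage.
    ranked = [asset for asset, _ in Counter(access_log).most_common()]
    cutoff = max(1, int(len(ranked) * hot_fraction))
    return {asset: ("hot" if i < cutoff else "cold")
            for i, asset in enumerate(ranked)}
```

Re-running this periodically over a sliding window of requests approximates the dynamic reallocation described above: content that stops trending naturally drifts down to the lower-cost tier.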

5G as Backbone for Broadcasting: Traditional broadcasting network infrastructure, with its extensive cabling, is not well suited to sudden changes in location. 5G technology reduces the need for that infrastructure, making it an ideal solution for dynamic, live broadcasting environments such as breaking news or outdoor events. When combined with edge computing, 5G enables local processing of video feeds, reducing latency and boosting broadcast efficiency. This seamless integration improves the efficiency and speed of live content delivery, representing an important step forward in the media industry.

5G as Backbone for Mobile Broadcasting: Leveraging 5G has the potential to transform how cameras and other devices connect with Outside Broadcasting (OB) trucks. With private 5G cells offering technically up to 20 Gbps and an acceptable latency of 1ms, the setup becomes considerably more flexible. Inside the OB truck, storage receives the recorded footage and can send it out via 5G, allowing real-time editing and graphic overlays directly from that storage. 5G can also greatly improve mobile broadcasting workflows by delivering real-world speeds of up to 10 Gbps, allowing seamless transmission of uncompressed 4K or 8K video directly from cameras to production facilities without sacrificing video quality.

5G as Backbone for Live Broadcasting: For live sports broadcasts, 5G enables near-instantaneous synchronization of multiple camera feeds with an ultra-low latency of around 1 millisecond. Thanks to this precise synchronization of positions and angles, viewers get to enjoy an immersive, “live-like” experience from any screen. Furthermore, 5G's impressive speeds easily support real-time editing and graphics overlays right from the field, allowing editors to integrate live feeds and make quick edits with minimal delay.

Conclusion

Innovative storage solutions and 5G are completely transforming the way media is consumed on mobile devices, and the latter will continue to change the way digital media is consumed on every screen. Storage technologies must meet security standards and continue evolving to keep pace with 5G's increased bandwidth management and scalability requirements, ensuring viewers enjoy always-on, seamless experiences. Luckily, 5G's higher speeds and lower latency make it easy to transmit live 4K content over the internet from almost anywhere. 5G and advanced storage solutions can not only facilitate great viewing experiences; they can vastly improve upon them. Who knows what's next?

We've rated the best cloud storage.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro




Cloudflare has announced the deployment of its 12th generation servers, powered by AMD EPYC 9684X Genoa-X processors, delivering improved performance and efficiency across its infrastructure.

The new processor has 96 cores, 192 threads, and a massive 1152MB of L3 cache - three times that of AMD’s standard Genoa processors.

This substantial cache boost helps reduce latency and improve performance in data-intensive applications, with Cloudflare saying Genoa-X delivers a 22.5% improvement over other AMD EPYC models.

Updated AI developer products

According to the cloud provider, the new Gen 12 servers can handle up to 145% more requests per second (RPS) and offer a 63% increase in power efficiency compared to the previous Gen 11 models. The updated thermal-mechanical design and expanded GPU support offer enhanced capabilities for AI and machine learning workloads.

The new servers are equipped with 384GB of DDR5-4800 memory across 12 channels, 16TB of NVMe storage, and dual 25 GbE network connectivity. This configuration enables Cloudflare to support higher memory throughput and faster storage access, optimizing performance for a range of computationally intensive tasks. Additionally, each server is powered by dual 800W Titanium-grade power supply units, providing greater energy efficiency across its global data centers.

Cloudflare is keen to stress these improvements are not just about raw power but also about delivering more efficient performance. The company says the move from a 1U to a 2U form factor, along with improved airflow design, reduced fan power consumption by 150W, contributing to the server’s overall efficiency gains. The Gen 12 server’s power consumption is 600W at typical operating conditions, a notable increase from the Gen 11’s 400W but justified by the significant performance improvements.

The new generation also includes enhanced security features with hardware root of trust (HRoT) and Data Center Secure Control Module (DC-SCM 2.0) integration. This setup ensures boot firmware integrity and modular security, protecting against firmware attacks and reducing vulnerabilities.

The Gen 12 servers are designed with GPU scalability in mind, supporting up to two PCIe add-in cards for AI inference and other specialized workloads. This design allows Cloudflare to deploy GPUs strategically to minimize latency in regions with high demand for AI processing. Looking ahead, Cloudflare says it has begun testing 5th generation AMD EPYC "Turin" CPUs for its future Gen 13 servers.

Separately, Cloudflare has introduced big upgrades to its AI developer products. Workers AI is now powered by more powerful GPUs across its network of over 180 cities, allowing it to handle larger models like Meta’s Llama 3.1 70B and Llama 3.2, and tackle more complex AI tasks. AI Gateway, a tool for monitoring and optimizing AI deployments, has been upgraded with persistent logs (currently in beta) that enable detailed performance analysis using search, tagging, and annotation features. Finally, Vectorize, Cloudflare’s vector database, has reached general availability, supporting indexes up to five million vectors and significantly lowering latency. Additionally, Cloudflare has shifted to a simpler unit-based pricing structure for its three products, making cost management clearer.



The FBI created a cryptocurrency company and crypto token as a bait for scammers who participate in ‘pump-and-dump’ schemes, new reports have revealed.

The tactic, which involved making fake trades to boost prices before cashing out, worked very well: 18 people were arrested for ‘widespread fraud and manipulation in the cryptocurrency markets’, marking the first ever set of criminal charges brought against financial services firms for ‘wash trading’ and market manipulation in the cryptocurrency industry.

Over $25 million in cryptocurrency was seized during the operation, along with trading bots responsible for millions of dollars worth of ‘wash trades’ for around 60 different cryptocurrencies, which have been deactivated.

‘Operation Token Mirrors’

The cryptocurrency the FBI created was an Ethereum-based instrument named NexFundAI, which they used to track unsuspecting traders.

‘Wash trades’ refers to the illegal practice of buying and selling the same security as a form of market manipulation. Recent reports suggest that as much as 70% of all cryptocurrency transactions fall under this category, so it's no wonder police want to crack down.

“These are cases where an innovative technology – cryptocurrency – met a century-old scheme – the pump and dump. The message today is, if you make false statements to trick investors, that's fraud. Period,” stated Acting United States Attorney Joshua Levy.

“These charges are also a stark reminder of how vigilant online investors must be and that doing your homework before diving into the digital frontier is critical. People considering making investments in the cryptocurrency industry should understand how these scams work so that they can protect themselves,” he added.

Also charged by the Securities and Exchange Commission were three ‘market makers’ – firms or individuals that quote two-sided markets in a security.

There’s been a flurry of bad news for crypto investors recently: crypto-linked cybercrime is seeing a record year, with stolen-funds inflows doubling to around $1.58 billion in 2024.

Via The Register



Rumors that Fujifilm is making an all-new camera with a new kind of sensor, tipped for 2025, have plenty of fans excited. Details are thin, to say the least, and initial speculation has been based on what makes most sense according to Fujifilm's current camera lineup, most plausibly landing on a digital compact with a 1-inch sensor.

That logic would pit the would-be Fujifilm camera against the likes of the Sony RX100 VII, which is one of our favorite premium compact cameras. However, there has recently been a surprising development that suggests this new sensor could, in fact, be a unique vertical one rather than being horizontally positioned like in pretty much every digital camera.

So, you would hold this camera horizontally – which is the easiest way to hold a camera – yet make vertical format pictures and videos, like the natural way on your phone. In analog terms, it's the approach of half-frame, which is the format of the recent Pentax 17: it uses 35mm film but takes two half-sized vertical pictures in the space of every single frame on the film roll.

The difference here is that Fujifilm's rumored camera isn't analog but supposedly digital. So, is a digital half-frame camera a smart idea or a gimmick?

Pentax 17 compact film camera front-on, in the hand with boats in background

The Pentax 17 is a popular half-frame analog camera that shoots pictures in vertical format when held horizontally, like above. (Image credit: Future | Tim Coleman)

Is a digital 'half-frame' compact camera a gimmick?

If anyone can make a digital compact with a vertical sensor work – presumably one aimed at content creators – it's Fujifilm. Fujifilm is a trending camera brand: its X100VI is one of the most popular and sought-after cameras in recent memory. Analog photography is also trending, with the half-frame Pentax 17 proving to be one of the hits of this year. So bringing the two design concepts together – a retro digital compact with social-friendly vertical photos and videos – should make sense.

Am I convinced? Yes and no. Let's say the rumor is true. On the one hand I think a 'half-frame' digital compact is an easy sell in 2024, especially with Fujifilm's retro looks. But what would it be like to actually take pictures and videos with said camera?

Let's be clear: You can simply rotate a regular digital camera 90 degrees to shoot in vertical format and rotate those video clips using a video editor. Or you can sacrifice video resolution by cropping into your horizontal videos to make a vertical one. However, these steps are awkward, and a camera that's optimized for shooting vertically makes a lot of sense in 2024 and beyond.

Most people view short-form video content and photos on their phones vertically, so why not just make capturing in that format as easy as possible? Sure, shooting half-frame is counterintuitive at first. However, you'd get the full resolution of the sensor for vertical video rather than having to crop down to a lower resolution, and you'd avoid unnecessary time spent editing. Being a dedicated camera, it would also offer a superior user experience over using your phone.

Half-frame makes even more sense for analog photography, where your photos are permanently exposed onto a film roll. For instance, I love creatively thinking in pairs, which is another layer of image curation, plus you double the number of your shots on a film roll. In a way you don't get the same practical benefits with digital and a memory card that can hold thousands of photos. As an aside, I wouldn't be the only one hoping that Fujifilm follows in Pentax's footsteps and develops an analog camera, especially as it's one of the leading producers of photographic film.

I can see a digital half-frame compact resonating with many people, and being ridiculed by others. Personally, I'm all for brands trying new things and I hope this rumor turns out to be true. If the camera materializes, it'll certainly spark debate and offer creators a unique shooting experience to wrap their heads around.



With October in full swing, you might be looking for some of the best horror movies to stream. But outside of the classics and the spooky new movies, one of the best streaming services has just dropped a contender for my favorite horror series of the year: a perfect adaptation of Junji Ito's manga Uzumaki. After being let down by the 2000 movie (which is available on Prime Video in the US and Shudder in the UK), I find it refreshing to finally see it done properly.

Uzumaki: Spiral Into Horror is a four-part adaptation of Ito's frankly massive manga that throws you in the deep end from the first episode alone. The show is streaming on Max and Adult Swim in the US and Channel 4 in the UK. It wastes no time in establishing this terrifying world, where citizens in the town of Kurouzu-cho are plagued by spirals. Surreal as the concept is, it grips you immediately, with this obsession and paranoia around spirals resembling that of a disease. People are terrified of it, to the point where some seriously crazy stuff happens. It's gritty, it's dark, and Max is really delivering that tone especially when you look at DC hit The Penguin, which we compared to The Sopranos, and rightfully so.

I was also pleased to discover that even people who have never read the manga could get sucked in. My partner watched the first episode with me and since then has been keen to tune in when it airs each week, and ironically, we found ourselves becoming just as invested in the spirals as the people on-screen. With less horrifying stuff happening, of course.

Junji Ito's lines come to life on screen

A woman pushes back her hair to reveal a spiral mark on her forehead

(Image credit: Adult Swim)

For me, the most striking thing of all is just how stunning this is to look at. It's the reason the manga gripped me too: I found myself desperate to turn to the next page to see what horrific, albeit beautifully drawn, thing would greet me this time, and watching the TV adaptation is no different. Sticking to the original black-and-white design, it's like watching a moving version of the manga. This is exactly where the movie failed, in my opinion, because it was a color live-action take on the tale, and it simply did not work. Ito's terrifying world is best when it's devoid of any color at all – I say let's keep it that way.

The art is beautiful. It's also the worst thing you'll ever see. I have omitted some of the truly awful imagery so as not to spoil it. If you have read the manga already, you'll no doubt find yourself anticipating certain famous moments, and even when I knew what was about to happen I still found myself cringing. From transformations to mutations to people having psychotic breaks because of the spirals, nothing can fully prepare you for it. If you were wondering: yes, it is even worse when it's animated.

Uzumaki has been in the works for a long time, and I am glad they spent so much time making it as accurate to the source material as possible. There's little point diverting from it too much when Ito has given us such a great story. It's up there with some of the best anime shows you'll watch.

When you're done here, there are plenty of other spooky offerings to sink your teeth into as well. I also recommend James Wan's Teacup and some of these indie horror games (Cult of the Lamb is my favorite!). But until then, please do step into the horrifying world of Uzumaki. You won't regret it.

from TechRadar - All the latest technology news https://ift.tt/aIWPGXh

This week, after months of waiting for a follow-up to the hugely successful Nintendo Switch handheld, we finally got brand new Nintendo hardware in the form of a clock called Alarmo. We also saw some major AI developments for Gemini, and the RTX 5090's price leaked (spoiler: it ain't cheap).

To catch up on all of this and more, we've collected the week's biggest news stories here so you can find out about everything you missed.

Once you're up to speed, why not check out our picks for the seven new movies and TV shows to stream this weekend (October 11)?

8. Apple struggled to keep a lid on the M4 MacBook Pro

YouTuber Wylsacom with a leaked Apple MacBook Pro with M4 chip.

(Image credit: Wylsacom)

Apple doesn’t really do leaks, so this week was something of a shock for tech fans used to its watertight launches.

Not only did we see a wave of credible video and benchmark leaks for the rumored M4 MacBook Pro, but several people in Russia also claimed to be selling the unannounced laptop on a classified ads site. Not quite on the level of leaving an iPhone 4 prototype in a bar, but not far off.

While it’s possible that those now-pulled adverts were fakes, the sheer number of convincing leaks suggests that an M4 MacBook Pro is coming soon – potentially with more Thunderbolt ports and a Space Black version.

7. Toyota revealed a future powered by hydrogen cartridges

A hydrogen battery sticking out the side of a grey car

(Image credit: Toyota)

Hydrogen hit the headlines again this week as a possible fuel source for cars and even homes as Toyota revealed some concept portable cartridges that look like giant AA batteries.

Toyota says the cylinders have been developed using its experience in shrinking the hydrogen tanks in its fuel-cell electric vehicles. The concept is certainly an alluring one – rather than having to refuel at petrol stations or EV charging points, you could just swap out your power source when your hydrogen levels run low. In theory, at least.

Whether the concept makes it to reality remains to be seen, but it’s hopefully at least somewhere down the road – we can’t endure broken EV charging networks for much longer.

6. Nintendo finally launched new hardware

Nintendo announced new hardware this week, but it wasn’t the Nintendo Switch 2. Instead, it revealed (of all things) a new sound clock called Alarmo.

It features a 2.8-inch LCD screen that tells you the date and time and shows playful Nintendo mascots – including Link, Olimar, and (of course) Mario – who react to what you and Alarmo are doing. Though if you stay in bed for too long, Alarmo might send a less friendly face to motivate you – like the evil king Bowser.

What makes this smart alarm clock clever, however, is its built-in motion sensor, which can track your movements. Alarmo can monitor your sleep habits, which you can review in the morning, can be waved at to snooze your alarm, and can detect when you sit up to stop your alarm.

Alarmo is currently only available to buy for paid Nintendo Switch Online members, but it should launch to the general public in January 2025.

5. Google's Imagen 3 rolled out worldwide

Image made in Imagen 3

(Image credit: Google)

This week Google updated its Gemini AI chatbot to use the latest Imagen 3 model for generating images. It’s easy to use, too: you just ask Gemini to create an image using the same text prompts you use to talk to it normally. Imagen 3 brings considerable improvements over the previous version, with much better detail in images, especially where text is concerned.

Imagen 3 is available to everybody who can access Gemini, on a laptop or smartphone, even on the free tier. While the image quality is superb and there don’t seem to be limits on how many images you can create a day, there is one slight annoyance – you need to be a Gemini Advanced subscriber if you want to generate images of people.

4. The Apple Intelligence release date leaked

Apple Intelligence

(Image credit: Apple)

Apple Intelligence finally has a release date... sort of. We were told Apple’s AI tools would arrive on iPhone, iPad, and Mac as part of a software update in October, and Bloomberg’s Mark Gurman has given us a date.

Gurman suspects iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 will arrive on October 28, ushering in a new era for Apple as it moves into the AI-powered future. It’s definitely an exciting time to own Apple products, but will features like Writing Tools, Clean Up, and notification summaries be enough to make people care about AI?

At WWDC, Craig Federighi called it ‘AI for the rest of us’, but time will tell if the ‘rest of us’ even want AI to begin with. Expect to see Apple Intelligence features roll out over the next year with Genmoji and Image Playground arriving before the end of the year and Siri’s long-anticipated update set to release in early 2025.

3. The Loop Dream helped us sleep well

Loop Dream earbuds on a desk

(Image credit: Future)

Loop released its latest noise-reducing earbuds, Loop Dream, which are specifically designed for sleep. Offering the highest noise reduction in the Loop range at 27dB, Loop Dream features redesigned oval tips that put less pressure on your ear canal, as well as a new, silicone-coated loop that secures the buds in the cavum of your ear.

These handy little buds proved to be massively useful for our Managing Editor of Lifestyle, who's been using them for the last three weeks – and has finally slept well because of them.

2. Nvidia apparently losing its mind with next-gen GPU pricing

An imagined RTX 4090 against a black background

The RTX 4090 (Image credit: Nvidia/Future)

It was a rocky ride for Nvidia in the rumor mill this week, and the most eye-opening piece of speculation concerned the purported price tags that Team Green could pin on RTX 5000 graphics cards when they arrive (likely early next year).

We were seriously shocked to discover that Nvidia is apparently mulling – and it is just a consideration at this point – a price of between $1,999 and $2,499 for the flagship RTX 5090. And the leaker who shared this – YouTuber Moore’s Law is Dead – reckons that the company is leaning more towards the $2.5K mark than a mere two grand. Yikes.

Furthermore, Nvidia may be thinking about pitching the RTX 5080 from $1,199 up to $1,499, and the RTX 5070 could go for $599 to $699. An RTX 5070 that is potentially equipped with only 12GB of VRAM, we should note, adding to the indignation around this week’s Nvidia-related leaks.

What’s going on with these prices? We’re honestly a bit baffled, but one theory – that Nvidia is testing the reaction to this pricing, knowing the figures would inevitably leak – offers some hope that there’ll be a reversal of course here. Come on, Nvidia – don’t do this to us. The worst thing, in some ways, is that these days it almost feels inevitable that Team Green will push the envelope on pricing, and worse still, this gives AMD no incentive to price its RDNA 4 GPUs more competitively when they arrive, either. Meh…

1. Panasonic revealed the world’s smallest zoom lens for full-frame

Panasonic Lumix S 18-40mm F4.5-6.3 attached to a 'Smokey White' Lumix S9 camera

(Image credit: Panasonic)

Panasonic's new Lumix S 18-40mm F4.5-6.3 became the world's smallest and lightest zoom lens with autofocus for full-frame cameras – and it's an ideal pairing with the Lumix S9 mirrorless camera, for which a big firmware update was also announced, plus improvements to Panasonic's Lumix Lab app.

Tipping the scales at just 0.34lb / 155g and measuring just 40.9mm in length when retracted, the 18-40mm is positively tiny, yet it still packs a wider-than-average 18mm perspective that's ideal for video creators, weather resistance, focus breathing suppression, and decent close focusing capabilities – just 0.15m / 0.49ft. It's exactly the lens that Panasonic's polarizing Lumix S9 for content creators needed – a camera that we labeled "small, simple, powerful, flawed" in our Lumix S9 in-depth review, but whose compact form felt rather redundant without a complementary L-mount lens. That changed with the new 18-40mm, which, along with the firmware update, gives the Lumix S9 a second wind and could help it realize its potential as one of the best YouTube cameras.



from TechRadar - All the latest technology news https://ift.tt/urBcbCR