

Remedy Entertainment acquires rights to the Control franchise from publisher 505 Games

Remedy Entertainment has confirmed it has acquired the full rights to the Control franchise from 505 Games, the company that originally published the 2019 title.

Remedy confirmed the news in an investor post on its official website. The post states that "all publishing, distribution, marketing and other rights" pertaining to the Control franchise have reverted to the Finnish developer.

"The Control franchise is in the core of Remedy," the post continues. "Having acquired the full rights to Control, Condor [a multiplayer side project for Control], and Control 2, Remedy is now in a position to make the right product and business decisions focusing on long-term franchise growth.

Control 2 was officially announced back in 2022, but there's been no further information on the sequel in the interim. Since then, Remedy Entertainment has also released Alan Wake 2, which launched to critical acclaim and several 2023 Game of the Year nominations.

Furthermore, development is underway on remakes for the first two Max Payne titles, as well as Condor, a multiplayer-focused title set in the Remedy Connected Universe (which encompasses Alan Wake, Control, and potentially Quantum Break).

This will likely be welcome news for fans of Control and Remedy Entertainment projects at large. The timing of the announcement is quite apt, too; earlier in February 2024, Remedy CEO Tero Virtala stated that Control 2 and other projects "have all increased development pace" due to Alan Wake 2's strong commercial performance.

Alan Wake 2 also has two paid DLC expansions on the way this year: The Lake House and Night Springs. While it's been confirmed that neither will lead directly into Control 2, creative director Sam Lake has stated that they'll feature some hints about what's to come in the sequel to Alan Wake's sister franchise.

We consider Alan Wake 2 to be one of the best horror games released in recent years. And if you're after more games in a similar vein, be sure to browse our best single-player games list for recommendations on more excellent titles.



from TechRadar - All the latest technology news https://ift.tt/o7uIbVG



Why AI progress doesn’t have to come at an environmental cost

While there is no sugar-coating the pressures on businesses and government departments to meet net zero carbon targets, most IT leaders have the added strain of trying to keep up with demand for new technologies. It's a constant balancing act: enabling people to work and perform better while addressing ESG compliance and not blowing IT budgets.

Automation is now dominating IT buyer thinking, and new products and tools keep emerging. Only recently, Microsoft co-founder Bill Gates talked about the huge potential of AI assistants, suggesting the race is on for organizations to develop powerful AI assistants that could reshape the digital landscape, putting the likes of Google and Amazon under threat. He suggested these AI assistants could radically change behaviors, impacting everyday life and work. We've already seen an element of this with ChatGPT, while Microsoft has made a play in this direction with the announcement of its Copilot AI assistant for Microsoft 365.

The fact is, automation is attractive to organizations for productivity, efficiency and overcoming skills shortages, but it can come at a cost, both financial and environmental. As Gartner warned in its 10 Strategic Predictions for 2023, AI comes with increased sustainability risk. By 2025, it says, "AI will consume more energy than the human workforce, significantly offsetting carbon-zero gains." With this in mind, something surely has to be done now to enable AI without undermining environmental efforts.

Meeting ESG targets is, according to Deloitte at least, a more prominent issue in boardrooms this year, so how organizations balance this with increased automation needs will be key. Cloud computing is, of course, central to the enablement of AI tools in organizations. Digital transformations to implement platforms that unify organizations and therefore data are driving cloud adoption.

As Gartner revealed recently, worldwide spending on cloud is expected to hit around $600 billion this year, driven primarily by emerging technologies, such as generative AI. Sid Nag, vice president analyst at Gartner, says generative AI requires “powerful and highly scalable computing capabilities to process data in real-time,” with cloud offering “the perfect solution and platform.”

Cloud bursting

And yet, cloud continues to be dogged by claims that it is bad for the environment and doesn't help organizations hit their ESG compliance targets. In fact, the cloud industry has been one of the most active in trying to increase efficiency and reduce environmental impact. Even so, such is the demand for cloud services that keeping up is inevitably difficult. Piling more racks into a datacenter is a short-term fix but not really a long-term answer, especially given the leap in power demands needed to manage increased automation.

In our Enterprise Cloud Index research, 85% of 1,450 IT decision makers acknowledged that meeting corporate sustainability goals is a challenge for them. While nearly all (92%) said sustainability was a much more important issue than a year ago, there is clearly a disconnect between what organizations want to achieve and how they go about it. What we have seen is that there are big challenges arising from a mix of complexity and IT budget restraints.

Our research shows that most organizations use more than one type of IT infrastructure, whether it is a mix of private and public clouds, multiple public clouds, or an on-premise datacenter, along with a hosted datacenter. This is only going to grow but mixed infrastructures create new management challenges. Given the increased complexity, organizations need a single, unified place to manage applications and data across their diverse environments, to reduce costs but also to measure impacts.

Increasing efficiencies in data processes is an important step in reducing ‘hits’ on IT systems but this really only goes part of the way. The real step change for any organization operating in the cloud is looking at the underlying infrastructure. Measuring and then managing impacts from datacenters will continue to be key to reducing carbon impacts of organizational computing. As with a car, if you have a smaller and yet more powerful and efficient engine, not only are you going to reduce emissions, you are going to enable room for growth and increased performance, through tools such as AI.

Re-framing the picture

As Atlantic Ventures suggests in its report Improving sustainability in data centers, the required energy demand on datacenters is still very high and results in large amounts of carbon dioxide emissions. Energy consumption is a major factor in measuring environmental performance of datacenters but one traditional method is now being questioned.

Fundamentally, changes need to be made at the rack. Infrastructure modernization starts with hyperconverged infrastructures (HCI), reducing ‘moving parts’ and therefore energy needs. This also means less complexity, both in terms of cloud structures but also data management. This is what will achieve the most direct outcomes.

As Atlantic Ventures says, "in the EMEA region HCI architectures have the potential to reduce up to 56.68 TWh from 2022-2025 and save up to €8.22bn in electricity costs in the same period for companies and data center providers undertaking a complete transformation towards HCI." This, combined with next-generation liquid cooling, is a huge step towards creating a low-impact platform for the future.

For any organization looking to embrace AI and related automation applications, addressing infrastructure complexity now is key. Running datacenters is an increasingly specialist business (especially given ongoing high energy prices), and as more and more data is required in real time, the challenges for organizations only grow. With the right partners and the most efficient infrastructure in place, any organization can consider itself AI-ready without sacrificing its ESG targets.

We've listed the best bare metal hosting.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from TechRadar - All the latest technology news https://ift.tt/R9YG3FA



Life is Strange: True Colors developer Deck Nine is laying off 20% of its staff due to ‘worsening market conditions’

Deck Nine Games has confirmed that it’s laying off 20 percent of its staff as a result of being affected by “the games industry’s worsening market conditions.”

In a statement posted to Twitter / X yesterday (February 27), the studio said that the decision to lay off staff was “difficult” and that those affected are “amazing, talented and awesome developers.” It encouraged other companies to hire them if possible. 

“Like many others in the games industry right now, Deck Nine has been affected by the games industry’s worsening market conditions. Today we made the difficult decision to lay off 20 percent of our staff,” the statement reads. 

“These people are amazing, talented and awesome developers. They have made a huge impact during their time at Deck Nine Games and we did not take this decision lightly. Please hire these people if you can, they’re amazing.”


At the time of writing, it’s not clear how many people have been affected by the layoffs, nor has Deck Nine outlined which roles at the studio have been impacted.

Deck Nine Games developed 2021’s Life is Strange: True Colors, as well as Life is Strange: Before the Storm and the series’ Remastered Collection. It also co-developed the episodic adventure game The Expanse: A Telltale Series alongside Telltale Games. 

Sadly, Deck Nine Games is far from the only studio to have made staff cuts recently. Yesterday, it was also confirmed that PlayStation is cutting around 900 jobs worldwide along with the closure of its London Studio, and Den of Wolves developer 10 Chambers revealed that it has eliminated five roles as part of changes it believes will "benefit the long-term development" of its upcoming co-op heist game.

Looking for some new games to play? You can find some top recommendations on our roundups of the best PC games, best PS5 games and best Xbox Series X games.



from TechRadar - All the latest technology news https://ift.tt/Snj45dh



'Ambient battlefield' is the strangest ASMR trend that just might help you sleep

Close your eyes and imagine a calm, relaxing scene. What can you hear? Maybe the gentle patter of rain, a rumble of thunder, damp mud squelching, the distant gunfire of WWI fighter planes... No, you haven't switched your 'goodnight' mix to your 'zombie outbreak' playlist. Some people really are using 'battlefield ambience' to get to sleep.

If 'trench warfare' isn't what springs to mind when you hear the word 'relaxation', you'd be just as surprised as TikTok user blusoho when she discovered her husband snoozing to the sounds of warfare. Even the best mattress couldn't help most of us drift off with this kind of racket in the background. But while this viral video has shocked a corner of the internet, there is some logic to this unexpected sleep routine.


The repetitive rumbling that defines many of these warfare ambience videos bears some similarity to more traditional white noise. Add on a calming sense of familiarity – thanks to a childhood playing video games – and you have one of the most surprisingly popular sleep trends ('battlefield ambience' turns up pages of hits on YouTube). Let's dive into just what's going on when you turn off the lights and turn up the trench warfare...

Why might battlefield ambience help anyone sleep?

When it comes to winding down for bed, we tend to choose activities that are relaxing and familiar. Admittedly, for most of us, those terms don't describe the noise of a WWI battle. But they do bring to mind snuggling up with a familiar book and the gentle music from your childhood bedtime routine. And while it sounds strange, that is largely what's happening for those who enjoy battlefield ambience.

Speaking to Newsweek, blusoho, creator of the viral TikTok that exposed the trend, explains that her husband played video games as a child. War games, specifically. When he boots up 'WWI Distant Battle Ambience', it doesn't make him think of war. Instead, it reminds him of cozy days in his childhood home – a much more relaxing thought. 

I could see the logic behind this, but at first I wasn't quite convinced that any amount of happy associations could make battlefield ambience relaxing. Then I gave it a listen. And to be perfectly honest, I kind of get it. While I won't be playing this one before bed, the repetitive, rumbling sounds are more cinematic than scary. There was even some heavy rainfall – one of the favorite white noise sounds for sleep. If I hadn't known it was battle ambience going in, I might have thought it was a thunderstorm in a busy city. 

All kinds of sounds can be relaxing before bed, from persistent rainfall to sizzling bacon, and what works for one person might not for another. I can't sleep to birdsong, as I discovered when I tested the Groov-e Serenity Sound Machine, but others find endless tweeting and chirping can get them snoozing. And while I was never one for fighter games as a kid, the Crash Bandicoot soundtrack has an unexpected soporific effect. So perhaps it's not that surprising that the boom-bang-blast of a turn-of-the-century battlefield signals relaxation to some.

Which means next time you're struggling to sleep, instead of asking yourself 'what's a relaxing thing to do?', think about what's the familiar thing to do. You never know, you might find that Dinosaur Forest or Zombie Apocalypse is just what you need to get snoring again. 



from TechRadar - All the latest technology news https://ift.tt/8UNXfu5



Elden Ring player 'Let Me Solo Her' may quit the game's hardest boss when Shadow of the Erdtree DLC arrives

If you've played any amount of Elden Ring, you may have heard stories of a player by the name of 'Let Me Solo Her.'

The character went viral after clips surfaced of the mysterious Tarnished using the game's co-op system to aid other players in fighting what's largely considered to be its hardest boss fight: Malenia, Blade of Miquella. And they do so almost stark naked, with nothing but a Cold Uchigatana, the Rivers of Blood katana, and the rather humorous Jar helmet.

Now, the player behind the character (referred to as 'LMSH') has caught up with IGN to discuss his future plans regarding both the Malenia boss fight and the upcoming Shadow of the Erdtree DLC, which is currently set to launch on June 21.

Discussing the fight through email, LMSH tells IGN he's now logged around 1,200 hours into Elden Ring, and in regards to Malenia, has "probably defeated her about 6,000-7,000 times by now."

So many successful runs are bound to leave any player wanting something else, and that may indeed be the case with LMSH. After thousands upon thousands of clears, he admits that "I've had my fill of fighting Malenias lol," while also praising the fight as a "great joy" to play through.

LMSH also talks about his excitement for the Shadow of the Erdtree DLC. "Soulslike games have a history of their DLCs being the best part of the game," he states, "and I trust that Mr. Miyazaki [Elden Ring's director] will give us another masterpiece to enjoy."

He also hopes that Messmer the Impaler, who appears to be the content's flagship boss fight, will provide as much of a challenge as the toughest encounters in FromSoftware's repertoire. Though he understandably remains unsure as to whether or not he'll continue soloing the content for other players.

"Everyone knows that FromSoft likes to make the DLC bosses the strongest (for example, Gael). I welcome the challenge and hope newer fans of the genre will also enjoy the difficulty as well," says LMSH, "I'm not too sure if I will solo the newest boss yet. I will have to see what the boss will be like."

Need something to play while waiting on the Shadow of the Erdtree DLC? Consider checking out our list of the best soulslike games for similarly excellent titles in the same subgenre.



from TechRadar - All the latest technology news https://ift.tt/5QXthsL



You've never seen a Samurai story like this: FX's Shōgun is beautifully brilliant

If you're a fan of historical sagas like Game of Thrones or Rings of Power, mark your calendar for the 27th of February. That's when you can stream FX's Shōgun on Disney+. It's a beautiful epic set in feudal Japan at the dawn of a century-defining civil war, a show that's filled with drama, violence and unexpected humour too. 

Armies gather and prepare for war

It's the year 1600, and Japan is on the brink of war: the clan heads don't yet know it but the Battle of Sekigahara, the biggest, most lethal battle in Japanese feudal history, is just months away. 

As storm clouds gather, Yoshii Toranaga, Lord of the Kwanto and president of the Council of Regents, is fighting for his life to become Shōgun, Japan's supreme military commander. As Toranaga's enemies unite against him, he doesn't yet know that another man is about to enter his world and change it, and Japan, forever.

That man is John Blackthorne, an English pilot on board the Dutch warship Erasmus. Erasmus's mission is to disrupt Japan's ties with Portugal and the Catholic Church, but a storm blows it off course and maroons Blackthorne and a handful of survivors in a small fishing village. When the men are discovered, they are captured and held prisoner until it emerges that Blackthorne is in possession of secrets that could help Toranaga tip the scales of power – secrets that Toranaga may have to kill for, and that others would kill to protect.

Secrets to die for – or die defending

As Blackthorne speaks little Japanese and Toranaga no English, the men come to depend on the mysterious Toda Mariko. Lady Mariko is a Christian noblewoman and the last in line of a disgraced and formerly powerful family. Mariko becomes the two men's translator and develops a relationship with Blackthorne, a relationship that leaves her torn between Blackthorne, her loyalty to the faith that saved her, and her duty to the memory of her father. 

The secrets Blackthorne describes to Lady Mariko are explosive, so much so that they could deal a killer blow not just to Toranaga's enemies but to Blackthorne's too. His revelations set in motion a chain of events that will ultimately see the Englishman become the first Western man to join the ranks of Japan's legendary warriors, the Samurai.

A tale stranger than fiction

The story of FX's Shōgun isn't entirely fictional: Blackthorne is based on a real person, William Adams, the navigator who was the first Englishman ever to reach Japan and who would go on to live an extraordinary and dangerous life during one of the most turbulent times in Japan's history. 

Adams' story was the inspiration for James Clavell's global best-selling novel Shōgun, a worldwide smash hit that was adapted for TV in the early 80s, and now Disney+ brings this extraordinary adventure to the screen in all its incredible glory. 

An epic telling of an incredible story

FX's Shōgun delivers everything you could wish for from a historical saga: unforgettable characters, stunning battles, brutal betrayals and twists within twists within twists. It's about power, and about passion, and with its themes of precarious peace, cultural conflicts and politicians' lust for power, it's truly timeless: Toranaga and his enemies could be taken from today's headlines too.

The cast of FX's Shōgun includes Cosmo Jarvis as John Blackthorne, Anna Sawai as Toda Mariko and Hiroyuki Sanada as Yoshii Toranaga, and it also features Tadanobu Asano, Hiroto Kanai, Takehiro Hira, Moeka Hoshi, Shinnosuke Abe and Tokuma Nishioka. Sanada is also a producer, and the show was created for television by Rachel Kondo & Justin Marks, with Marks serving as showrunner and executive producer alongside Michaela Clavell, Edward L. McDonnell, Michael De Luca and Kondo. The series is produced by FX Productions.

FX's Shōgun will be streaming on Disney+ from 27 February 2024.



from TechRadar - All the latest technology news https://ift.tt/RdZvpCX



Renault reveals all about its 5 E-Tech Electric: a French fusion of retro charm, modern EV practicality and AI

It feels like an age since Renault unveiled its electric 5 concept car back in 2021, but the French company has finally taken the wraps off a production model. However, you won’t see it hit European roads until early next year.

Unveiled at a much smaller and more intimate Geneva Motor Show than we're used to, the production-ready EV stays true to the original 2021 concept (95% faithful, according to the brand), which itself borrowed styling cues from the plucky hatchback that sold some 5.5 million units between 1972 and 1985, fusing them with the latest in electrical architecture and underpinnings.

The Renault 5 E-Tech rides on a new AmpR Small chassis that has been designed by the group to underpin a swathe of new, compact electric vehicles. The car will come with two battery options - either a 40kWh or a 52kWh unit - with powertrains producing either 90kW (120hp) or 110kW (150hp).

Renault 5 E-Tech Electric

(Image credit: Renault)

The electric motors are borrowed from the current Megane E-Tech, but have been reduced in weight by 15kg thanks to some engineering breakthroughs. Renault says it shuns permanent magnets in its motors in favour of copper coils, which means less reliance on rare earth materials. 

Performance is punchy enough, if a little underwhelming by today's EV standards. Renault says 0-62mph is possible in eight seconds in the most powerful models. But, it is quick to mention that it has been imbued with ‘agile’ handling and fun driving characteristics. 

Charging speeds range from 11kW AC to 100kW DC fast charging for the larger 52kWh battery, while the 40kWh battery accepts up to 80kW DC; both packs take around 30 minutes to go from 15% to 80% at their respective fastest rates.

Renault 5 E-Tech Electric

(Image credit: Renault)

Renault says its aim is to keep the weight below 1,500kg to improve battery-sipping efficiency, although this is really billed as an electric city car - not a hyper-mileage EV tourer. The longest targeted range is 248 miles in the 52kWh version, and 186 miles in the smaller battery option. 

It’s not exactly going to rival the Kias, Teslas and Porsches of this world, but pricing will be pitched at a much more affordable level as a result. 

A nose for nostalgia

Renault 5 E-Tech Electric

(Image credit: Renault)

In terms of size, the 3.92-meter overall length means it is shorter than Renault's newest Clio. But with the wheels pushed to each corner, the short overhangs and clever use of interior space, it's billed as an everyday EV that will seat five people and offer 326 litres of luggage space in the rear.

In keeping with the squat, aggressive look of former coveted Turbo models, all 5 E-Techs will ride on 18-inch rims, while the car will be launched in some seriously eye-catching colours, including Yellow Pop and Green Pop. That said, more subdued navy blue and white hues have also been shown.

But perhaps the biggest draw is in some of the Gallic quirks and cheeky features that have either been ported over from concept stage or introduced into the production car. The eye-shaped LED headlights wink as the driver approaches with the key and the digital display on the bonnet acts as a charge indicator, as well as displaying the instantly recognisable 5 logo.

Renault 5 E-Tech Electric

(Image credit: Renault)

Inside, the seats have taken plenty of inspiration from the old R5 Turbo in their overall shape, yet they are covered in a funky denim upholstery that's made from 100% recycled plastic bottles. 

In range-topping Iconic Five trim, the passenger receives a padded area in front of them, and the overall dash and interior ambience feels luxurious and premium... not phrases typically bandied around when talking about the Renault 5 of yesteryear.

Customers can go one step further and customize the interior thanks to 3D printing technology, with the ability to personalize storage areas and even tweak the tip of the gear shifter stalk - dubbed the 'e-pop shifter' - with a number of preset designs. This little oblong shifter cover can be jazzed up with the tricolore or funky 5 logos, for example.

Tech it onboard

Renault 5 E-Tech Electric

(Image credit: Renault)

Renault's retro-tastic EV will use the same OpenR Link multimedia system found in the electric Megane. It runs on a Qualcomm Snapdragon chip for speed and uses Google's Android Automotive software to handle much of the app-based functionality.

Having used it recently in Renault’s hybrid Austral E model, we can confirm that it's slick, fast and intuitive to use. However, the diminutive 5 E-Tech doesn’t get the same vertical tablet as the French marque’s large SUV. Instead, it features a 10-inch digital instrument cluster and a horizontal touchscreen infotainment system of the same size.

However, this will be the first vehicle to offer Reno - the French automaker’s first stab at an AI voice assistant. Apparently, the "feeling of empathy created will strengthen the emotional bond between the user and their Renault 5 E-Tech electric," according to the marque.

Renault 5 E-Tech Electric

(Image credit: Renault)

We’re not sure about that, but Renault says Reno (confused yet?) can handle lots of vehicle functionality and even answer questions. The examples given are:  "Hey Reno, schedule a charge for 8am tomorrow" or "Hey Reno, how can I increase the range of my car?"

What’s more, Renault says that ChatGPT integration will also allow drivers to pose wider questions and receive answers from one of the world’s most popular AI systems. Plus, Reno can even dive into the vehicle handbook and give you tips on changing a tyre.

There has been no official word on pricing, but the Renault 5 E-Tech is expected to start at around £30,000 ($38,000 / AU$58,000) when it goes on sale in Europe.

On that subject, you’ll have to wait until the beginning of next year before Renault’s hot new EV hits showrooms. It seems the French marque likes to keep its fans waiting. 

You might also like



from TechRadar - All the latest technology news https://ift.tt/EUCB9wi



VMware launches new VeloCloud SASE to help tie together all your edge infrastructure

VMware is harking back to the past for the launch of its new SASE offering at Mobile World Congress 2024.

The company has unveiled VMware VeloCloud SASE, a single-vendor SASE solution bringing together VeloCloud SD-WAN and Symantec SSE.

VMware acquired VeloCloud back in 2017 and phased out the company's branding some years later, but has now brought it back for the launch of a new SASE product.

VeloCloud SASE return

Revealing the news in a pre-brief ahead of MWC 2024, Abe Ankumah, Broadcom's Head of SD-WAN, Software-Defined Edge Division, noted that the launch would help customers modernize their underlying infrastructure whilst also monetizing new services.

He added that VMware's approach to the software-defined edge has to focus on enabling right-sized infrastructure (in terms of shrinking the tech stack), zero-touch orchestration (as edge locations are increasingly distributed, and not always constantly online), and network programmability (addressing the need to be agile and flexible where needed).

Ankumah added that the motivation behind re-introducing the VeloCloud brand came because a lot of its customers and partners still refer to VMware existing offerings as VeloCloud, despite the brand being previously hidden away.

"The customer is always right - so we're going to reflect that, and bring back the brand," he laughed.

"The number of new use cases we're able to open up...bears the full-strength of everything we're doing in the software-defined edge," he added.

More from TechRadar Pro



from TechRadar - All the latest technology news https://ift.tt/UipDy27



The Meta Quest 3’s popularity is proof a cheap Vision Pro can’t come soon enough

The Oculus Quest 2 has been the most popular VR headset in the world for the past couple of years – dominating sales and usage charts with its blend of solid performance, amazing software library and, most importantly, affordability. 

Now its successor – the Meta Quest 3 – is following in its footsteps. 

Just four months after launch it’s the third most popular headset used on Steam (and will likely be the second most popular in the next Steam Hardware Survey). What’s more, while we estimate the Quest 3’s not selling quite as well as the Quest 2 was at the four-month mark, it still looks to be a hit (plus, lower sales figures are expected considering it’s almost double the launch price of the Quest 2).

Despite the higher cost, $499.99 / £479.99 / AU$799.99 is still relatively affordable in the VR space, and the Quest 3's early success continues the ongoing trend: accessibility is the make-or-break factor in a VR gadget's popularity.  

Oculus Quest 2 floating next to its handsets

The cheap Oculus Quest 2 made VR mainstream (Image credit: Facebook)

There’s something to be said for high-end hardware such as the Apple Vision Pro bringing the wow factor back to VR (how can you not be impressed by its crisp OLED displays and inventive eye-and-hand-tracking system), but I’ll admit I was worried that its launch – and announcement of other high-end, and high-priced, headsets – would see VR return to its early, less affordable days.

Now I’m more confident than ever that we’ll see Apple’s rumored cheaper Vision Pro follow-up and other budget-friendly hardware sooner rather than later.

Rising up the charts 

According to the Steam Hardware Survey, which tracks the popularity of hardware among participating Steam users, 14.05% of all Steam VR players used a Quest 3 last month. That's a 4.78-percentage-point rise over the previous month's results and puts it within spitting distance of the number two spot, which is currently held by the Valve Index – 15% of users prefer it over other VR headsets, even three-and-a-half years after its launch.

It has a ways to go before it reaches the top spot, however, with the Oculus Quest 2 preferred by 40.64% of Steam VR players. The Quest 3’s predecessor has held this top spot for a couple of years now, and it’s unlikely to lose to the Quest 3 or another headset for a while. Even though the Quest 3 is doing well for itself, it’s not selling quite as fast as the Quest 2.

(Image credit: Future)

Using Steam Hardware Survey data for January 2024 (four months after its launch) and data from January 2021 (four months after the Quest 2’s launch) – as well as average Steam player counts for these months based on SteamDB data – it appears that the Quest 3 has sold about 87% as many units as the Quest 2 did at the same point in its life.

Considering the Quest 3 is priced at $499.99 / £479.99 / AU$799.99, a fair bit more than the $299 / £299 / AU$479 the Quest 2 cost at launch, to even come close to matching the sales speed of its predecessor is impressive. And the Quest 2 did sell very well out of the gate.

We don’t have exact Quest 2 sales data from its early days – Meta only highlights when the device passes certain major milestones – but we do know that after five months, its total sales were higher than those of all other Oculus VR headsets combined, some of which had been out for over five years. Meta has gone on to sell roughly 20 million Quest 2s, according to a March 2023 leak – roughly matching the believed sales pace of the Xbox Series X, which launched around the same time.

This 87% figure should be taken with a pinch of salt – you can find out how I arrived at it at the bottom of this piece; it required pulling data from a few sources and making some reasonable assumptions – but that number, along with the Quest 2 and 3’s popularity on Steam, shows that affordability is still the most powerful driving force in the VR space. So, I hope other headset makers are paying attention.

Lance Ulanoff wearing Apple Vision Pro

The Apple Vision Pro had me a little concerned (Image credit: Future)

A scary expensive VR future

The Apple Vision Pro is far from unpopular. Reports suggest that between 160,000 and 200,000 preorders were placed for the headset ahead of its release on February 2, 2024 (though some of those orders have since appeared on eBay with ridiculously high markups, and others have been returned by disappointed Vision Pro customers).

The early popularity makes sense. Whatever Mark Zuckerberg says about the superiority of the Quest 3, the Apple Vision Pro is the best of the best VR headsets from a technical perspective. There’s some debate on the comfort and immersive software side of things, but eye-tracking, ridiculously crisp OLED displays, and a beautiful design do make up for that.

Unfortunately, thanks to these high-end specs and some ridiculous design choices – like the outer OLED display for EyeSight (which lets an onlooker see the wearer’s eyes while they're wearing the device) – the headset is pretty pricey coming in at $3,499 for the 256GB model (it’s not yet available outside the US).

Seeing this, and the instant renewed attention Apple has drawn to the VR space – with high-end rivals like the Samsung XR headset now on the way – I’ll admit I was a little concerned we might see a return to VR’s early, less accessible days. In those days, you’d spend around $1,000 / £1,000 / AU$1,500 on a headset and the same again (or more) on a VR-ready PC.

Valve Index being worn by a person

The Valve Index is impressive, but it's damn expensive (Image credit: Future)

Apple has a way of driving the tech conversation, and tech development, in the direction it chooses – be it turning niche tech into a mainstream affair, as it did for smartwatches with the Apple Watch, or renaming well-established terms by sheer force of will (VR computing and 3D video are now widely called spatial computing and spatial video after Apple started using those phrases).

While, yes, there’s something to be said for the wow factor of top-of-the-line tech, I worried that the Vision Pro would pull the industry in that direction – swamping us with premium hardware while more budget-friendly options were forgotten.

The numbers in the Steam Hardware Survey have assuaged those fears. They show that meaningful budget hardware – like the Quest 2 and 3, which, despite being newer, have less impressive displays and specs than many older, pricier models – is still far too popular to be going anywhere anytime soon.

If anything, I’m more confident than ever that Apple, Samsung, and the like need to get their own affordable VR headsets out the door soon. Especially the non-Apple companies that can’t rely on a legion of rabid fans ready to eat up everything they release. 

If they don’t launch budget-friendly – but still worthwhile – VR headsets, then Meta could once again be left as the only real contender in this sector of VR. Sure, I like the Meta headsets I’ve used, but nothing helps spur on better tech and prices than proper competition – something Meta doesn’t really have right now.

Girl wearing Meta Quest 3 headset interacting with a jungle playset

(Image credit: Meta)

Where did my data come from?

It’s important to know where data has come from and what assumptions have been made by people handling that data, but, equally, not everyone finds this interesting, and it can get quite long and distracting. So, I’ve put this section at the bottom for those interested in seeing my work on the 87% sales figure comparison between the Oculus Quest 2 and Meta Quest 3 four months after their respective launches.

As I mentioned above, most of the data for this piece has been gathered from the Steam Hardware Survey. I had to rely on the Internet Archive’s Wayback Machine to see some historical Steam Hardware Survey data because the results page only shows the most recent month’s figures.

When looking at the relative popularity of headsets in any given month, I could just read off the figures in the survey results. However, to compare the Quest 2 and Quest 3’s four-month sales to each other, I had to use player counts from SteamDB and make a few assumptions.

The first assumption is that the Steam Hardware Survey’s data is representative of all users. Because Steam users have to opt in to the survey, when it says that 2.24% of Steam users used a VR headset in January 2024, what it really means is that 2.24% of Steam Hardware Survey participants used a VR headset that month. There’s no reason to believe the survey’s sample isn’t representative of Steam’s user base as a whole, and this is an assumption that’s generally taken for granted when looking at Hardware Survey data. But if I’m going to break down where my numbers come from, I might as well do it thoroughly.

Secondly, I had to assume that Steam users only used one VR headset each month and that they didn’t share their headsets with other Steam users. These assumptions allow me to say that if the Meta Quest 3 was used for 14.05% of Steam VR sessions, then 14.05% of Steam users with a VR headset (which is 2.24% of Steam’s total users) owned a Quest 3 in January 2024. Not making these assumptions leads to an undercount and overcount, respectively, so they kinda cancel each other out. Also, without this assumption, I couldn’t continue beyond this step as I’d lack the data I need.

The Oculus Quest 2 headset sat on top of its box and next to its controllers

Who needs more than one VR headset anyway? (Image credit: Shutterstock / agencies)

Valve doesn’t publish Steam’s total user numbers, and the last time it published monthly active user data was in 2021 – and that was an average for the whole year rather than for each month. It also doesn’t say how many people take part in the Hardware Survey. All it publishes is how many people are using Steam right now; SteamDB gathers this information over time, which lets me (and anyone else) see Steam’s daily active user (DAU) average for January 2021 and January 2024 (as well as other months, but I only care about these two).

My penultimate assumption was that the ratio of DAUs to total Steam users was the same in January 2021 as in January 2024. The exact ratio doesn’t matter (it could be 1% or 100%). By assuming it stays consistent between these two months, I can take the DAU figures I have – 25,295,361 in January 2024 and 24,674,583 in January 2021 – multiply them by the percentage of Steam users with a Quest 3 and Quest 2 during these months, respectively – 0.31% and 0.37% – and then compare the two results.

The result is that the number of Steam users with a Quest 3 in January 2024 is 87.05% of the number of Steam users with a Quest 2 in January 2021.

My final assumption was that Quest headset owners haven’t become more or less likely to connect their devices to a PC to play Steam VR. So if the Quest 3 was 87% as popular on Steam as the Quest 2 was four months after their respective launches, it has likely sold about 87% as many units over its first four months on sale.
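The whole chain of assumptions above boils down to one short calculation. Here’s a minimal sketch using the rounded figures quoted in this piece – note that with the rounded ownership shares (0.31% and 0.37%) the result lands a touch below the 87.05% quoted above, which was computed from unrounded survey data:

```python
# Four-month comparison: Meta Quest 3 (Jan 2024) vs Oculus Quest 2 (Jan 2021).
# All inputs are the figures quoted in this piece (SteamDB DAU averages and
# rounded Steam Hardware Survey ownership shares).

dau_jan_2024 = 25_295_361  # average Steam daily active users, January 2024
dau_jan_2021 = 24_674_583  # average Steam daily active users, January 2021

quest3_share = 0.31 / 100  # share of Steam users with a Quest 3, January 2024
quest2_share = 0.37 / 100  # share of Steam users with a Quest 2, January 2021

quest3_users = dau_jan_2024 * quest3_share
quest2_users = dau_jan_2021 * quest2_share

ratio = quest3_users / quest2_users
print(f"Quest 3 vs Quest 2 at four months: {ratio:.1%}")  # roughly 86%
```

The absolute user counts this produces are meaningless on their own (they’re scaled by the unknown DAU-to-total-users ratio), but because that scale factor is assumed to be the same in both months, it cancels out when the two are divided.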




from TechRadar - All the latest technology news https://ift.tt/CWlOQ5R


'Run AI models for as low as $5,000': plucky CPU startup that claimed 99% saving on AI costs now wants to sell you an AI workstation — with an unbelievable price tag and 1TB RAM

CPU startup Tachyum has previously said that one of its Prodigy Universal Processor units can rival dozens of Nvidia H200 GPUs. The company claims the 192-core 5nm processor delivers 4.5 times the performance of the best processors for cloud workloads and is six times more effective than GPUs for AI.

Now the firm has announced the Prodigy ATX Platform, an AI workstation that promises to run advanced AI models for just $5,000. The ambitious system, which features an impressive 1TB of memory, is designed to make sophisticated AI models accessible to a wider audience.

The Prodigy ATX Platform is built around a 96-core Prodigy processor, which is designed with only half of its die enabled, a strategy aimed at reducing power consumption and enhancing yield, thereby lowering costs and making the platform more accessible.

Effective for LLMs

The system is expected to come equipped with 1TB of DDR5-6400 SDRAM using 16 memory modules, offering a peak bandwidth of 819.2 GB/s. The system's design includes three PCIe x16 5.0 slots, three M.2-2280 NVMe slots with a PCIe 5.0 x4 interface, and SATA connectors for SSDs and HDDs.
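That 819.2 GB/s peak figure checks out arithmetically if you assume one 64-bit DDR5-6400 channel per module – an assumption on our part, as Tachyum hasn’t detailed the channel layout:

```python
# Peak memory bandwidth sanity check: 16 modules of DDR5-6400,
# assuming one 64-bit (8-byte) channel per module.
modules = 16
transfers_per_second = 6.4e9  # DDR5-6400 = 6,400 MT/s
bytes_per_transfer = 8        # 64-bit bus per channel

peak_gb_s = modules * transfers_per_second * bytes_per_transfer / 1e9
print(peak_gb_s)  # 819.2
```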

Tachyum says its Prodigy ATX Platform is particularly effective for running LLMs which are notorious for their high memory capacity requirements. 

A single Prodigy system can reportedly run a GPT-4-scale model with 1.7 trillion parameters – a workload Tachyum says would otherwise require 52 Nvidia H100 GPUs at significantly higher cost and power consumption.

Despite the impressive specifications, there are doubts about the economic viability of the Prodigy ATX Platform for Tachyum. As Tom’s Hardware points out, the total cost of the system’s components, excluding the Prodigy processor, is estimated to be around $4,800. However, Tachyum's CEO, Dr. Radoslav Danilak, remains optimistic, stating that the platform's powerful AI capabilities will enable organizations of all sizes to compete in AI initiatives.

"Generative AI will be widely used far faster than anyone originally anticipated," Dr. Danilak said. "In a year or two, AI will be a required component on websites, chatbots and other critical productivity components to ensure a good user experience. Prodigy's powerful AI capabilities enable LLMs to run much easier and more cost-effectively than existing CPU + GPGPU-based systems, empowering organizations of all sizes to compete in AI initiatives that otherwise would be dominated by the largest players in their industry." 

The Prodigy ATX Platform's launch has been delayed multiple times, with the latest plan pegging the processor's launch for the second half of 2024. It remains to be seen whether Tachyum can deliver on its promises and revolutionize the AI landscape.




from TechRadar - All the latest technology news https://ift.tt/EdcgkQB


What is OpenAI's Sora? The text-to-video tool explained and when you might be able to use it

ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.

It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.

Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.

OpenAI Sora release date and price

In February 2024, OpenAI Sora was made available to "red teamers" – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.

"We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon," says OpenAI.

In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it. 

Two dogs on a mountain podcasting

(Image credit: OpenAI)

We can make some rough guesses about the timescale based on what happened with ChatGPT. That chatbot's public release in November 2022 was preceded by InstructGPT earlier the same year. Also, OpenAI's DevDay typically takes place annually in November.

It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.

As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month. 

But Sora also demands significantly more compute power than, for example, generating a single image with Dall-E, and the process also takes longer. So it still isn't clear exactly how well Sora, which is effectively a research paper, might convert into an affordable consumer product.

What is OpenAI Sora?

You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.

OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like "woman walking down a city street at night" or "car driving through a forest" and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.


To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like "a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand".

How does OpenAI Sora work?

On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.

Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on "internet-scale data" to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.

So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.

OpenAI Sora

Sora starts messier, then gets tidier (Image credit: OpenAI)

Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.
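In toy form, the diffusion idea is just repeated denoising: start from pure noise and repeatedly nudge it toward something coherent. This hypothetical sketch (not OpenAI's code – real models use a neural network to predict the noise, while here the "denoiser" simply blends toward a known target) shows the shape of that loop on a tiny 1D "signal":

```python
import random

# Toy illustration of diffusion-style denoising.
target = [0.0, 0.5, 1.0, 0.5, 0.0]                # the "clean" signal
sample = [random.uniform(-1, 1) for _ in target]  # start: pure noise

# Feedback loop: each step removes a fraction of the remaining "noise".
for step in range(50):
    sample = [s + 0.2 * (t - s) for s, t in zip(sample, target)]

# After enough steps, the sample sits very close to the clean target.
print([round(x, 3) for x in sample])
```

The frames OpenAI shows follow the same trajectory: early iterations look like random blobs, and each pass of the loop cleans them up a little more.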

And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.

What we don't fully know is where OpenAI found its training data from – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.

What can you do with OpenAI Sora?

At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.

"Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt," says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.

It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.

OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props, it's making an incredible number of calculations about where pixels should go from frame to frame.

In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.

Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.

How can you use OpenAI Sora?

At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generated AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.

Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch, ChatGPT has got smarter and added new features, including custom bots, and it’s likely that Sora will follow the same path when it launches in full.

Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.




from TechRadar - All the latest technology news https://ift.tt/R1TqzXl


ICYMI: the week's 7 biggest tech stories from AT&T's service outage to the Borderlands movie trailer giving us déjà vu

If you’re looking to catch up on the week’s biggest tech news then you’re in the right place. We’ve got a handy update for you that’ll break down the most important events from the past week into easy-to-digest chunks.

The standout story came from AT&T, which suffered a massive cell service outage across the US that lasted for 12 hours and got so bad that some affected people couldn’t call 911. They say there's no such thing as bad publicity, but this is one time AT&T probably wishes it wasn’t making headlines.

There was also an AI tool meltdown, Apple launched a new Sports app, the first Borderlands movie trailer dropped, and there’s so much more you need to know. So in case you missed it, here are this week’s seven biggest tech news stories.

7. AT&T had an almighty cellphone outage in the US 

The AT&T logo on a black background

(Image credit: AT&T)

This week, AT&T unintentionally gave its customers a trip back to the early 90s when a huge outage crippled its cellphone service across several major US cities. The problems started in the early hours of Thursday morning when thousands discovered they had no mobile signal – and the outage ultimately hit over 1.7 million customers.

So what caused it? Was it a solar flare, a cyberattack, or an elaborate Netflix promo for Leave the World Behind? Actually, it was simple user error, according to AT&T. The network said the outage was caused by the “application and execution of an incorrect process” as it expanded its network. So if you’re feeling bad about an IT-related gaffe at work, that should make you feel better at least.

6. ChatGPT had a meltdown, and Google Gemini struggled with accurate art

Leslie Nielsen in The Naked Gun shouting 'Nothing to see here!' with a burning ChatGPT logo beside him.

(Image credit: Paramount Global, OpenAI)

ChatGPT had yet another blip in its behavior this week - and it might have been the strangest one yet. Users reported the AI chatbot getting stuck in nonsensical loops, spouting incomprehensible Spanish, and even at one point claiming to be ‘in the room’ with the user.

OpenAI, the creator of ChatGPT, has released a statement telling users not to be concerned, and that the issue has been identified and rectified - but didn’t explain the bot’s bizarre behavior. Speculation is rife among users, with some suspecting that a glitch in the chatbot’s creativity ‘temperature’ resulted in overly imaginative responses to ordinary queries. ChatGPT (including the paid GPT-4 model) now appears to be back to normal, so whatever behind-the-scenes fix OpenAI deployed seems to have worked.

Google Gemini also had some problems this week as users found it struggled to create accurate images of historical figures – particularly white men. The issue seems to have stemmed from well-meaning equality measures Google implemented to ensure Gemini produces a diverse range of people in its AI art in an attempt to counteract biases in its training data. For now, it's turned off Gemini's ability to generate images of people while it tries to fix the bug.

5. Apple launched its Sports App

Apple Sports

(Image credit: Apple)

Apple is no newcomer to sports. It’s been making deals with the big leagues across baseball and football (or soccer, depending on where you live) for ages, but this is the first time it’s developed something just for sports or, more specifically, sports fans.

Apple’s Sports app is a wonderland of stats and real-time game scores that makes the wide world of sports glanceable. What may be even more interesting than the customizable card-based system is how Apple built its new iOS-only app. Apple’s head of services Eddy Cue told us, among other things, that the leagues didn’t have on hand the real-time data Apple needed to build the app, but they helped the company find it – and then Apple did the massive lift of massaging that data and making it work and look good in Apple Sports.

4. The first Borderlands movie trailer gave us déjà vu

The main characters in Lionsgate's Borderlands movie look down a sewage well

(Image credit: Lionsgate)

It seems you can’t keep Marvel out of the news. After a successful week filled with exciting announcements, the comic book giant has made the headlines again over the past few days, though that’s not through any fault of its own.

Firstly, the Disney subsidiary is reportedly renaming Avengers: The Kang Dynasty as it continues to revise its Marvel Phase 5 and Phase 6 plans in the wake of a turbulent 12-month period. Interesting as that is, however, the studio found itself at the center of another kind of discourse after film fans compared the forthcoming Borderlands movie to Guardians of the Galaxy – that online chatter emerged after Borderlands’ first trailer was released on Wednesday. 

Still, as the saying goes, there’s no such thing as bad publicity, and the MCU can use all the good word of mouth – direct or otherwise – it can get.

3. Apple Vision Pros were returned, but it's maybe a good thing?

An Apple Store staff member shows a customer how to use a Vision Pro headset.

(Image credit: Apple)

The return window for the first batch of Apple Vision Pros has officially closed after two weeks, stirring up some interesting discussions as to why many people are returning the headset. While there was some chatter on social media about a surge in returns, insider sources are painting a different picture and offering some interesting insights into who's returning their headset and why.

In our review of the Vision Pro, we dived into both the good and not-so-good aspects of this groundbreaking venture into mixed reality. It seems the high price tag of $3,499 / £2,788 / AU$6,349 might be causing a case of buyer's remorse for some folks. Alongside that, influencers and YouTubers, always on the lookout for the latest tech trends, have been using the return policy to create content for their channels without the steep financial commitment.

But here's the silver lining: every return comes with a detailed survey, giving users a chance to share their experiences and suggestions. This feedback could help shape future versions of the Vision Pro. Mark Gurman, a trusted Apple insider, has chimed in too, noting discomfort, motion sickness, and the hefty price as common reasons for returns.

2. Garmin launches a more a-fore-dable Forerunner watch 

The Garmin Forerunner 165 on a wrist

(Image credit: Garmin)

Garmin is well-known as a maker of some of the best running watches around, but many of its best models like the Garmin Forerunner 265 and 965 are premium purchases. So it was great to see Garmin release a cheaper model this week, the Garmin Forerunner 165. In our early tests, we found it’s shaping up to be a great GPS watch for working out and a well-designed cheaper version of the Forerunner 265.

However, it’s missing a couple of features that really elevate the line, such as Garmin’s Training Readiness score, and it’s made of much lighter plastic and not weighty polymer or stainless steel. Nevertheless, with Samsung also releasing the Galaxy Fit 3 fitness tracker, it’s a great time to want a high-quality, affordable workout tracker. 

1.  The Fujifilm X100VI landed and immediately shattered pre-order records  

Front of the Fujifilm X100VI reflected in glass table

(Image credit: Future)

It’s only February, but Fujifilm may have already released the most popular camera of 2024. This week, our hands-on Fujifilm X100VI review branded it the “best premium compact camera for most people” and Fujifilm says the retro star has already hit its “biggest pre-order numbers in history”.

Considering how good smartphone cameras have become, that’s impressive and also slightly surprising – particularly given the X100VI has a fixed 23mm f/2 lens. Then again, it’s also a beautiful little camera that combines modern comforts like in-body image stabilization and powerful autofocus with classic film camera design and fun film simulations. We’ll see you in the queue.



from TechRadar - All the latest technology news https://ift.tt/GCMnwAm


Helldivers 2 concurrent player cap increased to 700,000 as CEO says wait times will now be 'much more bearable'

After some pretty hefty wait times and server issues, Helldivers 2 CEO Johan Pilestedt finally confirms that concurrent player counts have been increased. 

"I have one final update for tonight," Pilestedt announced in a tweet. "We have updated the max CCU cap to 700,000. Unfortunately, we expect the CCU to reach that level. We believe that the wait times will be much more bearable. Tomorrow, we are making some final improvements for the weekend."

Since its launch, Helldivers 2 has had some severe issues with server capacity and stability, as well as wait times. Earlier this week, the team-based third-person shooter capped its concurrent players at around 450,000 in an attempt to resolve server instability. Several server-related issues had led to "some mission payouts failing, some players being kicked to their ships, or being logged out," according to a Discord message.

The team at Arrowhead Game Studios has also been working tirelessly to roll out new patches in another attempt to get the issues under control. One patch released last week for PC and PlayStation 5 aimed to help players who wanted to spread democracy in a group: it fixed a bug in which the game would crash if players had too many friends.

Despite all of the bugs and issues, Helldivers 2 has been incredibly popular. So far, it's managed to overtake Grand Theft Auto 5 (GTA 5) and Starfield in terms of all-time highest concurrent player count on Steam. The third-person shooter managed to hit a staggering 411,259 active players on February 19, 2024, according to the third-party Steam monitoring tool SteamDB.

It's very likely that after the most recent update, increasing concurrent players, and stabilizing servers, Helldivers 2 will see more players log on to fight against the Automatons and Terminids for the glory of Super Earth. 

For more excellent games, be sure to check out the best multiplayer PC games as well as the best FPS games available right now. 



from TechRadar - All the latest technology news https://ift.tt/Oh6FWiI
