Latest Technology News

Much of Apple Intelligence is still in beta, but notification summaries have become something of a hot-topic feature within Apple’s first AI-focused feature set. It’s had a few fumbles, most notably when it quite incorrectly summarized a BBC story about Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson.

It’s not unheard of for AI features and the large language models (LLMs) that power them to hallucinate at times, but this error was particularly prominent. It prompted the BBC and other news organizations to ask Apple to fix or remove the feature before further mistakes appeared. Similar errors hit a few other outlets, including The New York Times, and Apple issued a statement noting that the feature was in beta and that a software update would “further clarify when the text being displayed is summarization provided by Apple Intelligence.”

Apple also reminded us that users of Apple Intelligence can always provide feedback, and now we're seeing some results. In the latest betas (developer and public) of iOS 18.3, Apple has temporarily disabled notification summaries for News and Entertainment apps, addressing the loudest criticism for now.

No more fake news

Apple iPhone 16 Plus (Image credit: Future/Jacob Krol)

Firstly, you can now turn off notification summaries on an app-by-app basis, meaning you can switch them off for all news apps if you choose. This is a welcome addition, and under Settings > Notifications > Summarize Notifications, Apple even states, “Summaries may contain errors.”

As you scroll through, you’ll see that News apps, as well as Entertainment or Streaming ones, may be toggled on but are labeled “Temporarily Unavailable.” This makes it clear that Apple is refining the summarization process for these categories behind the scenes, and they will likely return in a forthcoming update, probably first in another beta.

Further, when these notification summaries appear on your lock screen, the ones that have been summarized will be shown in italicized text. This should make them easier to spot alongside regular notifications, and you can, of course, tap to expand and see the original notification.

On the whole, it’s great to see Apple being responsive here, and since this is the latest beta, these changes will likely carry through to the full release of iOS 18.3 too. There is, of course, no exact timing on when that will arrive, but we'll be sure to let you know as soon as it does.


from Latest from TechRadar US in News,opinion https://ift.tt/RgzcevC

December 2024 has the dubious distinction of being both the 35th anniversary of the first ransomware and the 20th anniversary of the first use of modern criminal ransomware. Since the late 1980s, ransomware has evolved into a major criminal enterprise, so it seems apt to reflect on the changes and innovations we’ve seen over the past three and a half decades.

The first use of ransomware was identified in December 1989, when an individual physically mailed out floppy disks purporting to contain software that assessed a person’s risk of developing AIDS, hence the malware being named the AIDS Trojan. Once installed, the software waited until the computer had been rebooted 90 times before hiding directories, encrypting file names and displaying a ransom note demanding that a cashier’s cheque be sent to a PO Box in Panama for a license that would restore files and directories.

The individual responsible was identified but found unfit to stand trial. Ultimately, the difficulty in distributing the malware and collecting payment in a pre-internet world meant that the attempt was unsuccessful. However, technology advanced; computers increasingly became connected to networks and new opportunities arose to distribute ransomware.

Researchers recognized the risk of a “cryptovirus” as early as 1996: malware that could use encryption to launch extortion-based attacks, requiring payment in exchange for a decryption key. They also identified the defenses necessary to defeat the threat: effective antivirus software and system backups.

Reaping the rewards of ransomware

In December 2004, evidence of the first use of criminal ransomware, GPCode, was uncovered. The attack targeted users in Russia and was delivered as an email attachment purporting to be a job solicitation. Once opened, the attachment downloaded and installed malware on the victim’s machine, which scanned the file system and encrypted files of targeted types. Early samples applied a custom encryption routine that was easily defeated, before the attacker adopted secure public-key encryption algorithms that were much more difficult to crack.

Clearly, this attack sparked the imagination of criminals, and a variety of ransomware variants were released soon after. However, these early attacks were hampered by the lack of an easily accessible means of collecting the ransom without disclosing the attacker’s identity. Providing instructions for payments to be wired to specific bank accounts left the attacker vulnerable to investigators simply “following the money.” Attackers got increasingly creative, asking victims to call premium-rate phone numbers or even buy items from an online pharmacy and submit the receipt to receive decryption instructions.

Virtual currencies and gold-trading platforms offered a means of transferring payment outside the regulated banking system and were widely adopted by ransomware operators as a straightforward way to receive payment while maintaining their anonymity. Ultimately, however, these payment services proved vulnerable to action by regulatory authorities, which curtailed their use.

The emergence of cryptocurrencies such as Bitcoin offered an effective way for criminals to collect ransoms anonymously within a framework that was resistant to disruption by regulators or law enforcement. Consequently, cryptocurrency payments were enthusiastically embraced by ransomware operators, with the successful CryptoLocker ransomware of late 2013 among the first adopters.

Diversifying the ransomware operations portfolio

With cryptocurrencies adopted as an effective means of receiving payment, ransomware operators were able to focus on expanding their operations. The ransomware ecosystem began to professionalize, with specialist providers offering services to take on some of the tasks involved in conducting attacks.

In the early 2010s, ransomware operators tended to use their own preferred means of distributing their malware, such as sending spam, subverting websites or partnering with botnet operators who could install malware on large numbers of compromised systems. By developing a partner ecosystem, ransomware writers could focus on building better ransomware and leave distribution to less technically skilled operators who concentrated on delivery and social engineering techniques.

Criminals developed sophisticated portals where affiliates could measure their success and access new features to facilitate attacks and the collection of ransom payments. Initially, these attacks used mass-market distribution, attempting to infect as many users as possible to maximize ransom payments without regard to the profile of the victims.

In 2016, a new ransomware variant, SamSam, was identified that was distributed according to a different model. Instead of prioritizing the quantity of infections and hitting large numbers of users for relatively small ransoms, the distributors of SamSam targeted specific institutions and demanded large sums. The gang combined hacking techniques with ransomware, seeking to penetrate organizations' systems and then identifying and installing ransomware on key computer systems in order to maximize disruption to the entire organization.

This innovation changed the ransomware market. Ransomware operators discovered that targeting institutions, disrupting entire organizations and bringing their operations to a halt, was more profitable than encrypting individuals' endpoint devices, because it allowed them to demand much higher ransoms.

Criminals quickly prioritized certain sectors, and the healthcare industry became a frequent target, presumably because ransomware affected key operational systems, seriously disrupting healthcare facilities, putting lives at risk and, as a result, adding pressure on senior management to pay the ransom to restore functions quickly.

Modern day ransomware is born

In November 2019, the innovation of double extortion was first used by attackers delivering the Maze ransomware. In these attacks, the attacker steals confidential data from systems before encrypting it, giving them two levers of pressure on business leaders to pay the ransom: the removal of access to data, and the threat of public disclosure of confidential data, with the reputational and regulatory fallout that entails.

Over the years a number of ransomware imitators have appeared. We’ve seen fake ransomware that simply presents a ransom note without bothering to encrypt any data, hoping that victims will pay regardless.

WannaCry was self-propagating malware that spread around the world in May 2017. Although it did encrypt data, the small number of shared Bitcoin wallets to which ransoms were to be paid meant the attackers had little way of knowing which victims had paid and to whom decryption keys should be released.

The NotPetya malware of June 2017 purported to be ransomware and spread autonomously through networks. While it encrypted files and displayed a ransom note, it was in reality a destructive attack: the unique ID in the note was irrelevant to the encryption process, and the malware wiped as well as encrypted critical data, rendering it unrecoverable even with the correct decryption key.

Ransomware is not just a financial crime. It impacts those who are affected by the disruption to essential services. People unable to access vital data or work are left feeling anxious and stressed, while IT departments working to resolve the situation suffer additional stress and risk burnout. On a human level, inevitably some people lose irreplaceable data such as photos of loved ones or projects to which they have devoted many months or years of work.

Lessons for businesses and industry

The IT landscape in 2024 is very different from that of 1989 or 2004. Improved software engineering and patch management mean it’s more difficult for ransomware to infect systems through unpatched web browser vulnerabilities. Conversely, years of password breaches have put reused or easily guessed passwords into criminals' hands, meaning the human user is increasingly the point of ingress.

We should not feel powerless in the face of ransomware. Law enforcement activity has seen many ransomware operators arrested and charged. Others who have evaded arrest have been subjected to international sanctions. Infrastructure used to coordinate attacks, along with cryptocurrency wallets, has been seized. Antivirus detection has also advanced over the years; while some malware may slip past detection, modern endpoint protection software constantly watches for unknown programs attempting to encrypt files without permission.

The Achilles' heel of ransomware is backups. Data that is backed up and stored offline can be used to restore files that have otherwise been corrupted or lost, negating any need to pay the ransom to retrieve them. The success of ransomware over the past 35 years is also the story of the failure to widely adopt backups that can restore files.

Looking to the future, it is unlikely that we will see the end of ransomware. Its profitability for criminals means that it is likely to continue to plague us for many years to come. It is also unlikely that it will stay the same. Criminals have proved remarkably inventive in devising new techniques and methods to improve the business model and evade detection of both them and their malware.

However, the cybersecurity industry is equally innovative, constantly developing new tools and strategies to combat these threats. By staying informed, adopting robust security measures, and collaborating globally, we can mitigate the risks and build a more resilient digital future.

We've compiled a list of the best cloud backup services.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/rlaoZ1V

Organizations increasingly depend on accurate insights from their data to drive decisions, fuel innovation and maintain their competitive edge. Yet, the ability to extract meaningful, high-quality insights from this data is dependent on effective data governance.

Implementing data governance is critical, but like all data initiatives, it requires internal adoption and organizational fit. Generative AI is emerging as a way to transform how organizations streamline data management processes.

Data governance and its challenges

Effective data governance is the backbone of data-driven decision-making, but it is more than just a process. It is a strategic framework that ensures data is accessible, secure and aligned with organizational goals.

Data governance relies on four core pillars for success. The first is people, who define and execute the policies and standards. The second is process, which outlines the workflows for managing data. The third, technology, provides the tools for tasks like ingestion, integration, security and compliance. Finally, standards ensure data consistency and interoperability across the organization, enabling effective collaboration and decision-making and maintaining the quality and usability of data assets.

However, data governance is not a simple task and requires coordination and collaboration among stakeholders, such as business users, data teams and IT departments, along with the technical expertise and tools to implement, manage and monitor it. Managing data sources across platforms, applications and business departments requires a governance policy that is tailored to the complexity of the organization's structure.

Organizations face two primary challenges: the complexity of managing diverse data sources, and how to encourage widespread adoption of governance practices among users.

Organizations are required to handle data from various sources, such as customer databases, web traffic, or systems inherited through acquisitions, and that data can be formatted in many ways, from structured and semi-structured to unstructured. This diversity, along with the growing volume of data, makes integration, management and effective use difficult.

However, data is only useful if it is being used to serve business initiatives, and many enterprises still wrestle with user adoption. Business users often see governance as a burden rather than a benefit, limiting their access to data and therefore their ability to use it effectively.

They may also lack the skills to follow data governance policies. This can lead to non-compliance and the creation of data silos or shadow IT systems that compromise data quality and security.

How generative AI accelerates data governance

Leveraging generative AI helps organizations take a new approach to data governance. By automating, optimizing and simplifying core functions, generative AI enables them to realize the full potential of their data assets. Adopting techniques like deep learning and natural language processing, generative AI can also create relevant and accessible outputs including text, audio, and images.

It can transform data governance in several ways. By automating labor-intensive data management tasks such as ingestion, cleansing, classification and profiling to ensure data accuracy, it helps data teams efficiently scale data management. It also aids data discovery by providing metadata, lineage and context information, generating natural language summaries for all data assets to make it easier for users and businesses to understand data value.
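
To give a flavor of what that automation can look like in practice, here is a minimal Python sketch that profiles a table and asks a text-generation model to draft a plain-language description for a data catalog. The `generate_text` callable, the column names and the catalog structure are hypothetical placeholders for illustration, not a reference to any specific vendor's tooling.

```python
import pandas as pd

def profile_table(df: pd.DataFrame) -> dict:
    """Collect basic metadata for a data asset: columns, types, null rates."""
    return {
        "columns": list(df.columns),
        "dtypes": {col: str(dtype) for col, dtype in df.dtypes.items()},
        "null_rates": df.isna().mean().round(3).to_dict(),
        "row_count": len(df),
    }

def describe_asset(name: str, profile: dict, generate_text) -> str:
    """Turn raw profile metadata into a catalog-friendly summary.
    `generate_text` is a hypothetical callable wrapping whichever
    generative model the organization has approved."""
    prompt = (
        f"Write a two-sentence, business-friendly description of the dataset "
        f"'{name}' for a data catalog, based on this profile: {profile}"
    )
    return generate_text(prompt)

# Example usage with a toy table and a stub model
orders = pd.DataFrame({"order_id": [1, 2], "amount_gbp": [120.5, None]})
summary = describe_asset(
    "orders",
    profile_table(orders),
    generate_text=lambda prompt: f"[model draft] {prompt[:60]}...",
)
print(summary)
```

The same pattern extends naturally to lineage notes and classification labels: the profiling step stays deterministic and auditable, while the generative step only rewrites that metadata into language business users can act on.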

This accessibility fosters a more inclusive data culture across a business and delivers real operational benefits. By providing natural-language recommendations or suggestions alongside analysis results, generative AI makes insights accessible to both technical and non-technical users, helping them maximize the impact of the data and ensure it is effectively leveraged for decision-making and innovation.

By enabling users to interact with data effectively, generative AI can ultimately increase the adoption of governance practices, and foster a data-driven culture across the organization. This not only enhances data quality but also strengthens security and promotes seamless integration across systems.

Data trust and its role in governance

Data trust is the mission-critical outcome of effective data governance. In an environment where data is increasingly shared across departments and even with external partners, ensuring trust in data for all purposes is essential. Trust is built through transparency in data management practices, clear policies on data access and robust security protocols.

Generative AI can play a significant role in enhancing data trust by providing continuous transparent monitoring, automated auditing, and anomaly detection to ensure data integrity and compliance with standards. AI-powered insights can validate the data’s accuracy which helps to maintain trust as the data moves across different systems and teams.

Gen AI in decentralized data governance

As organizations adopt modern IT paradigms like data mesh and data fabric, data governance models are shifting from centralized to decentralized or federated frameworks.

In decentralized models, individual business units retain autonomy while following governance principles. Federated models strike a balance, with a central data team providing guidelines and decentralized teams managing data at the local level.

Generative AI is particularly well suited to these frameworks, acting as a bridge between central governance bodies and decentralized teams. It facilitates communication, ensures alignment of goals, and provides localized, tailored insights while adhering to enterprise-wide standards.

Effective data governance is essential for unlocking the full potential of an organization's data, but managing complexity and encouraging user adoption remain significant challenges. Generative AI gives data teams a powerful tool to deliver value from their organization's data to business users efficiently and accessibly.

Generative AI bridges the gap between oversight and autonomy by ensuring data quality, bolstering security and supporting robust, bespoke data governance models. Embracing this technology enables organizations to overcome common governance challenges, drive innovation, and maximize the value of their data assets to ensure continued business competitiveness.

We've compiled a list of the best AI tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/jmRktGx

There's no better way to start 2025 than talking about the future of tech in the beautiful city of Barcelona, and you can take part too. That's because TechRadar will be hosting a live Future Panel Discussion at ISE 2025 in Barcelona on February 4, where our panel of experts will reveal the innovations that matter most and where they're going to take us next.

That's not the only reason to attend ISE 2025, although we think it's a pretty good one. ISE is the leading event for the AV industry, and this year's event will feature ground-breaking product launches, expert insights and inspiring keynotes from across the industry. And it's all taking place in Barcelona, whose historic buildings will be transformed with astonishing projection mapping displays for the duration of the event.

How you can discover the future

The live panel discussion will be followed by an audience Q&A on the future of tech, and it'll be hosted by our very own managing editor of lifestyle, Josephine Watson. She'll be joined by a panel of the most insightful futurists and technology experts, including the globally acclaimed digital analyst and visionary Brian Solis; Quayola, the innovative multimedia artist, ISE 2025 keynote speaker and chosen artist for this year’s projection mapping at the world-heritage Casa Batlló; and Fardad Zabetian, global business technologist and CEO of KUDO, whose AI-based real-time language interpretation will be used across ISE's many stages and events.

The panel discussion will ask the experts: what will the world look like in 20 or 30 years' time? How will technology shape our world and evolve around us, and what effects will it have on the live events, concerts and sports of the future? How will technology transform our lives for the better?

The destination for innovation

ISE is where the future happens, and its 2024 event was a real record-breaker: 74,000 people from 162 countries came to see innovation in person, the most visitors in the event's 20-year history. And this year's event promises to be even better, with cutting-edge tech from over 1,400 exhibitors and an exceptional content programme too. From smart home tech to high-end audio, education and learning solutions to projectors, it's where you can get up close and personal with the very latest tech and get expert insight from industry insiders too.

ISE is much more than a trade show forum. It's where the audiovisual, systems integration, lighting, live events and IT industries all come together, and when you factor in its online audience as well as its in-person attendees it has an audience impact of over 1.387 billion people.

People come to ISE for three key reasons: to be inspired; to explore trends and developments in the industry for their professional development; and to discover new equipment, products, services and suppliers. It's an exceptional event, and it's exceptionally affordable too.

See the future for less as an ISE Early Bird

Your attendee ticket to ISE 2025 covers all four days and gives you access to ISE 2025's many attractions, including:

  • Eight halls, seven Technology Zones
  • Innovation Park
  • Discovery Zone
  • Audio Demo Rooms and the Outdoor Sound Experience
  • ISE 2025 Keynotes (Tuesday & Wednesday)
  • Free-to-attend sessions on the ISE Live Events Stage, AVIXA Xchange LIVE, the CEDIA Smart Home Technology Stage and Congreso AVIXA
  • A free public transport pass, collected on-site
  • And much much more…

An attendee ticket is normally €215, but use the code ISE2025trmag on the registration page here and you can get your ticket for free.

Once you've registered for the show you can then choose to add a Content Day Pass for as little as €385, which gives you access to all Summits and Track Sessions for that day, or an All-Conference Pass, which gives you access to all Summits, Track Sessions and the Smart Home Technology Conference too. Details of both kinds of conference passes are available online here.



from Latest from TechRadar US in News,opinion https://ift.tt/plXUbFO

Artificial Intelligence (AI) has rapidly evolved into a cornerstone of technological and business innovation, permeating every sector and fundamentally transforming how we interact with the world. AI tools now streamline decision-making, optimize operations, and enable new, personalized experiences.

However, this rapid expansion brings with it a complex and growing threat landscape—one that combines traditional cybersecurity risks with unique vulnerabilities specific to AI. These emerging risks can include data manipulation, adversarial attacks, and exploitation of machine learning models, each posing serious potential impacts on privacy, security, and trust.

As AI continues to become deeply integrated into critical infrastructures, from healthcare and finance to national security, it’s crucial for organizations to adopt a proactive, layered defense strategy. By remaining vigilant and continuously identifying and addressing these vulnerabilities, businesses can protect not only their AI systems but also the integrity and resilience of their broader digital environments.

The new threats facing AI models and users

As the use of AI expands, so does the complexity of the threats it faces. Some of the most pressing threats involve trust in digital content, backdoors intentionally or unintentionally embedded in models, traditional security gaps exploited by attackers, and novel techniques that cleverly bypass existing safeguards. Additionally, the rise of deepfakes and synthetic media further complicates the landscape, creating challenges around verifying authenticity and integrity in AI-generated content.

Trust in digital content: As AI-generated content slowly becomes indistinguishable from real images, companies are building safeguards to stop the spread of misinformation. What happens if a vulnerability is found in one of these safeguards? Watermark manipulation, for example, allows adversaries to tamper with the authenticity of images generated by AI models. This technique can add or remove invisible watermarks that mark content as AI-generated, undermining trust in the content and fostering misinformation—a scenario that can lead to severe social ramifications.

Backdoors in models: Because AI models are openly shared through sites like Hugging Face, a frequently reused model containing a backdoor could have severe supply chain implications. A cutting-edge method developed by our Synaptic Adversarial Intelligence (SAI) team, dubbed ‘ShadowLogic,’ allows adversaries to implant codeless, hidden backdoors into neural network models across any modality. By manipulating the computational graph of the model, attackers can compromise its integrity without detection, with the backdoor persisting even when a model is fine-tuned.

Integration of AI into High-Impact Technologies: AI models like Google’s Gemini have proven to be susceptible to indirect prompt injection attacks. Under certain conditions, attackers can manipulate these models to produce misleading or harmful responses, and even cause them to call APIs, highlighting the ongoing need for vigilant defense mechanisms.

Traditional Security Vulnerabilities: Common vulnerabilities and exposures (CVEs) in AI infrastructure continue to plague organizations. Attackers often exploit weaknesses in open-source frameworks, making it essential to identify and address these vulnerabilities proactively.

Novel Attack Techniques: While traditional security vulnerabilities still pose a large threat to the AI ecosystem, new attack techniques are a near-daily occurrence. Techniques such as Knowledge Return Oriented Prompting (KROP), developed by HiddenLayer’s SAI team, present a significant challenge to AI safety. These novel methods allow adversaries to bypass conventional safety measures built into large language models (LLMs), opening the door to unintended consequences.

Identifying vulnerabilities before adversaries do

To combat these threats, researchers must stay one step ahead, anticipating the techniques that bad actors may employ—often before those adversaries even recognize potential opportunities for impact. By combining proactive research with innovative, automated tools designed to expose hidden vulnerabilities within AI frameworks, researchers can uncover and disclose new Common Vulnerabilities and Exposures (CVEs). This responsible approach to vulnerability disclosure not only strengthens individual AI systems but also fortifies the broader industry by raising awareness and establishing baseline protections to combat both known and emerging threats.

Identifying vulnerabilities is only the first step. It’s equally critical to translate academic research into practical, deployable solutions that operate effectively in real-world production settings. This bridge from theory to application is exemplified in projects where HiddenLayer’s SAI team adapted academic insights to tackle actual security risks, underscoring the importance of making research actionable, and ensuring defenses are robust, scalable, and adaptable to evolving threats. By transforming foundational research into operational defenses, the industry not only protects AI systems but also builds resilience and confidence in AI-driven innovation, safeguarding users and organizations alike against a rapidly changing threat landscape. This proactive, layered approach is essential for enabling secure, reliable AI applications that can withstand both current and future adversarial techniques.

Innovating toward safer AI systems

Security around AI systems can no longer be an afterthought; it must be woven into the fabric of AI innovation. As AI technologies advance, so do the methods and motives of attackers. Threat actors are increasingly focused on exploiting weaknesses specific to AI models, from adversarial attacks that manipulate model outputs to data poisoning techniques that degrade model accuracy. To address these risks, the industry is shifting towards embedding security directly into the development and deployment phases of AI, making it an integral part of the AI lifecycle. This proactive approach is fostering safer environments for AI and mitigating risks before they manifest, reducing the likelihood of unexpected disruptions.

Researchers and industry leaders alike are accelerating efforts to identify and counteract evolving vulnerabilities. As AI research migrates from theoretical exploration to practical application, new attack methods are rapidly moving from academic discourse to real-world implementation. Adopting “secure by design” principles is essential to establishing a security-first mindset, which, while not foolproof, elevates the baseline protection for AI systems and the industries that depend on them. As AI revolutionizes sectors from healthcare to finance, embedding robust security measures is vital to supporting sustainable growth and fostering trust in these transformative technologies. Embracing security not as a barrier but as a catalyst for responsible progress will ensure that AI systems are resilient, reliable, and equipped to withstand the dynamic and sophisticated threats they face, paving the way for future advancements that are both innovative and secure.

We've compiled a list of the best identity management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/9kvPowJ

Christmas has been and gone, but January is once again proving to be the most wonderful time of the year for Android phone fans. Following the launch of the OnePlus 13 series last week, Honor has debuted its own flagship offering for 2025 – the Honor Magic 7 Pro – on the international stage.

Boasting a powerful Snapdragon 8 Elite chipset, a supersized 5,270mAh silicon-carbon battery, and an eye-friendly 6.8-inch display, the Magic 7 Pro sports hardware specs to rival the best Android phones on the market right now (OnePlus 13 included). However, as promised, it's Honor’s suite of decidedly unique software features that really catch the eye this year.

Among the most interesting is Deepfake Detection, an on-device security tool that uses AI to scan for facial trickery during video calls. Honor teased the innovative new feature last year, and it finally ships with the Magic 7 Pro alongside numerous other “human-centric” AI tools, including AI Translation, Real-time Transcript, and Magic Portal 2.0.

The Magic 7 Pro’s cameras have been decked out with some neat AI wizardry, too. The snappers themselves comprise a 50MP wide lens (f/1.4-2.0, 23mm), a 50MP ultra-wide lens (f/2.0, 12mm), and a 200MP periscope telephoto lens (f/2.6, 69mm), but Honor’s proprietary AI Image Engine brings various AI-powered tricks to each one.

The Honor Magic 7 Pro in Lunar Shadow Grey (Image credit: Honor)

The brand’s Harcourt Portrait mode – which promises to deliver studio-quality portraits in everyday lighting scenarios – has been carried over from the Honor 200 Pro and Honor Magic V3 to work with the Magic 7 Pro’s wide and telephoto lenses. The new flagship also introduces three new software-based photography features: AI Motion Sensing Capture, HD Super Burst, and AI Super Zoom.

The first two let you capture high-speed movements or sequences in optimal clarity. AI Super Zoom, meanwhile, aims to enhance the detail of natural landscape shots taken at 30x or more using generative AI.

We’re currently testing the latter tool and look forward to sharing our impressions in an upcoming report.

Magic Portal 2.0 on the Honor Magic 7 Pro (Image credit: Honor / Future)

Other key features of the Magic 7 Pro include its industry-leading durability credentials – specifically IP68 water and dust resistance and IP69 resistance to high-pressure, high-temperature water jets – and pre-installed integration with the Google Gemini app. As for color options, you've got two to choose from: Lunar Shadow Grey and Black.

Honor’s latest flagship launches today (January 15) in the UK for £1,099.99, the same launch price as last year’s Honor Magic 6 Pro. Given Honor's recent release strategies, availability in the US and Australia seems unlikely.

The Magic 7 Pro also releases alongside a cheaper, lower-spec variant – the Honor Magic 7 Lite – which is now available in the UK for £399.99. Stay tuned for our review of its big brother in the coming days.


from Latest from TechRadar US in News,opinion https://ift.tt/b7pCklI

The EU AI Act came into force earlier this year, marking a major milestone as the first regulation of its kind for this emerging technology. While the Act has raised concerns about compliance costs and potential impacts on innovation, its overarching goal is to position the EU as the “global hub for trustworthy AI” and reduce risks associated with the new technology.

Although the Act will affect many industries, its immediate impact on financial services (FS) may seem less significant at first. The FS sector is already heavily regulated to ensure the safety and soundness of the financial system and protect consumers. However, there’s room for improvement in the eyes of the banks. Mitek’s 2024 Identity Intelligence Index found over a third (36%) of banks want better clarity on new regulations to enhance customer protection.

So, while the EU AI Act’s impact on banks may be limited for now, the industry faces a fast-evolving regulatory landscape that will increasingly shape its future. Adapting to these changes will demand greater flexibility in managing emerging technologies and compliance complexities.

Now is the time for banks to refine their strategies, leveraging innovative processes and technology to combat identity theft and safeguard their customers. Let’s explore how they can adapt to meet these challenges effectively.

Putting safety first

The Mitek Index found that, on average, 76% of banks surveyed believe fraud cases and scams have become more sophisticated. Among the challenges and concerns leaders face in their roles today, the rise of AI-generated fraud and deepfakes (37%) took the top spot. Billions were lost to fraud last year, including more than half a billion pounds in the UK, $8.8 billion in the US, and €1.8 billion in Europe.

Some banks may not even realize they are falling victim to these advanced tactics. Current anti-fraud systems and processes often lack the capability to detect deepfakes and other AI-driven threats, leaving institutions fighting an invisible enemy. Dealing with unknowns creates a rising tension within banks, leaving these organizations fearing that every transaction could be fraudulent.

Despite acknowledging the need to address these threats, many banks struggle to act quickly due to limited expertise and reliance on siloed, outdated systems that cannot keep up with the fluidity of modern AI-driven fraud tactics. Compounding this issue is the rise of increasingly sophisticated fraud tactics, including the creation of "fake" customers using synthetic identities or AI-generated personas. Banks often fail to fully grasp the scope of fake profiles, leaving critical gaps in their defenses.

To combat this, banks are investing in technology to analyze customer interactions and detect fraud. Success requires a balanced approach that prioritizes customer experience, compliance, and fraud prevention equally. By leveraging data and weighing customer lifetime value against fraud risks, banks can adopt a more nuanced strategy.

The stakes are high: once a fraudulent or synthetic identity successfully opens an account, it could persist indefinitely, posing long-term risks to both customer security and operational costs. By adopting this nuanced approach, banks will be able to make the necessary changes required to keep customers safe, and on their side, amid an increasingly complex fraud landscape.

The build or buy conundrum facing banks

Compliance is more than a tick-box exercise – regulations are needed as they solve real world problems. Financial institutions should start viewing fraud prevention and regulatory compliance as long-term, strategic opportunities to differentiate and bolster their cybersecurity.

To satisfy regulators, safeguard the customer experience, and stand toe-to-toe with fraudsters, financial services organizations should have a clear picture of the scale and nature of fraud within their systems. This can be achieved through specific techniques such as advanced anomaly detection using AI tools and machine learning, analyzing transaction patterns for irregularities, and implementing tools like identity verification systems to spot synthetic or stolen identities.
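
To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest to flag transactions whose patterns deviate from historical behavior. The features, values and contamination rate are illustrative assumptions, not a description of any particular bank's system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: amount, hour of day, and
# number of transactions from the same device in the past 24 hours.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(60, 20, 500),   # typical purchase amounts
    rng.normal(14, 3, 500),    # mostly daytime activity
    rng.poisson(2, 500),       # modest device reuse
])
suspicious = np.array([[4_000, 3, 40]])  # large amount, 3am, heavy device reuse

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers; the suspicious
# transaction should be flagged because it sits far outside the history.
print(model.predict(suspicious))
print(model.decision_function(suspicious))  # lower score = more anomalous
```

In practice the model output would be one signal among several, weighed against identity verification results and customer lifetime value before any action is taken.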

Banks must constantly test the edge of what's possible to balance both priorities: giving the customer a frictionless ‘phy-gital' experience while also identifying fraudulent activity. However, we have reached a tipping point where it’s no longer feasible for internal IT teams in banks to keep up with this increasing volume of regulations through manual, inefficient, and expensive processes that don’t meet expectations for seamless user journeys.

Align with regulatory standards, today and tomorrow

Banks should work with technology vendors to ensure product roadmaps align with regulatory standards, today and tomorrow. The FS industry has an opportunity to collaborate, leveraging technology to develop better identity lifecycle strategies.

Multi-layered fraud detection allows banks to anticipate the constantly changing identity landscape, helping to protect vulnerable customers from increasingly sophisticated fraudulent attacks. To that end, fraud prevention must focus on converting raw data - such as login attempts, transaction anomalies, and device usage patterns - into actionable intelligence.

While banks can all individually work on protecting their own customers, it’s work that is not as efficient if done alone. To be more effective, the financial services industry needs to establish an identity intelligence ecosystem where banks and other financial institutions can collaborate and share fraud threats in real time. By working together and exchanging data on emerging fraud patterns, suspicious activities, and known threats, banks can enhance their ability to detect and prevent fraud more quickly, improving security for all customers.

Viewing regulation as a commercial opportunity

With regulatory requirements emerging and tightening across various sectors, banks and other financial institutions find themselves between a rock and a hard place. The good news is that banks have the hard-earned experience and many tools at their disposal to develop robust compliance programs and effectively navigate these regulatory challenges.

With the right combination of resources, institutions can develop scalable programs that adapt to future regulatory changes. While delivering compliance and risk programs is challenging, firms that build a cohesive strategy today will have a much easier time tomorrow. From there, establishing a fraud intelligence ecosystem between organizations and law enforcement could be essential to help all banks stay on top of regulations and keep their customers safe.

We've compiled a list of the best identity management software tools currently available.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/932UQhk


  • Nominet warns customers about a recent cyberattack
  • The company says the attackers abused an Ivanti zero-day
  • So far, there is no evidence of data tampering or backdoor dropping

Top domain registrar Nominet has warned its customers of a cyberattack it suffered as a result of a zero-day vulnerability in Ivanti VPN products.

Citing a letter sent to affected customers, The Register reports that the company believes the criminals made their way in using the recently highlighted Ivanti security flaws.

“The entry point was through third-party VPN software supplied by Ivanti that enables our people to access systems remotely,” the letter apparently reads. “The unauthorized intrusion into our network exploited a zero-day vulnerability.”

Abusing a zero-day

Nominet says it has not yet found any evidence of data leaks or theft, and that the attackers did not plant any backdoors or other malware on its systems.

"Aided by external experts, our investigation continues, and we have put additional safeguards in place, including restricted access to our systems from VPN," it said.

The company confirmed its systems are operating normally and that the attack did not cause any significant disruption.

While it was not specifically mentioned, The Register speculates the attackers could have abused CVE-2025-0282, a zero-day recently found in Ivanti Connect Secure, Policy Secure, and Neurons for ZTA gateways.

Ivanti had recently warned customers of a critical vulnerability impacting its VPN appliances that was being actively exploited in the wild to drop malware. In a security advisory, the company said it had uncovered two vulnerabilities - CVE-2025-0282 and CVE-2025-0283 - both of which impact Ivanti Connect Secure VPN appliances.

The former was given a severity score of 9.0 (critical) and is described as an unauthenticated stack-based buffer overflow. “Successful exploitation could result in unauthenticated remote code execution, leading to potential downstream compromise of a victim network,” the advisory said.

The company urged customers to apply the patch immediately, and provided further details about the threat actors and their tools.

Via The Register


from Latest from TechRadar US in News,opinion https://ift.tt/y2cwbIx

Few other communication tools have as much reach as SMS. While we might only sporadically check emails and instant messages from our WhatsApp groups and other channels, the simplicity and universal nature of SMS gives it a wide array of business use cases. Whether it's communicating about appointments, bookings, or an order’s delivery status, the personal nature of these messages - and the fact we all have a device in our pocket to receive them - has long made SMS a powerful tool for brand-customer interactions.

Many consumers like SMS because they know it will work on any smartphone their family and friends are using. GSMA estimates we’ll see 6.3 billion mobile subscribers by 2030 – and with phones so integral to many of our lives, this represents billions of opportunities for businesses to engage with consumers wherever they are.

But while the first SMS was sent in 1992, little about the technology has changed since. MMS (‘multimedia messaging service’) arrived as an evolution of SMS, but sending those messages was never particularly cheap, so it didn’t take off in the way its online messaging successors did. SMS remains resolutely text-oriented, held back by various restrictions and a 160-character limit - hence the name ‘short message service’.
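
For context on that 160-character ceiling: with the standard GSM-7 alphabet a single SMS carries up to 160 characters, and once a message is split into a concatenated SMS each segment carries roughly 153 characters because a header consumes part of the payload. The short sketch below estimates the segment count under those assumptions (it ignores extended GSM-7 characters and Unicode, which reduce the limits further).

```python
def sms_segments(text: str) -> int:
    """Estimate SMS segment count assuming plain GSM-7 encoding:
    160 characters fit in one message; concatenated messages lose
    a few bytes to the user-data header, leaving about 153 each."""
    length = len(text)
    if length <= 160:
        return 1
    return -(-length // 153)  # ceiling division

print(sms_segments("Your order has shipped."))  # 1 segment
print(sms_segments("x" * 400))                  # 3 segments
```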

So although SMS messages have very high open rates - as much as 98% - they can lack the marketing sophistication of other platforms, which limits their engagement rates. With little scope for experimentation, businesses need an SMS alternative that makes marketing messages more useful, personalized, and engaging.

So, what comes next?

RCS: A revolutionary format

It all changed with the emergence of RCS, or ‘rich communication services’. As the name suggests, these advanced messages are rich with additional features, building on the original SMS format. RCS has been around since 2008, but it has only been adopted by the main mobile platforms relatively recently, finally bringing it into the mainstream.

Apple, for example, adopted RCS in Messages with iOS 18 in September 2024, while Google also supports the technology via Android, even if a user’s phone carrier doesn’t. With 1 billion RCS users via Google Messages alone, and as many as 2.5 billion monthly active users and rising according to Omdia, RCS traffic is forecast to grow by more than a trillion messages year on year. Now is the time for brands to seriously consider this channel, especially given its advanced capabilities.

Interactive, shoppable moments: What differentiates RCS from SMS is its ability to deliver more visually compelling content. For example, with RCS you can send far larger media files, such as high-quality photos, videos, and GIFs, while the format supports longer messages without the 160-character limit of SMS or the need to split them into segmented texts. This all makes for a more expansive form of communication.

RCS messaging also supports dynamic features like buttons, carousels, and other interactive elements, including suggested replies and call-to-action (CTA) buttons that streamline customer interactions. This creates a seamless, shoppable experience with few barriers to purchase, significantly improving customer interaction and conversions. One paint brand that moved an SMS campaign to RCS saw its revenue increase by 115%, while its clickthrough rate rose from less than 3% with SMS to 21% with RCS.
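
To show what those interactive elements look like under the hood, here is a minimal sketch of a rich-card payload in the general shape used by Google's RCS Business Messaging message format. The field names, values and URLs are simplified assumptions for illustration; a real integration would follow the agent platform's official documentation and authentication flow.

```python
import json
import uuid

def build_rich_card(title: str, description: str, image_url: str, shop_url: str) -> dict:
    """Assemble an illustrative RCS rich card with a suggested reply and a
    call-to-action button. Treat the structure as a sketch, not a spec."""
    return {
        "contentMessage": {
            "richCard": {
                "standaloneCard": {
                    "cardOrientation": "VERTICAL",
                    "cardContent": {
                        "title": title,
                        "description": description,
                        "media": {
                            "height": "MEDIUM",
                            "contentInfo": {"fileUrl": image_url},
                        },
                        "suggestions": [
                            {"reply": {"text": "Show me more",
                                       "postbackData": "more_colours"}},
                            {"action": {"text": "Shop now",
                                        "postbackData": "shop_now",
                                        "openUrlAction": {"url": shop_url}}},
                        ],
                    },
                }
            }
        }
    }

message = build_rich_card(
    title="Spring refresh sale",
    description="20% off our most popular shades this week only.",
    image_url="https://example.com/paint.jpg",
    shop_url="https://example.com/shop",
)
message_id = str(uuid.uuid4())  # each outbound agent message gets a unique ID
print(json.dumps(message, indent=2))
```

The suggested reply and the URL button are exactly the tap targets that remove friction from the purchase journey described above, and the postback data lets the brand attribute each tap to the campaign.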

Security and trust: With scams rife, consumers are increasingly skeptical of texts from random numbers that prompt them to click unfamiliar links. This uncertainty and perceived risk becomes a thing of the past with RCS, thanks to branded messaging - such as attaching logos and taglines - and verified sender IDs. Alongside consumers being able to verify the legitimacy of a message’s sender, RCS is also backed by encryption between sender and recipient. By establishing such high security and privacy standards, brands can create a trusted relationship and form of communication with customers, while enhancing brand credibility. This in turn leads to greater response rates and improved customer engagement.

A broad reach: With RCS, there are no barriers to entry. Unlike many other forms of digital communication, users don't need to download a new app or set up a new account; they can receive an RCS message just as they would any other text message. Even if a user’s device doesn’t support RCS, the message will automatically be sent as an SMS.

This allows businesses to upgrade their messaging, refreshing a pre-existing, ubiquitous communication channel, but with the reassurance they can still reach the exact same audience.

Improved deliverability: RCS also has better deliverability than SMS. Mobile carriers send the latter at a lower priority than traffic with much tighter latency tolerances, like voice and data. RCS, on the other hand, can be sent over any internet connection, which means far lower chances of missed deliveries and of delays between messages being sent and being opened by users.

Analytics and insights: RCS puts marketers in the driving seat, providing businesses with detailed delivery and read receipts alongside other analytics. This means brands can stay agile, tracking and analyzing message effectiveness and optimizing communication strategies. It also represents another touchpoint - and opportunity - for brands to better understand their customers and their preferences, leveraging first-party data consensually shared by customers.

The next wave of marketing

As a powerful communication tool with an already established audience base, RCS is expected to reshape the current SMS marketing landscape. This effect will only increase as more countries expand their support for it, beyond the 17 countries where it is currently in place (as of October 2024). According to Juniper Research, 60% of mobile subscribers were already RCS-capable as of 2024. By using RCS, brands can boost engagement, efficiency, and trust, while offering a more dynamic experience that truly leaves an impression on customers. And in a landscape filled with plenty of competing digital noise, RCS represents an opportunity to stand out from the crowd.

We've compiled a list of the best content marketing tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/yoU0utz

Is 2025 the year we move in earnest from GenAI hype to GenAI results? Recent research suggests yes, particularly for the UK, which could see almost a doubling in economic growth over the next 15 years thanks to this cutting-edge technology.

However, every tech leader knows they can't predict every advance on the horizon, even while acknowledging their responsibility to plan for the future as much as possible. Across industries, leaders are faced with taking the plunge and investing in technologies like AI tools to future-proof their businesses. But without the right strategy and plan for adoption, you can end up adrift without a clear idea of where you're headed next.

Walking this tightrope takes a pragmatic approach, leveraging the best available tools while also maintaining flexibility and control. Practical GenAI implementation isn't about rigidly committing to one path. Instead, it's about creating an AI ecosystem that adapts and evolves with your business needs. That could mean choosing platform-agnostic solutions to avoid vendor lock-in, embracing open source to benefit from flexibility and transparency, adopting hybrid and multi-cloud strategies to ensure the best environment for your AI workload or focusing on right-sizing your AI solutions.

Pillars for practical GenAI implementation

Partnering with technology providers can ensure customers harness the power of AI, tackling the complexity, risk, and cost of diving into and supporting AI now and in the future. By offering flexible consumption models, an end-to-end AI-optimized IT infrastructure portfolio, an open ecosystem of deep partnerships with other leading AI companies, and a commitment to open standards, they can support a GenAI implementation that aligns with a business's unique needs, risk tolerance and long-term vision. In short, they can help ensure a strategy that is not just cutting-edge but also pragmatic and sustainable.

We can do that for our customers because of the lessons learned on our own AI journey. By implementing AI within our own operations, we've gained first-hand experience of its challenges and opportunities, giving us a deep understanding of what works and what doesn't in real-world business settings. Our "customer zero" approach, where we become our own first and best customer, ensures that our AI solutions are not just theoretical concepts but are grounded in practicality, refined through real-world experience and ready to deliver tangible results for our clients.

Through that practical experience, we developed these five guiding principles to help you more rapidly and efficiently deploy AI technologies that will serve your business today and prepare you for the future. These pillars for practical GenAI implementation are a testament to our own journey and our commitment to helping customers simplify complex technology.

1. Enterprise data is your differentiator

Never lose sight of the fact that your data is a goldmine of insights, and unlike your competitors, you have exclusive access to it. You have a treasure trove of customer, operational, and market data – information that reflects your company's unique journey and expertise. This data is the secret to success in the AI race.

By building upon pre-trained models and customizing them with your proprietary data, your differentiator, you can gain a competitive edge through deeper customer insights (AI can analyze your customer data to uncover hidden patterns and predict future behavior), proactive risk management (AI can detect fraudulent transactions in real-time by analyzing customer patterns and flagging anomalies) and enhanced decision making (AI can analyze vast amounts of data to identify trends, forecast demand, and optimize pricing strategies – giving you the insights you need to make smarter, faster decisions).

2. Respect data gravity

Although data may be a treasure trove, it's never found all in one pot. Data is highly distributed, with most of it residing on-premises and more than 50% of enterprise data generated at the edge.

For data to be effective, it must be near the applications and services that rely on it for efficient processing and analysis. It is better to yield to "data gravity" and bring AI to the data (the majority of which is on-prem) rather than move enterprise data to available computing resources. Most organizations are finding it more effective and efficient to train and run AI models on-prem to minimize latency, lower costs, and improve security. To turn data into actionable insights with AI, often in real time, a combination of on-premises, edge and cloud deployments is vital. For this reason, 66% of UK decision-makers prefer an on-prem or hybrid approach to AI use and procurement.

3. Right-size your AI infrastructure

There is no one-size-fits-all approach when it comes to AI. I've witnessed customers across multiple industries, in organizations of varying sizes, implement their AI in innumerable ways – from locally on devices and at the edge all the way to massive hyperscale data centers. Not all models are large, and not all AI workloads run in a data center. Or in the cloud. To avoid massively over- or under-provisioning, it will be important to right-size the AI solutions you adopt to your use case and requirements, so analyze your use cases and goals to determine the most appropriate infrastructure and model types.
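As a rough illustration of that analysis, the toy heuristic below maps a use case's latency needs, data-residency constraints and request volume to a deployment tier. The thresholds and tiers are assumptions chosen purely for the example, not a formal sizing methodology.

def suggest_deployment(latency_ms, data_must_stay_onsite, requests_per_day):
    """Toy rule of thumb mapping use-case requirements to a deployment tier."""
    if latency_ms <= 50:
        return "edge or on-device (small, specialized model)"
    if data_must_stay_onsite:
        return "on-prem cluster (mid-sized model, private data)"
    if requests_per_day > 1_000_000:
        return "hyperscale or hybrid cloud (large model, elastic capacity)"
    return "managed cloud service (right-sized model, pay as you go)"

print(suggest_deployment(latency_ms=30, data_must_stay_onsite=True, requests_per_day=5_000))
# -> edge or on-device (small, specialized model)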

4. Maintain an open, modular architecture

Equally important is a mindfulness that the AI landscape is constantly evolving, and that no one can predict its future course. This means that a rigid, closed system can quickly become obsolete. Therefore, maintaining an open, modular architecture will be crucial to help enterprises adapt to fast-paced changes in AI technologies and avoid being locked into outdated or inflexible architectures.

AI/GenAI workloads are a new class of workload – requiring a new class of open, modern innovation spanning the entire AI estate: data layers and lakes, compute, networking, storage, data protection and AI software applications. But it's entirely plausible, if not likely, that new GPU infrastructure, algorithmic advances, or other inventions could emerge in the future that would require enterprises to adapt. The worst mistake you can make today is to bet on and commit to a closed, proprietary, single-dimensional AI system that is not flexible.

Open-standards AI tools offer flexibility, transparency, and a vibrant community for support and innovation. By integrating open-standards solutions into their AI strategy, businesses can avoid being beholden to a single vendor and customize tools to meet their specific needs.

5. Forge a thriving AI ecosystem

No single vendor can solve every AI challenge; collaboration is key. AI is a composite of many technologies, intellectual capabilities, and services, which enterprises will need to mesh with each other to succeed. Be sure to embrace vendors that enable an open ecosystem of partners, from major AI players like Microsoft to silicon providers like NVIDIA and Intel to open-source leaders like Hugging Face.

Open ecosystems provide equal opportunity across the tech industry, support the creation of new GenAI breakthroughs, and give customers greater access to innovation and flexibility. Access to open models and technologies can accelerate progress and solve problems worldwide, fueling a global "innovation engine" across all corners of the industry, from individual developers and startups to public sector and enterprise organizations.

A real-world approach for real-world results

Successfully navigating a new landscape nearly always requires a pragmatic approach that balances excitement with realism, preparation and careful execution. Being able to realize value from new technologies demands the creation of strategic roadmaps, and when it comes to AI, the preparation, quality and storage needs of the data that feeds it have added importance. Don't be caught up in the feeling that you need to transform into an AI powerhouse overnight. Start by identifying a specific, achievable goal that has the capacity to generate business ROI, and strengthen the route to success with a clear vision and the right partnerships.

We've compiled a list of the best free cloud storage.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/PUCoWLG

The start of a new year presents an opportunity for businesses to take stock and evaluate the effectiveness of their data storage. As the w...

The start of a new year presents an opportunity for businesses to take stock and evaluate the effectiveness of their data storage. As the world continues to generate record volumes of data, particularly through the evolution of AI capabilities, it’s more important than ever that organizations ensure they safeguard against future storage challenges.

With surging data volumes, industry is set to face two key challenges in 2025: an impending data shortage crisis, and the environmental impact of data centers. However, there are actions organizations can take to navigate these challenges.

The explosion of global data will cause a data shortage crisis

The world is creating data at unprecedented volumes, with no signs of slowing down anytime soon. For reference, as many as 400 zettabytes of data are forecast to be generated in 2028 alone, reflecting a compound annual growth rate (CAGR) of 24%.

To put into perspective how large this quantity is, consider how many grains of sand there are on all the world’s beaches – the latest research indicates there are over seven sextillion. Research by the California Institute of Technology equates a single zettabyte of information to exactly that: the amount of sand across the world’s beaches. Multiply that by four hundred and we can begin to understand just how much data will be generated and processed by the world’s computers in 2028 alone.

With the development of AI tools continuing to mature and grow in scale globally, the value of data will increase, which will lead us towards storing more data, for longer periods of time. However, the storage install base is forecasted to have only a 17% CAGR, which is a significantly slower pace than the growth of data being generated. As it takes a whole year to build a hard drive, the disparity in growth rates will subsequently disrupt the global storage supply and demand equilibrium, causing a data shortage crisis.
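A back-of-the-envelope projection shows why those two growth rates matter. The sketch below indexes both data generated and storage capacity to 100 in 2024 (an assumption made purely for illustration) and applies the 24% and 17% CAGR figures cited above.

# Both series are indexed to 100 in 2024 purely for illustration;
# only the 24% and 17% CAGR figures come from the forecasts above.
data_index = storage_index = 100.0
for year in range(2025, 2029):
    data_index *= 1.24      # data generated, 24% CAGR
    storage_index *= 1.17   # storage install base, 17% CAGR
    print(f"{year}: data={data_index:6.1f}  storage={storage_index:6.1f}  "
          f"gap={data_index / storage_index:.2f}x")
# By 2028 data has grown roughly 2.4x against storage's roughly 1.9x,
# leaving demand about 26% ahead of supply on this simple model.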

Looking ahead, organizations will likely become less experimental and more strategic in their use of AI. Navigating this looming storage crisis will require businesses to start building long-term capacity plans now, to ensure adequate storage supply, and fully monetize investments in AI infrastructure.

Storage innovation is imperative to tackling the data center crunch and protecting the planet

As the global data boom continues unabated, it will eventually reach the point where data centers become overwhelmed. According to the UK’s National Grid, power demand from commercial data centers is expected to increase six-fold within the next 10 years alone. This increase in demand will clearly impact the capabilities and performance of data centers, resulting in a crunch in resources.

However, there are a number of barriers to tackling this issue, including financial, regulatory and environmental hurdles. These barriers will increasingly challenge and constrain the push for greater physical data center space and capacity.

According to CBRE, AI advancements are specifically projected to be a significant driving factor for future data center demand. To manage the rising need for power density, high-performance computing will require rapid innovation in data center design and technology.

That being said, it’s not just innovation in computing that is needed to help address this data crunch. The implementation of higher areal density hard drives, which expand the amount of data stored on a given unit of storage media, can enable greater data capacity in data centers. Investing in these drives can help data centers avoid the need to build new data storage sites, resulting in significant total cost of ownership (TCO) savings and reducing the future environmental impact of new centers.

As we look towards the year ahead and the potential obstacles that may affect business operations, there are key actions that organizations should implement now to be ahead of the curve.

Businesses should prioritize building robust long-term capacity plans, to minimize the future disruption caused by rapid global data growth. There are also huge benefits in investing in improved AI infrastructure and higher areal density hard drives, to effectively tackle the impact of increased demand on data centers.

As we end this year and approach the next, and as organizations map out their 2025 business plans, it’s critical they factor in implementing effective data storage solutions for the good of their performance, bottom line and the planet.

We've compiled a list of the largest SSDs and hard drives.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/VORIuFZ

Researchers have found a fake Telegram Premium Android app It is being distributed through a phishing page pretending to be a Russian ap...


  • Researchers have found a fake Telegram Premium Android app
  • It is being distributed through a phishing page pretending to be a Russian app store
  • Malware is capable of exfiltrating all sorts of sensitive information

Experts have detected a new piece of information-stealing malware posing as one of the most popular messaging apps around.

Cybersecurity researchers from CyFirma recently discovered an Android app pretending to be a premium version of Telegram, but which actually steals victims’ login credentials and sensitive information.

The researchers explained how back in 2022, when the Russian invasion of Ukraine began, the West imposed heavy sanctions on Putin’s regime. These sanctions meant Russians could not access Google's Play Store or Apple’s App Store. To provide Russian citizens with access to mobile software, the country’s Ministry of Digital Development, together with VK (the country’s social media behemoth and essentially a Facebook clone) created RuStore, a mobile app marketplace.

FireScam

CyFirma now claims someone created phishing websites on GitHub designed to look like RuStore. Victims who visit the website will first get a dropper module named GetAppsRu.apk, which lists apps installed on the device, gains access to device storage, and installs additional packages.

Among the additional packages is the main malware, called Telegram Premium.apk. This malware, dubbed FireScam, requests permissions to monitor notifications, clipboard data, SMS, and more. It also displays a fake Telegram login page to steal the credentials.

Furthermore, FireScam will monitor app activity, the clipboard, e-commerce transactions, and virtually anything else that could be useful. The data is then exfiltrated to a third-party server, where it’s filtered and transferred elsewhere; information deemed worthless is wiped, the researchers added.

CyFirma could not attribute FireScam to any known threat actor, but it did describe the operation as a “sophisticated and multifaceted threat” that “employs advanced evasion techniques.” There was no word on the number of potential victims. It also recommended users be careful when opening files from sources they’re not entirely familiar with, or when clicking on potentially dangerous links.

Edit, January 13 - After the publication of this article, a Google spokesperson reached out to confirm that the malware is not present in the Play Store:

“Based on our current detection, no apps containing this malware are found on Google Play. Android users are automatically protected against known versions of this malware by Google Play Protect, which is on by default on Android devices with Google Play Services," the spokesperson said. "Google Play Protect can warn users or block apps known to exhibit malicious behavior, even when those apps come from sources outside of Play."

Via BleepingComputer

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/Ktkuxhe

Great businesses are built on data. It's the invisible force that powers innovation, shapes decision-making, and gives companies a comp...

Great businesses are built on data. It's the invisible force that powers innovation, shapes decision-making, and gives companies a competitive edge. From understanding customer needs to optimizing operations, data is the key that unlocks insights into every facet of an organization.

In the past few decades, the workplace has undergone a digital transformation, with knowledge work now existing primarily in bits and bytes rather than on paper. Product designs, strategy documents, and financial analyses all live within digital files spread across numerous repositories and enterprise systems. This shift has enabled companies to access vast volumes of information to accelerate their operations and market position.

However, with this data-driven revolution comes a hidden challenge that many organizations are only beginning to grasp. As we look deeper into corporate data, organizations are uncovering a phenomenon that's as pervasive as it is misunderstood: dark data.

Gartner defines dark data as any information assets that organizations collect, process, and store during regular business activities but generally don't use for other purposes.

What makes dark data so insidious?

Dark data often contains a company's most sensitive intellectual property and confidential information, making it a ticking time bomb for potential security breaches and compliance violations. Unlike actively managed data, dark data lurks in the background, unprotected and often forgotten, yet still accessible to those who know where to look.

The scale of this problem is alarming: according to Gartner, up to 80% of enterprise data is “dark,” representing a vast reservoir of untapped potential and hidden risks.

Let's consider the information from annual performance reviews as an example. While official data is stored in HR software, other sensitive information is stored in various forms and across various systems: informal spreadsheets, email threads, meeting notes, draft reviews, self-assessments, and peer feedback. This scattered, often forgotten data paints a clear picture of the complex and potentially dangerous nature of dark data within organizations.

A single breach exposing this information could lead to legal liabilities and regulatory fines for mishandling personal data, damaged employee trust, potential lawsuits, competitive disadvantage if strategic plans or salary information is leaked, and reputational damage that could impact recruitment and retention.

The unintended consequences of AI

AI is changing how organizations handle dark data, bringing both opportunities and significant risks. Large language models are now capable of sifting through vast troves of unstructured data, turning previously inaccessible information into valuable insights.

These systems can analyze everything from email communications and meeting transcripts to social media posts and customer service logs. They can uncover patterns, trends, and correlations that human analysts might miss, potentially leading to improved decision-making, enhanced operational efficiency, and innovative product development.

However, this newfound ability to access data is also exposing organizations to increased security and privacy risks. As AI unearths sensitive information from forgotten corners of the digital ecosystem, it creates new vectors for data breaches and compliance violations. To make matters worse, the data being indexed by AI solutions often sits behind permissive internal access controls, and those same AI tools then make it widely available. As these systems become more adept at piecing together disparate bits of information, they may reveal insights that were never intended to be discovered or shared, which could lead to privacy infringements and potential misuse of personal information.

How to combat this growing problem

The key lies in understanding the context of your data: where it came from, who interacted with it, and how it's been used.

For instance, a seemingly innocuous spreadsheet becomes far more critical if we know it was created by the CFO, shared with the board of directors, and frequently accessed before quarterly earnings calls. This context immediately elevates the document's importance and potential sensitivity.

The way to gain this contextual understanding is through data lineage. Data lineage tracks the complete life cycle of data, including its origin, movements, and transformations. It provides a comprehensive view of how data flows through an organization, who interacts with it, and how it's used.

By implementing robust data lineage practices, organizations can understand where their most sensitive data is stored and how it is being accessed and shared. Combining AI-based content inspection with that lineage context allows organizations to quickly identify dark data and prevent it from being exfiltrated.
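As a rough sketch of how that combination might look, the example below pairs a simple per-file lineage record (creator, sharing history, last access) with a content sensitivity score standing in for AI-based inspection. The field names, threshold and scoring are illustrative assumptions, not a description of any particular product.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class LineageRecord:
    path: str
    created_by: str
    shared_with: list = field(default_factory=list)
    last_accessed: date = date(2020, 1, 1)
    sensitivity: float = 0.0   # 0..1, stand-in for AI content inspection

def is_dark_and_risky(rec, today):
    """Flag sensitive files nobody has touched in over a year for review."""
    stale = (today - rec.last_accessed).days > 365
    return stale and rec.sensitivity >= 0.7

rec = LineageRecord(
    path="finance/q3_board_pack_draft.xlsx",
    created_by="cfo@example.com",
    shared_with=["board@example.com"],
    last_accessed=date(2023, 6, 1),
    sensitivity=0.9,
)
print(is_dark_and_risky(rec, date(2025, 1, 15)))   # True

Even this crude rule surfaces the kind of stale, sensitive, over-shared files that typically make up dark data.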

We've compiled a list of the best document management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



from Latest from TechRadar US in News,opinion https://ift.tt/mHdUyXE

As CES 2025 draws to a close, there are a few key takeaways in the world of TVs including brighter OLEDs, like the LG G5 , bigger screens,...

As CES 2025 draws to a close, there are a few key takeaways in the world of TVs including brighter OLEDs, like the LG G5, bigger screens, like Hisense's mammoth 136-inch and 163-inch micro-LED TVs, as well as the introduction of new technologies like Samsung's 8K TV with an RGB micro-LED backlight.

One market, however, looks like it'll be just as competitive as it was in 2024, if not more so – and that's mini-LED TVs. Some of the best TVs on the market are mini-LED TVs. Highlights from 2024 included the Samsung QN90D, Hisense U8N and the ground-breaking Sony Bravia 9, but there were so many more models that could count themselves among the best mini-LED TVs.

2025 looks set to be another big year for mini-LED TVs, with some exciting tech innovations introduced at CES. Mini-LED TVs from Samsung, Hisense, TCL and Panasonic have been confirmed – and Sony hasn't even announced its sets yet!

The state of mini-LED in 2025

The Samsung QN90F TV

The Samsung QN90F (pictured here at CES 2025) is likely to be one of the best mini-LED sets of 2025, if it's anything like its predecessor. (Image credit: Future)

Samsung revealed its 2025 mini-LED TV lineup and the most eye-catching news was the introduction of Glare Free tech to the Samsung QN90F and QN990F, its 2025 flagship 4K and 8K models. The reflection beating tech (which is a matte screen) was first introduced in 2024 on the Samsung S95D, one of the best OLED TVs on the planet.

We were blown away by the effectiveness of the Glare Free tech on the S95D, so adding it to the QN90F – which we expect to have high brightness levels and great motion processing like its predecessor the QN90D – is likely to make it one of the best TVs for sport. This is one for sports fans to keep an eye on.

One of the other major reveals at CES was the introduction of Hisense's RGB mini-LED backlit TV, the 116-inch UX. Hisense says the new tech will provide bolder, more vibrant colors and 10,000 nits of peak brightness, while also being 10% more energy efficient. We saw the 116UX in person and it's as vibrant as promised.

This is likely to be a super-premium TV, with the 110-inch UXN with a standard mini-LED backlight retailing for a hefty $15,000 / £20,000 (roughly AU$24,000), so you can expect the 116UX to be even pricier – but it's still an exciting new technology.

TCL's 2025 backlight close-up

TCL's new and improved mini-LED backlight (pictured here at CES 2025) could improve picture quality for its TVs across the range. (Image credit: Future)

TCL also revealed a new type of mini-LED backlight for its 2025 mini-LED TV lineup, which demonstrated more precise backlight control and a brightness increase of up to a mind-blowing 50%, without compromising the image's darker areas.

Panasonic introduced the W95B in its 2025 TV lineup, while LG announced two mini-LED sets, the QNED99 and QNED92, in its 2025 QNED TV lineup. While these didn't quite have the same headline-grabbing innovations as the sets above, they are still set to be packed with excellent gaming features and upgraded processors promising higher brightness once again and better contrast.

These are just some of the mini-LED TV models we can expect in 2025. We're waiting to hear about Sony's 2025 lineup, which could include the successor to the brilliant Sony Bravia 9. The Bravia 9 is a mini-LED TV with OLED-rivaling contrast and black levels, so could Sony look to one-up it?

We're also still waiting to hear about Hisense's latest ULED lineup, which follows on from last year's Hisense U8N, U7N and U6N, and these along with TCL's other sets are sure to make up the backbone of the mid-range and budget mini-LED sets in 2025.

The battle rages on and we're spoilt for choice

The Samsung QN990F TV with its Wireless One Connect box

Even 8K mini-LED TVs are getting innovations, as the Samsung QN990F (pictured here at CES 2025) has a wireless connection box and matte screen. (Image credit: Future)

While these brands will be looking for mini-LED supremacy, we're the real winners. These brands are looking to make their TVs faster, brighter, more colorful and detailed than ever before – and they'll be looking to offer the best prices they can to tempt us into choosing a mini-LED over an OLED.

Brands are starting to invest in bigger screens throughout their lineups too, with Hisense's 116-inch UX, Samsung's 115-inch QN90F and the TCL 98-inch QM6K just some of the larger mini-LED sets on offer. While these screens will exceed most people's budgets, does this mean we could see a price drop on smaller sizes? Hisense is most likely to offer this, but we'll be keeping an eye on prices as they are revealed over the coming months. Plus, we can all still dream of a cinema-sized screen, and having the option is always going to get a thumbs-up from me.

It's also great to see so many tech innovations coming through for mini-LED TVs. A common problem for mini-LED TVs is backlight blooming (where light surrounds brighter objects on dark backgrounds, creating a halo effect), but if these new innovations can reduce or even eliminate this issue, that just means better TVs for us.

Higher brightness, both peak and fullscreen, is always a theme in new mini-LED TVs. With the numbers reaching eye-watering heights of 10,000 nits plus, that means more eye-catching pictures and fewer reflections for those of us with bright rooms. Yet another positive thanks to the spirit of competition.

2025 is shaping up to be the most hotly contested year for mini-LED TVs in years and I can't wait to find out who's going to come out on top. Thankfully for us, it looks like we're going to be spoilt for choice.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/w7KNDOh