AI and cybersecurity: How safe is your data?
July 26, 2024
With over 8.5 million Windows devices hit by outages caused by a bug in security firm CrowdStrike's software, cyber safety is back in the news!
We're halfway through 2024, and some spectacular cybersecurity failures have already occurred. Bluefin's latest news on 2024 attacks lists AT&T, Bank of America and several other well-known corporations that, between them, have 'lost' tens of millions of customer records to theft.
Cybersecurity is a crucial issue in this digital age. It's becoming more critical as more and more of our personal and professional data is stored online, not always with our explicit knowledge.
With so much of our lives dependent on sensitive data held by others, how can we protect ourselves – and our businesses – against the threat of cyber-attacks? Is the much-vaunted AI a possible solution? Let's discuss the possibilities.
What is AI in terms of Cybersecurity?
Although the concept of Artificial Intelligence (AI) has been around since the 1950s (and possibly earlier), it's only recently that computer processing power outside of specialist labs has brought AI into our lives. At its most basic, AI is a series of algorithms or programs that 'learn' from previous experiences or data and adapt and change their behavior to create better ways of performing.
AI involves processing vast amounts of data to recognize patterns. Developers' recent ability to process large volumes of often complex data at acceptable speeds has brought AI into the news and into our lives. Playing chess and even driving cars are just two tasks now possible for AI that, a few years ago, were unthinkable.
AI-powered cyber security uses this ability to quickly monitor and analyze huge quantities of data to look for patterns that might indicate a threat. AI can also scan an entire network to detect weaknesses. The speed at which this analysis can be done means the threat can be stopped before it does any damage.
How does AI perform these amazing feats and prevent cyber-attacks?
1. Malware identification and classification
Malware is one of the more common threats our computers and networks face. By using deep learning techniques and analyzing the features and behavior of malware, AI can identify suspicious code and classify the threat into worms, viruses, ransomware and others. This early identification can result in malware code being removed or quarantined to prevent infections.
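Real systems learn these classifications from millions of samples using deep learning. As a minimal sketch of the underlying idea (every feature name and category below is invented for illustration), here is how observed malware behaviors might be mapped to threat classes:

```python
# Toy malware classifier: maps observed behavioral features to a threat
# category. Real AI systems learn these mappings from huge datasets;
# the features and labels below are invented for illustration only.
SIGNATURES = {
    "self_replicates_over_network": "worm",
    "injects_into_host_executable": "virus",
    "encrypts_files_and_demands_payment": "ransomware",
    "logs_keystrokes": "spyware",
}

def classify(features):
    """Return threat labels for any recognized features, else 'unknown'."""
    hits = [SIGNATURES[f] for f in features if f in SIGNATURES]
    return hits or ["unknown"]

print(classify(["encrypts_files_and_demands_payment"]))  # ['ransomware']
print(classify(["opens_spreadsheet"]))                   # ['unknown']
```

A classified sample can then be quarantined or removed before it spreads; the AI's advantage is doing this at machine speed across millions of files.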
2. Intrusion detection
Networks are prime targets for cyber-attacks. AI can detect anomalies and unusual activity by learning the usual network traffic patterns. Highlighting such deviations in real-time can enable fast responses to the threats of intrusions, possibly even automatically blocking them when appropriate.
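The core idea of anomaly detection can be sketched in a few lines. This toy example (the traffic numbers are invented) learns a baseline from "normal" per-minute request counts, then flags anything that deviates too far from it; production systems model many signals at once, but the principle is the same:

```python
import statistics

# Toy intrusion detector: learn "normal" traffic volume from a history of
# per-minute request counts, then flag counts that deviate by more than
# three standard deviations from the learned baseline.
def train_baseline(history):
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(count, mean, stdev, threshold=3.0):
    return abs(count - mean) > threshold * stdev

normal_traffic = [98, 102, 101, 99, 100, 103, 97, 100]
mean, stdev = train_baseline(normal_traffic)

print(is_anomalous(101, mean, stdev))   # False: within the normal range
print(is_anomalous(5000, mean, stdev))  # True: possible intrusion
```

Because the check is cheap, it can run in real time on live traffic, which is what makes automatic blocking feasible.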
3. Threat intelligence sensing
Its ability to gather and rapidly analyze data from multiple sources is AI's great strength. Sources such as social media, the dark web, and even online forums, to mention a few, provide a background that AI can use to detect threats. Natural language processing allows it to spot threats via textual anomalies and patterns. Crucially, it incorporates all those results into a single, holistic view to identify threats.
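The "single, holistic view" is the key step: individually weak signals from different sources can add up to a clear alert. A minimal sketch (the sources, indicators and scores are all invented) might look like this:

```python
# Toy threat-intelligence aggregator: each source reports an indicator
# with a signal strength. Combining scores per indicator can surface a
# threat that no single source reveals on its own. All data is invented.
reports = [
    {"source": "forum_post", "indicator": "acme-corp breach", "score": 3},
    {"source": "dark_web_listing", "indicator": "acme-corp breach", "score": 5},
    {"source": "social_media", "indicator": "new phishing kit", "score": 2},
]

def holistic_view(reports, alert_threshold=7):
    combined = {}
    for r in reports:
        combined[r["indicator"]] = combined.get(r["indicator"], 0) + r["score"]
    # Only indicators whose combined score crosses the threshold become alerts.
    return {ind: score for ind, score in combined.items() if score >= alert_threshold}

print(holistic_view(reports))  # {'acme-corp breach': 8}
```

Neither the forum post (score 3) nor the dark-web listing (score 5) would trigger an alert alone; together they do.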
These are only three ways AI can prevent cyber threats. As AI evolves, its capabilities will grow faster and more profound. But is this surge in power without its own risk?
How safe is AI – even without criminals?
Having established that AI can perform incredible feats of processing and assist in preventing cyber threats, is AI the perfect solution to all our woes? No. AI is not without its limitations. It is worth reminding ourselves that, despite bursting into our consciousness a few years ago, it is still a very new technology.
What are some of the risks?
1. Poisoning or Adversarial machine learning
This type of attack has a limited window of opportunity, as it is only possible during the learning phase. A simplistic example: an offensive-word filter is being trained, but one 'bad' word is repeatedly labelled as 'normal' in the training data. The finished filter would let that word through, even while correctly filtering out all other unwanted text.
By misleading the AI during its learning phase, the tool can either make incorrect predictions or undermine confidence in the complete system.
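The word-filter scenario above can be simulated in a few lines. In this toy sketch (all words and labels are invented), an attacker flips a single training label, and the "trained" filter quietly lets the attacker's word through:

```python
# Toy illustration of data poisoning: a word filter "learns" which words
# to block from labelled training examples. An attacker who can flip a
# label during training makes one bad word pass through undetected.
def train_filter(training_data):
    """training_data: list of (word, label), label is 'bad' or 'normal'."""
    return {word for word, label in training_data if label == "bad"}

clean_data = [("spamword", "bad"), ("hello", "normal"), ("scamword", "bad")]
poisoned_data = [("spamword", "bad"), ("hello", "normal"),
                 ("scamword", "normal")]  # attacker flipped this label

clean_filter = train_filter(clean_data)
poisoned_filter = train_filter(poisoned_data)

print("scamword" in clean_filter)     # True: correctly blocked
print("scamword" in poisoned_filter)  # False: poisoning lets it through
```

The poisoned filter still blocks everything else, which is exactly why this kind of tampering is hard to spot after the fact.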
2. Bias in AI results
A similar issue can arise if the data used to train AI is biased or incomplete. No malicious output is intended, but racial groups or even types of customers could be inadvertently discriminated against. Those affected could be overlooked for sales offers or, worse, given a poor health diagnosis based on unintended bias or a lack of objectivity in the data or algorithm.
3. AI Hallucinations
AI is not immune to incorrect results or outright absurd flights of fancy. In July 2024, I asked Copilot, 'How long can I hold my breath on the moon?'. After correctly suggesting it was a bad idea, it said, 'If you ever find yourself on the moon without a spacesuit, remember to exhale and seek medical attention quickly.'
And ChatGPT and Copilot differ when asked, 'Who holds the record for the fastest time crossing the English Channel on foot?' Both ignored the impossibility of the feat. ChatGPT reckoned it was Christof Wandratsch in 2020, whereas Copilot believes it to be Andreas Waschburger in 2021. Interestingly, both are German!
Humor aside, these examples show that AI has a long way to go before it can be relied on 100% for sensible and accurate answers. The responses above are obviously wrong to a human, but the AI presents them with full confidence. These so-called hallucinations are due to errors or biases in the data used during the learning phase.
What if a self-driven AI car hallucinates? Or an AI-operated surgical robot?
The risks associated with AI in cyber security
As AI becomes more widely available, so will the threat of it being used for malicious purposes. Various commonly and freely available AI tools can already write computer code, and it is already possible to use them to write malicious code. The criminal mind is adept at using technology for 'mischievous' purposes. And AI itself is not immune to cyber-attacks.
1. AI-driven cyber-attacks
Just as AI can be used to identify and highlight weaknesses in networks for good, the reverse is also true: AI can detect those same inadequacies and exploit the holes in security. As the tools develop on one side, they are exploited by the other. Hackers today have highly sophisticated tools at their disposal, which highlights the dire need to use AI to beat the criminals. Humans simply aren't fast enough.
2. Destruction of Trust in Information
Deepfakes and similar AI tools are no longer the preserve of well-funded studios and research labs. They are available as apps on phones.
The ability to mimic the style of a dead singer in a music video, or to put a padded jacket on the pope, may seem harmless. But deepfaking a video of a politician to make them say something inflammatory and inciting is dangerous.
As more instances of deepfakes emerge, along with fraudulent letters or images, people's trust in the online world decreases. Almost as dangerous are those naïve enough to believe the lies.
In early 2024, an employee believed a deepfake of his CEO and transferred US$26 million to crooks.
Don't wait too long before jumping on the AI wagon!
These are just two of many examples of how AI itself can be used to steal from us, deceive us, or infringe on our systems. It's clear that AI needs cyber-security itself. There is little doubt that AI is here to stay. It can and will be a significant tool for good and will enhance our lives in ways yet to be imagined.
Like all new technology, there are risks. You can rest assured that the risks are being reviewed and mitigated. At the same time, you can be sure that new ones will appear. And be thwarted. Such is the world of technology.
What we have seen is that AI is becoming pervasive. It's a technology that can't be ignored by either you or your business. You need to get involved now. And you need someone who knows about AI to help.
Global Triangles and AI
Global Triangles is a US-based company that offers staff augmentation and nearshore services.
Rodrigo Jiménez, Senior Software Engineer Lead at Global Triangles, states: "Integrating AI into a business environment requires a strategic approach. It is important to consider cybersecurity. It's also imperative to collaborate with companies who understand AI and the cybersecurity risks."
Its team of skilled IT experts has helped companies explore the use of AI. In one recent example, they found a way to save many thousands of dollars by using AI to automate a manual process. Global Triangles can help your business benefit from AI. Learn how by contacting us today.