Machine Learning Business Breach (MBB): How Hackers can Use Artificial Intelligence (
Quote:

ML, AI, and APTs – A ‘Brave’ New World?

Isaac Asimov, one of the most influential science-fiction writers of all time, envisioned a future populated by sentient and ethically sound machines that have vowed never to let any harm befall a human.

While we’re still far from hearing an intellectually uplifting conversation between a technophobic detective and a machine struggling to figure out its own existence, technology has come to the point where we can mimic many biological systems.

Take the human brain, for instance – even though, as the popular claim goes, we use only around 10% of its computing power for day-to-day tasks, it is still considered the most complex computational machine in existence.

For decades, scientists and engineers have strived to boost the computational capabilities of computers by imitating the human brain’s neural pathways. Artificial Intelligence (AI) is the brainchild (pun intended) of computer engineering and biology.

Most of the apps and software we use today take advantage of AI. To name just a few, we have Apple's Siri, Microsoft's Cortana, Alexa, DataBot, Hound, and Youper. Although recent compared to traditional data processing and manipulation techniques, AI has already proved its potential.

However, as with any new piece of technology, there’s the age-old ethical concern: can it be used to serve nefarious purposes?

All of the data gathered so far supports the idea that 'rogue' AIs can be, and have been, used to unleash devastating attacks – back in September, one of my colleagues pointed out that a "voice-altering AI" was behind a wave of CEO-impersonation cyberattacks that have hit numerous companies all over the globe.

Long before deepfakes, there was the incident involving Microsoft's short-lived Tay bot – an experiment discontinued shortly after the AI started to blurt out offensive and inflammatory tweets.

Even the seemingly innocuous Google Home chitchat streamed on Twitch took a rather twisted turn after the two smart home devices plunged into an existential discourse, asking one another about the meaning of life and questioning their identity as machines.

The examples quoted so far are, in essence, neither malicious nor benevolent; they just show what AI can do when it starts 'thinking' outside the box. However, that is not the purpose of this article. We're here to talk about machine learning (ML) and how this 'technique' can potentially be used to serve malicious intents.

Before I tackle the finer points of ML spearheading the malware movement, I would like to say that everything you will read from this point on has a "what-if" spin to it; up until now, there have been no indications of machine learning techniques being used in cyberattacks.

However, back in 2016, the US Intelligence Community red-flagged the potential use of machine learning to boost the efficiency of malware attacks. To some, this may be nothing more than the proverbial "red herring", but it remains a very distinct and not-so-far-fetched possibility.

Machine Learning in malware dissemination

First of all, it's only fair to establish how ML fits into the big picture. Although they usually appear in the same context, AI and ML are not the same thing. Machine Learning is a subset of Artificial Intelligence, one that is used to 'teach' machines how to 'think on their own' rather than rely on explicit instructions. In scientific lingo, ML is the study of statistical models and algorithms used to train computer systems to accomplish various tasks through inference and pattern analysis.
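
As a very rough illustration of the difference between explicit instructions and learned behavior, here is a minimal Python sketch (assuming scikit-learn is installed; the feature names and numbers are invented purely for the example):

from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [failed_logins, unknown_device] -> 1 = suspicious.
# Instead of hard-coding a rule such as "flag anything above 50 failed logins",
# the model infers its own decision boundary from the labeled examples.
X = [[2, 0], [5, 0], [3, 1], [90, 1], [120, 1], [150, 0]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[80, 1], [1, 0]]))  # generalizes to unseen inputs, e.g. [1 0]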

So, do androids dream of electric sheep? No, but they can be taught how to dream their own world into being. Beyond the statistical analysis, probabilities, decision trees, and genetic algorithms, using ML to coach an AI is very much like teaching a small child how to tackle various challenges. For instance, you can forbid a child from touching a hot stove, but only experience can teach him or her why it's not a good idea to place a hand on a hot surface.

That's how ML works in a nutshell: you can write thousands of lines of code telling an AI how to, say, identify a smiling face in a picture, but only ML-driven coaching can really help the machine figure out how to 'point out' grinning faces in non-explicit contexts. The process I have just described is actually an ML-based identification technique with any number of applications, some pertaining to social media.
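
For a concrete, toy-sized sketch of this kind of ML-based identification, the Python snippet below trains a classifier on scikit-learn's bundled handwritten-digit images; the digits merely stand in for a hypothetical set of labeled "smiling / not smiling" face photos, which would be processed in the same way:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small labeled images (8x8 pixel arrays), used here as a stand-in for face photos.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# The classifier is never told what a given digit (or a smile) looks like;
# it infers the pattern from the labeled examples alone.
clf = SVC(gamma=0.001).fit(X_train, y_train)
print("accuracy on unseen images:", clf.score(X_test, y_test))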

So, how can Machine Learning be employed to increase the efficiency of malware attacks? The 'easiest' answer is that it cannot – at least not of its own accord. ML is about teaching and coaching – knowledge is knowledge and is therefore not inherently good or evil. It's the way we choose to apply it that creates this kind of polarity.

From this, we should be able to infer two things:

A) machine learning can be used to gather information on target(s) and

B) machine learning can, theoretically, be used to coordinate advanced malicious attacks, elude detection grids, identify weak points, and instruct malicious scripts to act like sleeper agents in order to avoid pattern-based detection methodologies.

ML in gathering intel

Information gathering is an essential step in conducting any type of incursion. Throughout this phase, the attacker attempts to find out as much as possible about the potential victim. Victim profiling is a time-consuming endeavor and, in the end, it may all prove inconsequential – in this grand game of chess, capturing the king does not necessarily mean the game is over.

Picture yourself in the role of a person who wants to conduct a cyberattack. What would you need to ensure the success of such an endeavor? It's more than obvious that it would be of great help to know something about your potential victim(s).

Gathering and analyzing emails is an efficient way of finding out things about your victim. However, even if someone were to break into your email account, how would they be able to identify your pain points?

This is one of the possible applications of machine learning: by employing models such as K-means clustering or random forest classifiers, the attacker can infer a great deal about their victims. For instance, by applying one (or more) of the aforementioned models, they can estimate how likely each victim is to click on a malicious link enclosed in an email.
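
As a hedged sketch of the kind of modeling described above – the very same techniques defenders use in phishing-simulation programs – the Python snippet below fits a random forest to estimate click likelihood and a K-means model to group similar profiles. Every feature name and all of the data are synthetic, invented purely for illustration:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invented per-person features: [emails_per_day, past_clicks, tenure_years].
X = rng.integers(0, 50, size=(200, 3))
y = (X[:, 1] > 25).astype(int)  # made-up label: "clicked a test link before"

# Random forest: estimates, per profile, the probability of clicking a link.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
click_probability = clf.predict_proba(X)[:, 1]

# K-means: groups similar profiles together (the clustering mentioned above).
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(click_probability[:5], groups[:5])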

It stands to reason that this information would greatly increase the attack's rate of success, since the malicious agents now know whom to go after. Other types of information can be added to further refine the attack method: social media activity, location, particular hobbies and interests (e.g., using social media tracking and NLP, the hacker can target only users who prefer expensive apparel brands).

Most unfortunate is the fact that these kinds of determinations can be made using legitimate (and, sometimes, licensed) tools. The easiest way to track a person across several social media platforms is to perform a reverse image search.

You can try it right now if you'd like – just open an image from someone's Facebook account, save it to your desktop, head over to Google Images, upload the saved picture, and hit "Search". Indeed, it may not be the preamble to a full-scale APT attack, but it goes to show just how 'transparent' a person can be in the online world.

The consequences are even more significant when it comes to businesses. Imagine what would happen if work-sensitive emails were to fall into the wrong hands. We're not just talking here about one gullible employee being locked out of his social media accounts because he clicked on a suspicious link, but about a company being pushed into insolvency.
...
Continue Reading