Wednesday, 17 July 2019

The Power of Social Engineering, Part Two: In the Crosshairs of an Attacker




In the previous blog post we covered the fundamentals of social engineering, from Cialdini's 6 Principles of Influence through to how attackers leverage them in practice. In this article we will cover the techniques and tactics used to profile a target using Open Source Intelligence, and how this information can be used to generate highly effective pretexts. We will also briefly cover some of the other types of social engineering attacks. In the final article we will cover how you can detect and, more importantly, protect yourself from a range of attacks that use social engineering.


What is OSINT?

To understand how attackers build profiles on their targets, we must first dive into the wonderful world of open-source intelligence or OSINT.

“Open-source intelligence (OSINT) is data collected from publicly available sources to be used in an intelligence context. In the intelligence community, the term ‘open’ refers to overt, publicly available sources (as opposed to covert or clandestine sources). It is not related to open-source software or collective intelligence.” -Wikipedia

There’s a saying that goes, “if you have nothing to hide, you have nothing to fear.” The reality is that everyone has something they want to hide from the general public or, more aptly, an attacker. The key is identifying what form this information is in, how well protected it is, and if compromised, what the personal / professional impact would be.

Attackers are constantly profiling targets, looking for potential weaknesses in security. From personal experience, it can take less than an hour of online recon using manual and automated OSINT techniques to gather enough information on a target to learn their:

·         Full Name
·         Location
·         SSN / NI number
·         Date of Birth
·         Email Accounts and Passwords
·         Mother’s Maiden Name
·         Online Digital Footprint
·         Employment Information
·         Financial Information
·         Mobile / Work Telephone Numbers
·         Social Media Information / Posts
·         Family / Friends / Colleagues
·         Interests
·         Work ID / Passes
·         Online Usernames for Third Party Sites / Forums

Armed with the above information, a motivated attacker could do some serious damage, especially as many people reuse passwords and the same email address as a login across multiple web apps, or use an email address / username that reveals something about them, such as their year of birth.

Much of the aforementioned information can be gathered with ease using Google (or DuckDuckGo, Bing, etc.), but when combined with a powerful set of open-source tools, the process can be automated to run at scale, even with manual verification. Below is a diagram depicting the tools and methodology for performing recon on an organisation.


All this information is extremely useful in the hands of a skilled social engineer, as it can be used to create a highly effective pretext or provide context for building an ongoing campaign against an organisation and its key employees / stakeholders.
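
As a rough illustration of how easily the first recon steps can be scripted, here is a minimal Python sketch (not one of the tools referenced above) that pulls hostnames from public certificate transparency logs via crt.sh and generates likely email address permutations for names found during manual research. The domain and names are placeholders, and this is a sketch rather than a finished tool; only run this kind of enumeration against organisations you are authorised to assess.

```python
"""Minimal OSINT recon sketch: subdomain enumeration via certificate
transparency (crt.sh) plus likely email permutations for staff names.
'example.com' and the names below are placeholders, not real targets."""
import requests


def subdomains_from_ct(domain: str) -> set:
    """Query crt.sh certificate transparency logs for hostnames."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value may contain several newline-separated hostnames
        for name in entry.get("name_value", "").splitlines():
            names.add(name.lstrip("*.").lower())
    return names


def email_permutations(first: str, last: str, domain: str) -> list:
    """Generate the address formats most organisations commonly use."""
    f, l = first.lower(), last.lower()
    patterns = [f"{f}.{l}", f"{f}{l}", f"{f[0]}{l}", f"{f}.{l[0]}", f"{f}"]
    return [f"{p}@{domain}" for p in patterns]


if __name__ == "__main__":
    print(sorted(subdomains_from_ct("example.com")))
    print(email_permutations("Jane", "Doe", "example.com"))
```

Even a fragment like this shows why manual-only defences struggle: the collection step costs an attacker almost nothing and scales to every employee named on a company website or LinkedIn page.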


What Is Pretexting?

Pretexting is a form of social engineering where the attacker uses information already obtained through OSINT or other sources to build a fabricated scenario to convince a target to disclose information or perform an action that is not in their own best interest.

Capable social engineers will often convince their targets to perform actions that enable the attacker to gain unauthorised access to information or physical access to restricted areas of a building. There have been many times when I have gained unauthorised access to buildings in a range of industries, such as financial services, e-commerce, gambling, pharma, and retail, by using very simple pretexts and plenty of confidence. The key to good pretexting is in the research conducted beforehand and in looking / sounding the part; without this, any decent security guard or employee will easily see through the scenario and deny access.

There are many case studies on pretexting, but the most notable is the cyber gang Crackas With Attitude (CWA) who, in 2015 and 2016, used social engineering to impersonate their victims, calling their cell phone carriers and passing basic verification checks such as the last 4 digits of the victim's social security number. CWA were able to gain access to sensitive information, including emails, and used it to further compromise their targets. In more than one case they accessed secret information from the CIA and FBI, and even the personal email and cell phone accounts of John Brennan, the then CIA Director. It was reported that the attackers also leaked information about 20,000 LEA (Law Enforcement Agency) officers, though this was never fully proven.

Although the attackers were caught and then subsequently prosecuted, it shows how effective vishing using basic pretexting and OSINT can be, even in the hands of high school kids.

Pretexting utilises most of the core principles of influence, but is weighted more towards Authority and Social Proof to build credibility with the target.


What is Baiting?

Baiting, as the name suggests, exploits a target's piqued curiosity; an attacker will offer something (usually free) to lure a victim into clicking a link or running a malicious application. Classic examples of baiting include USB drops or, more recently, competitions on social media where malicious apps steal login tokens or cause information leakage. Memes are also used for baiting, as many popular memes circulating on the internet have been found to carry some form of adware or malware.

If it seems too good to be true, then it probably is.

Baiting is heavily weighted on the use of Social Proof and Scarcity / Urgency to manipulate targets.


What is Quid Pro Quo?

In the simplest terms possible, quid pro quo means "something for something" in Latin; today this means the exchange of goods or services, or a favour for a favour, and it is the latter we will focus on in this article.

Today, quid pro quo is used in highly effective marketing campaigns, especially at conferences, where exhibitors will offer free merchandise, usually in exchange for information (say, a business card or a badge scan, which contains valuable contact information). The exchange is definitely weighted in the favour of the exhibitor, but the attendee is still getting what they want: the free t-shirt or a branded lightsaber (talking from recent con experience here).

Social engineering scammers, especially tech support scammers, use quid pro quo to great effect. They call an unsuspecting victim and tell them they have a virus, but that because they are "from Microsoft", they can fix the issue. Usually this is either a free service, as the objective is to drop a banking trojan, or there is a fee payable for ongoing "support" (because they "fixed" the non-existent issue). The victim (usually a vulnerable person) then feels obliged to pay for a service never received.

A close friend's parents were scammed out of £25,000 in a similar scam, but under the guise of a BT fraud department working with their bank. The scammers convinced the victims to transfer the money to a temporary holding account so they could "investigate" the compromised router and protect the bank account, which had supposedly been compromised as well; the victims felt obliged because the attacker had "fixed" an issue on the work laptop and router. It was a basic but convincing scam; unfortunately, the money was lost and unrecoverable, even once the victims finally realised the mistake and contacted their bank. The scammers kept calling for more than two days after the theft, still using the same pretext.

Attackers using quid pro quo leverage the use of Reciprocity and Commitment in their attacks.


In Summary

There are many types of social engineering attacks, such as phishing, spear phishing, whaling, and tailgating. Each attack vector is highly effective given the right amount of research conducted by the attacker. The attack surface for social engineering is huge within most organisations. However, defending against these attacks relies upon a fine balance between training, technology, and correctly implemented policies and procedures. This is a subject that will be covered in detail in our final post in the series.

Tuesday, 11 June 2019

Si Vis Pacem, Para Bellum


Photo credit: https://forums-de.ubi.com/showthread.php/189053-si-vis-pacem-para-bellum

By Will Lambert

As we all know, prevention is better than the cure, you can't close the stable door after the horse has bolted, etc. These common sayings share a clear and concise message: it is better to prepare for an incident than to skip the preparation and have to clean up the mess, as sometimes the consequences are irrecoverable. So, how does this translate to information security? Information security incidents can take any number of guises and may not be limited to a single vector. As an example, two, three or even more attacks can be deployed as distraction tactics designed to flank and confuse incident response teams. How would your organisation fare against such attacks? Can your organisation even prevent such attacks in the first place? If not, can it detect, react to and recover from them? The preparation work is known as Incident Response Preparedness (IRP).

Unfortunately, an organisation's absolute dependence on IT systems to deliver its operational output feeds the delusion that information security is solely the IT department's problem. However, we know in the real world that information security is everyone's responsibility. This is cemented by the stark realisation that, although every department uses IT systems, it is not the systems themselves that matter but the information they process in order to deliver the operational output, not forgetting the information processed by suppliers. Only through effective Supplier Evaluation and Risk Management can an organisation map and manage the risk to its operational deliverables posed by suppliers (see SERM - The Domino Effect, Automating SERM and Defining the Damage).

Injury to an organisation's information is usually measured against the well-known Confidentiality, Integrity and Availability (CIA) triad, and while the mitigation techniques are many, they are usually broken down into three main categories: people, process and technology.

·         People
o   Does your organisation equip, and continue to support, its people in the:
§  Implementation, maintenance & effective use of mitigation controls, including your policies and technology stack?
o   How receptive are they to proposed information security changes?
§  Do they see information security as an enabler, rather than a blocker?
·         Process
o   What processes do you have in place to ensure you can deter, react to and recover from attacks?
o   Are changes made within your environment adequately managed to ensure you can continue to output your deliverable?
§  Can your existing Business Continuity Plan (BCP) restore your business to a Business as Usual (BAU) state in the quickest time frame possible, highlighting lessons to prevent recurrence and exposing gaps in the proposed plan?
·         Technology
o   What mitigation technologies do you have?
§  What controls do you have in place to ensure you are constantly reviewing the data you keep? Remember, if you don't need to retain it, get rid of it! (see Data Assessment)
§  How do you monitor your admin accounts? Remember, hackers are known to target these in attacks as they hold the keys to the kingdom (see Protect Your Superusers)
§  Are you alerted to changes within your environment, such as log event clearance, authentication failures, or suspicious outbound connections? Are these alerts enriched with threat intelligence feeds to reduce the likelihood of false positives? (see Protecting the Pack) A minimal sketch of one such alert follows this list.
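
To make the alerting point concrete, here is a minimal sketch of the kind of rule a monitoring pipeline might apply: counting authentication failures per source IP and flagging anything that also appears on a blocklist. It assumes a Linux-style auth.log and a flat blocklist.txt file standing in for a live threat-intelligence feed; a production deployment would implement the same logic in a SIEM with proper enrichment and alert routing.

```python
"""Minimal alerting sketch: flag repeated authentication failures and
any source IP that appears on a threat-intelligence blocklist.
Assumes a Linux-style /var/log/auth.log and a local blocklist.txt
(one IP per line); both paths are placeholders."""
import re
from collections import Counter
from pathlib import Path

# Typical sshd failure line: "Failed password for invalid user admin from 203.0.113.5 ..."
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
THRESHOLD = 5  # failures from one IP before we raise a warning


def load_blocklist(path: str = "blocklist.txt") -> set:
    p = Path(path)
    if not p.exists():
        return set()
    return {line.strip() for line in p.read_text().splitlines() if line.strip()}


def scan(log_path: str = "/var/log/auth.log") -> None:
    blocklist = load_blocklist()
    failures = Counter()
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1
    for ip, count in failures.items():
        if ip in blocklist:
            print(f"ALERT: {count} failures from blocklisted IP {ip}")
        elif count >= THRESHOLD:
            print(f"WARN: {count} authentication failures from {ip}")


if __name__ == "__main__":
    scan()
```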

You'll find from this list that it is not as clear-cut as you might believe. Most mitigation controls are interlinked and intertwined with prevention. Take people and process, for example. You may well have documented your incident response plan, but have you trained your people in its use? Training (see Battle Plans) is usually the tool that bridges the gap between your organisation's agreed policy and informing the masses; secure code training is another example of this. Demonstrating you have a process in place is great, but is your development team trained to recognise software vulnerabilities and reduce your risk exposure? (see Secure Coding).

It is easy to think that mitigation controls can be complicated beasts – and of course they can be (firewalls, IPS / IDS, PIM / PUM, SOC, ACL, etc.) but don’t forget about physical controls (fences, guards, CCTV, etc). Attackers will make use of both technological and physical defence vulnerabilities - each of which will be considered by social engineers (see The Power of Social Engineers). What protection controls do you use to deter a social engineer in the first instance? How are these controls reviewed or tested?

We do need to consider an organisation's need versus its appetite. Even today, some organisations' appetite for information security is so low that they are almost hoping information security is a fad that will soon pass. If they perceive that they can avoid paying for mitigation controls, they most likely will. However, as we said at the start, prevention is better than the cure, and the level of IRP will help an organisation reduce potential unwanted attention from regulatory bodies and the media if breached (see Invest Early to Protect Your Business). If the level of IRP meets or even exceeds an organisation's need, this will cement their brand as one which accepts its information security responsibilities with clarity and forethought. This further demonstrates to their employees, customers and even the world that information security is not a nuisance; it's here to help us deliver our operational output safely, securely and with careful regard to all risks involved with processing activities. But more importantly, it's here to stay.

If you want peace, prepare for war

Wednesday, 15 May 2019

Security Training for Developers: Cost Saver & Business Enabler




By Peter Ganzevles

Having delivered training for a long time and being involved in the process for even longer, I have come across many different people and even more questions. I get questions I never expected to be asked, alongside the ones I do expect. However, there is one question I get more than any other, and it usually reads something like this:

“Thank you for the training, the developers gave good feedback on it. Now I know someone who I think would benefit from a training too, but how can I convince them that it’s worth it?”

This blog post will answer this question and address the topic more thoroughly than in a quick email response. In this post, we’ll address the value of training developers, and why it is worth doing.


Just Like… Fishing
And no, I don't mean phishing. The famous quote attributed to the 12th century philosopher Maimonides tells us: "Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime." This quote is key to understanding the importance of training from a business perspective. Over the course of my penetration testing career, I've tested clients' applications, found some horrible flaws and reported them in the best way I could to help them fix the issue. Then, a year later, I'd carry out a retest, and while they had fixed that instance of the issue, the very same coding mistake that caused it in the first place had been applied elsewhere in a new feature, causing a similar issue.

These recurring issues are a prime example of a lack of knowledge in a particular area, and there are two ways to deal with them. The first is to come back for a retest just before every major release, but that requires a lot of time and effort to schedule, is rather costly, and generally inspires little confidence in the application. The second option is to train the developers, testers and project leaders to be aware of the risks when writing applications, preventing them from writing vulnerable code or missing it during testing.
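
As a deliberately simplified illustration of the kind of flaw that keeps reappearing, the snippet below shows a SQL injection introduced by string concatenation and the parameterised query that removes it. The table and data are hypothetical; the point is that the second pattern is what training aims to make habitual.

```python
"""Hypothetical example of a recurring flaw: SQL built by string
concatenation is injectable; a parameterised query is not."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", "admin"), ("bob", "user")],
)


def find_user_vulnerable(username: str):
    # BAD: attacker-controlled input is pasted straight into the query,
    # so "' OR '1'='1" returns every row in the table.
    query = f"SELECT username, role FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(username: str):
    # GOOD: the driver binds the value, so it is treated as data, not SQL.
    return conn.execute(
        "SELECT username, role FROM users WHERE username = ?", (username,)
    ).fetchall()


print(find_user_vulnerable("' OR '1'='1"))  # leaks both rows
print(find_user_safe("' OR '1'='1"))        # returns nothing
```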


Just Like… Puppies
There are hundreds of ways to train a developer, and while each has its merits, some are more effective than others. When toilet training a puppy, owners often make the mistake of pushing the puppy's nose into where the 'accident' happened, to teach the dog that it has misbehaved. While the dog will learn quickly and avoid that spot, the same might happen five feet to the left. Similarly, shoving coding mistakes into a developer's face and hoping that they'll learn will likely have the same effect. The same code won't be rewritten, but the underlying issue is likely to rear its head again in the future.

Another method is to run the class through a one-day course where we show the major flaws that often occur in applications. While this is quick and relatively cheap, it is unstimulating for most; it cannot easily be adjusted to individual skill levels, since it needs to be challenging for the more experienced and comprehensible for the less experienced, which makes it somewhat unsuited to both. While courses like this work for companies with a smaller group of developers and testers who all have a similar skill level, it is not the solution I'd consider best.


Just Like… University
That brings us to the method that we've tried and tested for a few years now: a two-day course that functions much like a university seminar. While the initial information is still presented to the group, it is offered in a way that allows for discussion and questions throughout each topic. Then, after the topic is over, every student gets the chance to practise what they've learned hands-on, either alone or in pairs, to ensure they fully understand it. The best thing about this hands-on part of the training is that it's not just me teaching and helping, it's the students as well. Ideas are exchanged, tips are given, and real stories from their own development careers are shared. I've even had people leave the room to fix code on the spot!


Just Like… That
So, what are the long-term benefits of this method? I have given training to many companies and each has given a different answer. Some were able to grow further without hiring more testers, as fewer mistakes were made and the existing testing team had a lighter workload as a result. Many also explained that while training is an investment early on, it decreases the number of issues found during penetration tests, which reduces the time developers spend fixing issues and lets them spend that time on feature requests instead. A handful of clients even hinted that they were winning more customers, as they could prove that they were more secure than their competitors. Another client said that they were now using their newly found security knowledge in their recruiting process to find even better and more suitable additions to the team, which then helped to increase their overall maturity. And finally, it is valuable for employees, as they can put the skills on their resume should they ever change jobs, and with the ever-increasing demand for security knowledge, that isn't a bad thing.

Wednesday, 17 April 2019

The Power of Social Engineering, Part One: Know Your Enemy




Written by Stuart Peck

ZeroDayLab hosted our Social Engineering Masterclass, where I presented (with a line-up of other ZeroDayLab speakers) the tactics, techniques, and procedures attackers use to deploy social engineering to great effect.

This trilogy of articles looks to build upon that message in greater detail. In part one, we will detail the tactics used by attackers, with an explanation of each of the 6 core principles of social engineering and influence. In the second article we will delve into attacks in operation, looking at case studies where social engineering is most effective, and discuss target profiling and pretexting. The final article will discuss active social engineering defence, covering both individual and organisational strategies that can be deployed to reduce the risk of a successful attack.


What is Social Engineering?

"Social engineering, in the context of information security, refers to psychological manipulation of people into performing actions or divulging confidential information…a type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional ‘con’ in that it is often one of many steps in a more complex fraud scheme.”

It has also been defined as "any act that influences a person to take an action that may or may not be in their best interests." -Wikipedia

In the context of this article we are going to focus on the techniques and tactics of modern social engineering, with examples of phishing, vishing, smishing, and how this supports other attacks such as hacking and physical entry.

Robert Cialdini is famous for identifying and cataloguing the 6 principles of persuasion, the foundations on which most modern-day social engineering is built. These include:

  • Reciprocity- people tend to return the favour if they see the value in what has been offered. This is used a lot by intelligence agents and police to coerce their target into cooperation.
  • Commitment- if people commit, orally or in writing, to an idea or goal, they are more likely to honour that commitment because they have stated that that idea or goal fits their self-image. 
  • Social Proof- people will do things that they see other people doing. There have been studies conducted which have successfully convinced people in a group that a red object is blue, for example.
  • Authority- people will tend to obey authority figures, even if they are asked to perform objectionable acts. In the workplace this is driven also by the company culture.
  • Likability- people are easily persuaded by other people whom they like. If you combine this with trust, the effect is compounded. Always be wary of the attacker who comes at you with a smile.
  • Scarcity/Urgency- people are influenced by fear of loss, or negative impacts to missing deadlines. This creates urgency, where human error is likely to be exploited the most, e.g. ransomware.

With these principles in mind, you can see how unsuspecting users can be coerced or influenced into making decisions that are not in their, or their respective employer’s, interest.

For example, one of the biggest threats (and tools for attackers) in recent history is social media, which has desensitised many to the dangers of oversharing, and has led to people sharing:
  • Images of their credit cards
  • Images of boarding passes- which, when the barcode is scanned, reveal personal information, including passport details
  • Information that can be used to work out passwords
  • Information about their employer, including images of their badge and perhaps business cards
  • Selfies and videos that contain personal information, which again can be used to build a profile.

If you combine this with the wealth of information that can be gathered relatively easily on a company and its employees using open source intelligence (OSINT), an attacker has a dossier of information with which to build a solid pretext, or in some cases to directly access mailboxes where employees have re-used weak passwords or credentials found in publicly available breaches.
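
The credential-reuse point can also be checked defensively. The sketch below queries the public Pwned Passwords range API, which uses k-anonymity so that only the first five characters of the password's SHA-1 hash ever leave your machine; it is a minimal illustration rather than a hardened tool, and the example password is a placeholder.

```python
"""Minimal sketch: check whether a password appears in public breach
corpora using the Pwned Passwords range API (k-anonymity: only the
first 5 hex characters of the SHA-1 hash are sent)."""
import hashlib
import requests


def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(
        f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10
    )
    resp.raise_for_status()
    # Response lines look like "HASHSUFFIX:COUNT"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    print(breach_count("Password123"))  # widely breached, returns a large count
```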

A pretext, the scenario presented to the target, is built upon 4 conditions that establish trust. For example, through OSINT the attacker will have gathered some critical information about the target and the organisation: this could be the names of direct management, colleagues in another location, project code names, or information about systems, or it could enable direct impersonation of a trusted third party.

This information builds some form of credibility, from which the attacker can pivot to establish some of the following:

 Figure 1: Principles of Trust

The reality is that most phishing emails rely on credibility and some form of authority, because it is very difficult to build likability over email without the proper tone, whereas other attacks such as vishing or in-person social engineering will leverage a combination of Likability, Authority and/or Empathy to great effect. Without the proper training, tools, and ongoing awareness of the threats, social engineering is going to continue to be used as part of the attacker's toolkit.


Who Uses Social Engineering, and Why?

Social engineering is used as part of or as the main attack vector for a range of threat actors. These include:
  • Hackers- social engineering is a valuable part of the toolset for black-hat hackers, usually deploying a range of techniques to gain a foothold on a target’s network.
  • Scammers- highly effective but simple attacks deployed by telephone scammers are costing the global economy billions. Vishing is still a very viable attack vector.
  • Identity Thieves- using stolen information obtained through hacking or purchased on the Dark Web, these social engineers assume the identity of their target to obtain new credit or control of existing accounts, for huge financial gain.
  • Cyber Criminals- these attackers use a full suite of social engineering techniques, but phishing is the weapon of choice, either to deliver malware to gain a foothold on the network or harvesting of Personally Identifiable Information (PII).
  • Governments- state-sponsored attackers use social engineering for a range of objectives, from IP theft and influencing elections (in other countries) to targeted espionage.
  • Insiders- according to the 2018 Insider Threat Report, 90% of organisations feel vulnerable to insider threats. Insiders know your systems and data, and can cause maximum damage.

The main reason social engineering is the most widely used technique in the attacker's toolkit is that it requires very little infrastructure or cost, yet yields among the highest returns for attackers.

To fully understand the effectiveness of social engineering, we have to dive deep into case studies, the tactics and why they work, and what companies and individuals can do to detect and protect themselves, which will be covered in parts two and three.