Tuesday, 11 June 2019

Si Vis Pacem, Para Bellum

Photo credit: https://forums-de.ubi.com/showthread.php/189053-si-vis-pacem-para-bellum

By Will Lambert

As we all know, prevention is better than the cure; you can’t close the stable door after the horse has bolted, etc. These common sayings share a clear and concise message: it is better to prepare for an incident than to clean up the mess afterwards, as sometimes the consequences are irrecoverable. So how does this translate to information security? Information security incidents take many guises and may not be limited to a single vector. For example, two, three or even more attacks can be deployed as distraction tactics, designed to flank and confuse incident response teams. How would your organisation fare against such attacks? Can your organisation prevent them in the first place? If not, can it detect, react to and recover from them? This preparation work is known as Incident Response Preparedness (IRP).

Unfortunately, an organisation’s absolute dependence on IT systems to deliver its operational output feeds the delusion that information security is solely the IT department’s problem. In the real world, information security is everyone’s responsibility. This is cemented by the stark realisation that although every department uses IT systems, it is not the systems themselves but the information they process that delivers the operational output – not forgetting the information processed by suppliers. Only through effective Supplier Evaluation and Risk Management can an organisation map and manage the risk that suppliers pose to its operational deliverables (see SERM - The Domino Effect, Automating SERM and Defining the Damage).

Harm to an organisation’s information is usually measured against the well-known Confidentiality, Integrity and Availability (CIA) triad, and while mitigation techniques are many, they usually fall into three main categories: people, process and technology.

  • People
      ◦ Does your organisation have, and continue to support, its people in the:
          ▪ Implementation, maintenance and effective use of mitigation controls, including your policies and technology stack?
      ◦ How receptive are they to proposed information security changes?
          ▪ Do they see information security as an enabler, rather than a blocker?
  • Process
      ◦ What processes do you have in place to ensure you can deter, react to and recover from attacks?
      ◦ Are changes made within your environment adequately managed to ensure you can continue to produce your deliverables?
          ▪ Can your existing Business Continuity Plan (BCP) restore your business to a Business as Usual (BAU) state in the quickest time frame possible, while highlighting lessons to prevent recurrence and any gaps in the plan?
  • Technology
      ◦ What mitigation technologies do you have?
          ▪ What controls do you have to ensure you are constantly surveying the data you keep? Remember, if you don’t need to retain it, get rid of it! (see Data Assessment)
          ▪ How do you monitor your admin accounts? Remember, hackers are known to target these in attacks as they hold the keys to the kingdom (see Protect Your Superusers)
          ▪ Are you alerted to changes within your environment, such as log event clearance, authentication failures and suspicious outbound connections? Are these alerts enriched with threat intelligence feeds to reduce the likelihood of false positives? (see Protecting the Pack)
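As a minimal sketch of the kind of alerting described in that last point, the snippet below flags repeated authentication failures from the same account and source address. The log format, field names and threshold are all invented for illustration; a real deployment would consume syslog, Windows Event Logs or a SIEM feed.

```python
import re
from collections import Counter

# Hypothetical log lines standing in for a real authentication log source.
SAMPLE_LOG = [
    "2019-06-11T09:00:01 sshd[101]: Failed password for admin from 203.0.113.7",
    "2019-06-11T09:00:03 sshd[101]: Failed password for admin from 203.0.113.7",
    "2019-06-11T09:00:05 sshd[101]: Failed password for admin from 203.0.113.7",
    "2019-06-11T09:02:10 sshd[102]: Accepted password for alice from 198.51.100.4",
]

FAILURE_PATTERN = re.compile(r"Failed password for (\S+) from (\S+)")

def failed_logins(lines):
    """Count authentication failures per (account, source IP) pair."""
    counts = Counter()
    for line in lines:
        match = FAILURE_PATTERN.search(line)
        if match:
            counts[match.groups()] += 1
    return counts

def alerts(lines, threshold=3):
    """Return the (account, source IP) pairs that reach the failure threshold."""
    return [pair for pair, n in failed_logins(lines).items() if n >= threshold]

print(alerts(SAMPLE_LOG))  # [('admin', '203.0.113.7')]
```

In practice you would enrich each alerting pair against threat intelligence feeds (known-bad IP lists, for instance) before paging anyone, which is exactly how the false-positive rate is kept down.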

You’ll find from this list that things are not as clear-cut as they might appear; most mitigation controls are interlinked. Take people and process, for example. You may well have documented your incident response plan, but have you trained your people in its use? Training (see Battle Plans) is usually the tool that bridges the gap between your organisation’s agreed policy and the people expected to follow it; secure code training is another example. Demonstrating that you have a process in place is great, but is your development team trained to recognise software vulnerabilities and reduce your risk exposure? (see Secure Coding)

It is easy to think of mitigation controls as complicated beasts - and of course they can be (firewalls, IPS/IDS, PIM/PUM, SOC, ACLs, etc.) - but don’t forget about physical controls (fences, guards, CCTV, etc.). Attackers will exploit both technological and physical vulnerabilities, and social engineers will consider each (see The Power of Social Engineers). What controls do you use to deter a social engineer in the first instance? How are these controls reviewed and tested?

We also need to consider an organisation’s need versus its appetite. Even today, some organisations’ appetite for information security is so low that they are almost hoping information security is a fad that will soon pass. If they perceive that they can avoid paying for mitigation controls, they most likely will. However, as we said at the start, prevention is better than the cure, and a good level of IRP will help an organisation reduce unwanted attention from regulatory bodies and the media if it is breached (see Invest Early to Protect Your Business). If the level of IRP meets or even exceeds an organisation’s need, this cements its brand as one that accepts its information security responsibilities with clarity and forethought. It demonstrates to employees, customers and the wider world that information security is not a nuisance; it is here to help us deliver our operational output safely, securely and with careful regard to the risks involved in our processing activities. More importantly, it is here to stay.

If you want peace, prepare for war

Wednesday, 15 May 2019

Security Training for Developers: Cost Saver & Business Enabler

By Peter Ganzevles

Having delivered training for a long time, and having been involved in the process for even longer, I have come across many different people and even more questions. I get questions I never expected, alongside the ones I anticipate. However, there is one question I get more than any other, and it usually reads something like this:

“Thank you for the training, the developers gave good feedback on it. Now I know someone who I think would benefit from a training too, but how can I convince them that it’s worth it?”

This blog post will answer this question and address the topic more thoroughly than in a quick email response. In this post, we’ll address the value of training developers, and why it is worth doing.

Just Like… Fishing
And no, I don’t mean phishing. The famous quote, often attributed to the 12th-century philosopher Maimonides, tells us that you can “give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.” This quote is key to understanding the importance of training from a business perspective. Over the course of my penetration testing career, I’ve tested clients’ applications, found some horrible flaws and reported them in the best way I could to help them fix the issue. Then, a year later, I’d carry out a retest, and while they had fixed that instance of the issue, the very same flaw that caused it to begin with had been applied elsewhere in a new feature, which caused a similar issue.

These recurring issues are a prime example of a lack of knowledge in a particular area, and there are two ways to deal with them. The first is to come back for a retest before every major release, but that requires a lot of scheduling time and effort, is rather costly, and generally inspires little confidence in the application. The second is to train the developers, testers and project leaders to be aware of the risks when writing applications, to prevent them from writing vulnerable code or missing it during testing.
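To make the idea of a recurring flaw concrete, here is a sketch of the classic example such training targets: SQL injection. The table, names and payload are all invented for illustration; the point is the contrast between a string-built query and a parameterised one.

```python
import sqlite3

# Toy in-memory database standing in for a real application backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced straight into the SQL.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterised: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back - the injection succeeds
print(find_user_safe(payload))    # nothing comes back - the payload is just a string
```

A developer who understands *why* the first function is broken will not reintroduce the flaw in next year’s new feature, which is exactly the gap that training closes and a retest alone does not.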

Just Like… Puppies
There are hundreds of ways to train a developer, and while each has its merits, some are more effective than others. When toilet training a puppy, owners often make the mistake of pushing the puppy’s nose into where the ‘accident’ happened, to teach the dog that it has misbehaved. While the dog will learn quickly and avoid that spot, the same accident might happen five feet to the left. Similarly, shoving coding mistakes into a developer’s face and hoping that they’ll learn will likely have the same effect: the same code won’t be rewritten, but the underlying issue is likely to rear its head again in the future.

Another method is to run the class through a one-day course showing the major flaws that often occur in applications. While this is quick and relatively cheap, it is unstimulating for most; it cannot easily be adjusted to individual skill levels, since it needs to be challenging for the more experienced yet comprehensible for the less experienced, which makes it somewhat unsuited for both. While courses like this work for companies with a smaller group of developers and testers who all have a similar skill level, it is not the solution I’d consider best.

Just Like… University
That brings us to the method that we’ve tried and tested for a few years now, which is a two-day course that functions similarly to a university. While the initial information is still presented to the group, it is offered in a way that allows for discussion and questions throughout each topic. Then, after the topic is over, every student gets the chance to practise what they’ve learned hands-on, either alone or in pairs, to ensure they fully understand what they’ve learned. The best thing about this hands-on part of the training is that it’s not just me teaching and helping, it’s the students as well. Ideas are exchanged, tips are given, and real stories from their own development career are shared. I’ve even had people leave the room to fix code on the spot!

Just Like… That
So, what are the long-term benefits of this method? I have delivered training to many companies and each has given a different answer. Some were able to grow further without hiring more testers, as fewer mistakes were made and the existing testing team had a lighter workload as a result. Many also explained that while training is an investment early on, it decreases the number of issues found during penetration tests, which reduces the time developers spend fixing issues and lets them spend that time on feature requests instead. A handful of clients even hinted that they were winning more customers, as they could prove they were more secure than their competitors. Another client said that they were now using their newly found security knowledge in their recruiting process to find even better and more suited additions to the team, which then helped to increase their overall maturity. And finally, it is valuable for employees, as they can put the skills on their CV should they ever change jobs; with the ever-increasing demand for security knowledge, that isn’t a bad thing.

Wednesday, 17 April 2019

The Power of Social Engineering, Part One: Know Your Enemy

Written by Stuart Peck

On 9th April 2019 in London, ZeroDayLab hosted our Social Engineering Masterclass, where I presented (with a line-up of other ZeroDayLab speakers), the tactics, techniques, and procedures of how attackers deploy social engineering to great effect.

This trilogy of articles looks to build upon that message in greater detail. In part one, we will detail the tactics used by attackers, with an explanation of each of the 6 core principles of social engineering and influence. In the second article we will delve into attacks in operation, looking at case studies where social engineering is most effective, and discuss target profiling and pretexting. The final article will discuss active social engineering defence, designed for both individuals and organisational strategies that can be deployed to reduce the risk of a successful attack.

What is Social Engineering?

"Social engineering, in the context of information security, refers to psychological manipulation of people into performing actions or divulging confidential information…a type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional ‘con’ in that it is often one of many steps in a more complex fraud scheme.”

It has also been defined as "any act that influences a person to take an action that may or may not be in their best interests." -Wikipedia

In the context of this article we are going to focus on the techniques and tactics of modern social engineering, with examples of phishing, vishing, smishing, and how this supports other attacks such as hacking and physical entry.

Robert Cialdini is famous for identifying and cataloguing the 6 principles of persuasion, the foundations on which most modern-day social engineering is built. These are:

  • Reciprocity - people tend to return a favour if they see the value in what has been offered. This is used a lot by intelligence agents and police to coerce a target into cooperation.
  • Commitment - if people commit, orally or in writing, to an idea or goal, they are more likely to honour that commitment because they have stated that the idea or goal fits their self-image.
  • Social Proof - people will do things that they see other people doing. Studies have successfully convinced people in a group that a red object is blue, for example.
  • Authority - people tend to obey authority figures, even when asked to perform objectionable acts. In the workplace this is also driven by company culture.
  • Likability - people are easily persuaded by other people whom they like. If you combine this with trust, the effect is compounded. Always be wary of the attacker who comes at you with a smile.
  • Scarcity/Urgency - people are influenced by fear of loss, or the negative impact of missing deadlines. This creates urgency, where human error is most likely to be exploited, e.g. by ransomware.

With these principles in mind, you can see how unsuspecting users can be coerced or influenced into making decisions that are not in their, or their respective employer’s, interest.

For example, one of the biggest threats (and tools for attackers) in recent history is social media, which has desensitised many to the dangers of oversharing and has led to people sharing:
  • Images of their credit cards
  • Images of boarding passes - scanning the barcode reveals personal information, including passport details
  • Information that can be used to work out passwords
  • Information about their employer, including images of their badge or business cards
  • Selfies and videos that contain personal information, which again can be used to build a profile

If you combine this with the wealth of information that can be gathered relatively easily about a company and its employees using open source intelligence (OSINT), an attacker has a dossier from which to build a solid pretext - or, in some cases, direct access to mailboxes where employees have re-used weak passwords or credentials exposed in publicly available breaches.
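One practical defence against that credential-reuse problem is checking passwords against known breach corpora without ever sending the password anywhere. Services such as Have I Been Pwned’s Pwned Passwords API use a k-anonymity scheme: only the first five characters of the SHA-1 hash are submitted, and matching is done locally. A minimal sketch of the client-side hashing step (the network lookup itself is omitted):

```python
import hashlib

def hibp_range_parts(password):
    """Split a password's SHA-1 hash for a k-anonymity range lookup.

    Only the five-character prefix would ever be sent to the lookup
    service; the 35-character suffix stays on your machine and is
    compared against the candidate suffixes the service returns.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # 5BAA6 - the only part that leaves your machine
```

This is the kind of control that stops a breached third-party password from becoming a pre-built pretext against your organisation.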

A pretext, the scenario presented to the target, is built upon 4 conditions that establish trust. Through OSINT, the attacker will have gathered critical information about the target and the organisation - the names of direct management, colleagues in another location, project code names, information about systems, or the details needed for direct impersonation of a trusted third party.

This information builds some form of credibility, from which the attacker can pivot to establish some of the following:

 Figure 1: Principles of Trust

The reality is that most phishing emails rely on credibility and some form of authority, because it is very difficult to build likability without the proper tone over email; other attacks, such as vishing or in-person social engineering, leverage a combination of likability, authority and/or empathy to great effect. Without the proper training, tools and ongoing awareness of the threats, social engineering will continue to be used as part of the attacker’s toolkit.

Who Uses Social Engineering, and Why?

Social engineering is used as part of or as the main attack vector for a range of threat actors. These include:
  • Hackers - social engineering is a valuable part of the toolset for black-hat hackers, who usually deploy a range of techniques to gain a foothold on a target’s network.
  • Scammers - the highly effective but simple attacks deployed by telephone scammers are costing the global economy billions. Vishing is still a very viable attack vector.
  • Identity Thieves - using stolen information obtained through hacking or purchased on the Dark Web, these social engineers assume the identity of their target to obtain new credit or take control of existing accounts, for huge financial gain.
  • Cyber Criminals - these attackers use a full suite of social engineering techniques, but phishing is the weapon of choice, either to deliver malware that gains a foothold on the network or to harvest Personally Identifiable Information (PII).
  • Governments - state-sponsored attackers use social engineering for a range of objectives, from IP theft and influencing elections (in other countries) to targeted espionage.
  • Insiders - according to the 2018 Insider report, 90% of organisations feel vulnerable to insider threats. Insiders know your systems and your data, and can cause maximum damage.

The main reason social engineering is the most widely used technique in the attacker’s toolkit is that it requires very little infrastructure or cost, yet yields among the highest returns.

To fully understand the effectiveness of social engineering, we have to dive deep into case studies and tactics - why they work, and what companies and individuals can do to detect attacks and protect themselves. This will be covered in parts two and three.

Monday, 4 March 2019

Supply and Demand, Risk and Severity – Defining the Damage

Credit: https://wallpapercave.com/w/klH3B3q

Written by Will Lambert

Suppliers - we all have them, we all need them. Some are essential to our day-to-day business activities, whether they provide website hosting, power, heating or air-con systems and maintenance, payroll software, CCTV, education services, or physical or information security; the list goes on. With this almost never-ending list of suppliers, each poses an individual risk to our organisations. You should already have a good understanding of how suppliers interact with your workplace, how important they are, and which are most important. This can be described as a rank of criticality. If a supplier is high on this rank of criticality, we also need to understand what risks they present to our organisation.

Let’s revisit defining risk. A risk is ascertained as a threat (anything that can harm an asset) multiplied by a vulnerability (a lack of safeguard). The initial identification of risk is no easy task, but what is usually misunderstood is assigning a severity to a risk. For example, if we have a supplier who handles all our customer data (the asset), the risk is that they are breached (through a lack of safeguard); so what is the severity of that risk being realised upon our business?

Severity can be defined using a semi-quantitative guide called a likelihood/impact matrix, which is what we will use in this blog. Impact and likelihood metrics will differ between organisations, but let’s use the following as a brief example.

Likelihood can be defined using the following matrix:

The following can be used to help define impact for an organisation.

Using the above matrices, each asset that a supplier provides must be assessed to gauge the severity of the risk that supplier presents. Likelihood values can be ascertained through a qualitative assessment - a subjective, personal view of how likely a supplier is to fall victim to a cyber-attack; essentially, a gut feeling. However, we can also use Supplier Evaluation Risk Management (SERM), which provides a much more accurate picture of how resilient a supplier is to a cyber-attack and of any incident response preparation they have undertaken in order to return to a BAU state.

Before we look at likelihood, let’s have a quick review of impact. Impact is a bit trickier; it’s all about considering the effect a breach would have on your organisation. That includes any regulatory fines imposed by governing bodies such as the ICO (under the EU GDPR and DPA18), contractual penalties under PCI DSS, and, if you are an Operator of Essential Services (OES), whatever your Competent Authority (CA) imposes. Impact also covers items like reputational damage and remediation activities, such as the credit monitoring Equifax offered its customers after its 2017 breach.

Asset Value (AV), Single Loss Expectancy (SLE) and Annual Loss Expectancy (ALE) metrics can (and in the case of mature organisations should) be used to help guide the assessment of impact but this process can be a convoluted one, especially when you consider the fines and remediation activities, and is therefore a different blog post entirely!
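The arithmetic behind those metrics is simple even if populating the inputs is not: SLE = AV x EF (where EF, the exposure factor, is the fraction of the asset lost in one incident), and ALE = SLE x ARO (where ARO is the annualised rate of occurrence). A quick sketch, with every figure invented for illustration:

```python
# Illustrative quantitative-impact figures; every number here is assumed.
asset_value = 500_000              # AV: value of the customer-data asset (GBP)
exposure_factor = 0.4              # EF: fraction of the asset lost per incident
annual_rate_of_occurrence = 0.5    # ARO: one incident expected every two years

single_loss_expectancy = asset_value * exposure_factor                       # SLE = AV x EF
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # ALE = SLE x ARO

print(single_loss_expectancy)  # 200000.0
print(annual_loss_expectancy)  # 100000.0
```

An ALE figure like this gives you a defensible ceiling on what a mitigation control for that supplier relationship is worth paying for each year.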

Circling back to identifying likelihood values: essentially, we are asking ourselves how likely it is that this supplier will become compromised. The SERM approach allows us to ask how seriously our suppliers take information security and to gauge their responses. This is more than a simple gut feeling: it draws on industry best practices, applicable standards and anything else you feel is relevant to your business, incorporated into a questionnaire and sent to your suppliers.

Depending on the rank of criticality described earlier, matched with your organisation’s statement of information risk appetite and even consideration of possible impact levels, suppliers can be sent an in-depth Supplier Validation Questionnaire (SVQ). Responses will be reviewed by your information security team on return, then followed up with prompts for evidence of policies and processes, or even (where required) a visit to the supplier’s premises to ratify responses. As you move down the rank of criticality, a lighter-touch questionnaire should be used; you wouldn’t want to send a stationery supplier a 200-question SVQ unless you had a sufficient business requirement to do so.

As an example, Stan’s Stationery supplies your business with pens, paper, etc. Let’s give this particular supplier an impact rating of 1. As this supplier can inflict only a small amount of damage, we send Stan’s Stationery a light SVQ. The response from this supplier states that they have no information security measures in place, they have no policies or protection measures or even the slightest interest in information security. Therefore, the likelihood of their breach is almost certain - 5. We feed the impact and likelihood into the risk matrix and we get an overall risk rating of 5. See the Risk Matrix below.

This is a low impact, high probability of breach, but because we have validated the supplier, we know this for sure.
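The scoring in this example can be sketched in a few lines. The rating is simply likelihood x impact on a 5x5 matrix; the banding thresholds below are illustrative assumptions, not prescriptive values - each organisation sets its own.

```python
def risk_rating(likelihood, impact):
    """Severity on a 5x5 likelihood/impact matrix: rating = likelihood x impact."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_band(rating):
    # Illustrative bands only; thresholds are an organisational choice.
    if rating >= 15:
        return "high"
    if rating >= 8:
        return "medium"
    return "low"

# Stan's Stationery, first pass: likelihood 5, impact 1 -> rating 5
print(risk_rating(5, 1), risk_band(risk_rating(5, 1)))  # 5 low
# If the impact were later reassessed at 4: rating 20
print(risk_rating(5, 4), risk_band(risk_rating(5, 4)))  # 20 high
```

The second call shows how sharply the score moves when only the impact estimate changes, which is why validating what a supplier actually does for you matters as much as validating their security posture.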

It is important to realise that other business processes may need to be incorporated - a Data Protection Impact Assessment (DPIA) springs to mind. Suppose the SVQ response from Stan’s Stationery showed that they provide far more for your organisation than you first realised - in fact, they host your website or process payments, as brief examples. In that case, they process large amounts of personal data, so a breach could put you in front of the ICO and expose you to fines, dependent on your contracts and the circumstances of the breach. You will need to carry out a DPIA on this supplier if you have not already done so. With this new information, the impact level in this example changes from 1 to 4 (depending on your organisation’s information risk appetite), giving a risk score of 20 (see Fig 4 - Updated Risk Matrix) - a big step up from the original score of 5. A greater understanding of their information security practices will be required, and a deeper SVQ will need to be sent and validated.

Of course, Stan’s Stationery can be replaced by any supplier - this is a high-level overview of how SERM can be used. Depending on the number of suppliers you have, you may need to automate this process, or at least employ a managed service to manage your supply chain risk. Following a supplier’s response, your organisation will need to decide what action to take: either help them improve their information security practices and defences, or simply end the relationship. This is a cost/benefit analysis and a business decision, and SERM will help you understand the real cost behind each supplier.

For further information regarding supplier risk management, more blog posts can be found here:

  1. The Domino Effect
  2. Automating SERM