How We Got Here: A Brief Reflection on Cybersecurity’s Foundations

Computer technology as we know it has existed for the merest blip of time in human history. In less than 90 years we have gone from valves and punched cards to pushing the boundaries of quantum states in an attempt to achieve computations that would otherwise take millions of years. We landed people on the moon with computers no more powerful than the graphing calculators available in schools in the 1990s. To me, that is astounding. You could argue that the field of cybersecurity, albeit under a different name, was born at the same time as Colossus, with the first codebreakers using it to attack the Axis powers’ encryption.

Regardless, it wasn’t until computers became more accessible and people were given the opportunity to experiment more freely that the first virus, Creeper, was created in the early 1970s; the first anti-virus, Reaper, followed shortly after. Since then, we have seen an escalating rise in offensive and defensive computer capabilities. When the 1990s rolled around, computer interconnectivity exploded into homes and businesses around the world and the internet as we know it today took shape (in fairness, the internet’s precursors date back to the late 1960s, but it didn’t become wholesale accessible until the 1990s).

Why mention this? Context. Computers, networks, and widespread computer literacy only became commonplace just over 30 years ago. The people of that generation grew up with access to these tools and capabilities, and yet those capabilities became widespread almost overnight, with businesses thrown into the deep end of needing to adapt and adopt to keep up and remain competitive. They did that without expertise in the boardroom, without considering how they would implement those capabilities securely, and had to learn the hard way what the impact of this technology would be.

Today we are dealing with the legacy of adopting those systems at the speed with which they came into play. The generation that grew up programming VCRs, coding on BBC Micros, and grabbing gaming magazines for cheat codes is now entering boardrooms and making decisions for the next generation. We have to keep in mind that when a red team comes in, looks at a network, and identifies issues like poor credential hygiene, poor network segmentation, ineffective access controls, and improper administration tiering, we are looking at a network that may have been designed, torn up, merged, reimplemented, and reconfigured multiple times over decades, with no one starting fresh and building to principles we now recognise as necessary for security rather than competitiveness. That does not excuse these issues, but we do need to be cognisant of how we got here, recognise that we are still in the infancy of changing mindsets as we adapt to the implications of this technology, and accept that bolting on random security products will not solve the problems if we don’t address the foundations we started with.

If you’d like to explore how red teaming can help you uncover and address these foundational risks, feel free to get in touch.

Prism Infosec: Cyber Security Testing and Consulting Services

Abuses of AI

Much like Google and Anthropic, OpenAI have released their latest report on how threat actors are abusing AI for nefarious ends, such as using it to scale deceptive recruitment efforts or to develop novel malware.

It is no surprise that as AI has become more pervasive, cheaper to access, and readily available, threat actors are actively abusing it to further their own agendas. Having companies like Google, OpenAI, and Anthropic openly discuss the abuses they are seeing is therefore immensely helpful for understanding the threat landscape and the direction threat actors are taking.

These reports should be required reading at C-suite level. They contain nuggets of information that affect businesses from recruitment practices to securing the perimeter, and best of all, they are free to access.

Adversarial Misuse of Generative AI | Google Cloud Blog

Disrupting malicious uses of AI: June 2025

Detecting and Countering Malicious Uses of Claude \ Anthropic

For us at Prism Infosec, we not only use these reports to help inform our clients, but also feed them into our tabletop exercises and red team scenarios, so we can help our clients prepare for and defend against threat actors abusing these technologies.

If you would like to know more, please reach out to us.

Prism Infosec: Cyber Security Testing and Consulting Services

Why Not Test in Dev?

We are frequently asked by clients if we can run our red team tests in their DEV or UAT environments instead of production. We are told it’s identical to production: same systems, dummy but similar data, same security controls, same user accounts, etc.

We get it: DEV and UAT environments are there to de-risk threats to production. However, no matter how closely they resemble production, they are not what threat actors are going to target. No matter how similar an environment is, it won’t have the entire company working in it, helping to hide threat actor activity. If alarms go off in it, are we absolutely certain they will be treated with the same priority as alarms in the production system, especially if multiple alarms are already going off in production?

Red team testing is only effective in the live, production environment, because we need to ensure that the organisation can defend the network that is most critical to the day-to-day running of the business. If your DEV or UAT environments go down, how long can your business operate compared to if your production systems go down?

At Prism Infosec we do appreciate the concerns about allowing red team testing in production environments. We do not want to disrupt your business. That’s why we have an exceptionally robust risk management strategy: we collaborate and manage risks to ensure the business can protect itself against realistic threats without unforeseen disruptions.

Talk to us today to find out more. Prism Infosec: Cyber Security Testing and Consulting Services

Bait and Switch – Are You Accidentally Recruiting Insider Threats?

Over the last couple of years, we have seen a marked increase in criminal groups infiltrating companies, either by using AI and stolen identities or by fronting interviews with disposable candidates right up until the contract is signed, after which an alternative person shows up to start the job. Once they have their position, they often request greater privileges to gain access to corporate repositories holding useful information they can steal. In many cases, even when caught, they will simply vanish, corporate assets and all, requiring lengthy investigations, access audits, risk management headaches, and policy reviews of recruitment practices.

I have personal knowledge of one case where this happened to a multinational company. Whilst they were shocked and embarrassed to be the victim of such an attack, they did catch the individual quickly and were satisfied that they didn’t lose any sensitive data, even though the individual did get away with a corporate device (for all I know it’s now being used to inefficiently mine bitcoin). Regardless, this was a wake-up call for the company. They had heard about this sort of scam, assumed they could never fall victim to such an approach, and were utterly astonished when it happened. But they learned from it: they now factor it into their recruitment programmes and have put new safeguards in place, such as requiring the person to visit the office with their ID to collect their IT equipment rather than relying on remote verification, and devising interview questions that cannot be easily answered by AI.

This sort of scenario can be played out in a tabletop exercise for HR, Risk, Legal, and IT, to help you simulate what you would do should this happen to you. You can also play it out in a practical red team scenario, building on the tabletop exercise to help you understand how you can detect and defend against such an attack. At Prism Infosec we can help with both sorts of exercise, and with incident response should you ever be a victim yourself. Please feel free to reach out to us should you wish to know more.

Prism Infosec: Cyber Security Testing and Consulting Services

Underinvestment in Cybersecurity

In the last few decades IT systems have become a significant factor in every industry, increasing productivity, improving service offerings, and increasing the speed at which companies can deliver services. It is only right, therefore, that we ensure these systems are not abused, damaged, or misused in a manner which can undermine the organisation or its customers.

Whilst every industry wants to ensure that its IT systems continue to deliver massive benefits, the cost of securing those systems and keeping them secure is an area in which many companies underspend, as security is often viewed as a cost centre with no discernible benefit. This is down to a combination of economic, psychological, and organisational reasons.

Cybersecurity is seen as a cost, not an investment. There is no immediate return on the investment, as it does not generate visible revenue, and it’s hard to quantify the benefit of an attack not happening: if security is working and effective, there is no loss of service.

Companies will also often underestimate the risks they are running. Too many believe they are too small or unimportant to be targeted, without considering that any income a threat actor can squeeze out of a business, regardless of its size, makes it a potential victim. This can also be attributed to a lack of awareness of how damaging cyberattacks can be. Not every attack needs to result in ransomware; sometimes you can be a victim purely because of who your clients are, the data you hold, or who you are affiliated with. Not to mention opportunistic criminals who would seek to abuse your IT systems to mine cryptocurrencies!

At an organisational level, there can be many factors behind underinvestment. It can result from the C-suite not really understanding technical threats or how to prioritise them in the context of the business. CISOs can struggle to make the business case for financial investment when competing with growth-oriented spending like sales. There can also be overconfidence in existing defences: the fallacy that anti-virus and firewalls are all that is needed to keep you secure, combined with a “check-the-box” approach to compliance, can give a false sense of security.

What we see time and time again is that too many organisations only invest in their security after a breach or regulatory penalty. Security has traditionally only been prioritised after a failure, and not before one.

These issues have been identified by regulators in the financial industry and beyond. This is why schemes such as CBEST exist: not to force companies to spend money where they would rather not, but to validate security spend, demonstrate at board level the impact of underinvestment, and enable companies to move from a reactive culture to a proactive one. These regulator-led tests are not pass/fail events. They are about ensuring that organisations build resilience and capability, and maintain the trust they have worked hard to gain from their customers.

Prism Infosec are proud to be part of this industry – security should be a priority for every organisation and not just the regulated ones. We want to help our clients on their security journey, raising awareness, demonstrating the value of security investments, and supporting them to be trusted, secure and robust whilst achieving their goals.

If you would like to discuss how Prism Infosec can help your company, then please reach out to us:

Prism Infosec: Cyber Security Testing and Consulting Services

The Cost of a Breach

IBM’s 2024 Cost of a Data Breach report identified that the average cost of a data breach in the UK reached £3.58 million, an increase of 5% since 2023.

Verizon’s 2025 Data Breach Investigations Report suggested a 37% increase in reported ransomware attacks, with a median payout of $115,000 paid by 36% of victims, of which 88% were smaller businesses. Keep in mind, that is just the cost of decrypting the data; when you consider lost productivity, reputational risk, shareholder losses, service impacts, and potential fines, the cost skyrockets.

Even the European Union Agency for Cybersecurity (ENISA) has published a report discussing the impact of cyber security breaches across the financial sector; this reporting will only increase now that the Digital Operational Resilience Act (DORA) has come into force.

The news so far this year has identified a number of significant breaches: M&S, Co-Op, Harrods, Cartier, and North Face. More could be on the horizon, and the expectation is that this trend will only continue upwards.

Organisations do have tools to help them prepare for and potentially prevent these sorts of issues. Companies such as Prism Infosec offer red team engagements where, for a fraction of the cost of dealing with a breach, we simulate how these threat actors operate and help the organisation identify how it could be attacked, what it can do about it, and how it would respond if or when an attack occurs, to minimise the impact, disruption, and damage these actors profit from. If your organisation is serious about managing the risk of being breached, then do reach out to us at Prism Infosec: Cyber Security Testing and Consulting Services so we can discuss how we can help secure your business.

ENISA Threat landscape: Finance sector

2025 Data Breach Investigations Report | Verizon

Cost of a data breach 2024 | IBM

AI and Red Teaming

Red teaming is still fairly young as cybersecurity disciplines go – most of us in this part of the industry came in from penetration testing consultancy, or have some sort of background in IT security with a good mix of coding and scripting skills for developing tools. Our work often requires us not only to simulate threat actors as closely as we can, but also to manage the risks of our operations to avoid impacting our client’s business. This dichotomy of outcomes (simulating a threat actor whose objective is to disrupt, whilst simultaneously trying not to disrupt) may seem confusing, but we also need to remember what a red team is for. It’s to help our clients test their detection and response capabilities. The objective of the red team is almost incidental – it merely sets a direction for the consultants to work towards whilst we determine what our clients can and cannot detect, and what they do about it if a detection occurs. That latter part is where disruption is more likely to occur, but even there, we can manage the risks.

So where does AI come into it? We have all seen the news about AI taking over jobs in a number of fields, and red teaming is not immune from those fears. The problem is, most AI systems these days are just really good guessers – I prefer to think of them as almost expert systems rather than a true intelligence. By that, I mean you can train them to be exceptional at specific tasks, but if you go too broad with them, they really struggle. They don’t explain themselves; they can’t repeat steps identically particularly well; and they often forget or hallucinate critical elements when faced with large and complex tasks. A red team is a very large and complex series of tasks where forgetting or imagining critical steps will often lead to a poor outcome. Add live environments and risk management into that mix, and the dangers of impacting a client become uncomfortably high. As a result, I have not yet met a single professional in this industry who would be happy to take the risk of letting a red team run entirely on AI, and I don’t see that changing any time soon.

However, I do see a future in which AIs help co-pilot red teams. By this I mean that, if the privacy concerns can be addressed, I can foresee a point where a private, specialist red team LLM would be permitted to ingest data the red team acquires during an engagement (such as network mapping information, directory listings, Active Directory data, file contents, source code, etc.) and perform analysis on it. It could then provide suggestions on how the engagement should proceed. This would have the added benefit of being able to answer questions rapidly for the red team, helping them consider additional attack paths, identify additional issues in the environment, and suggest additional things to try. It could also quickly confirm whether the red team had any interactions with systems within the client environment, to deconflict if issues occur. In time I could even see this being a real-time benefit for client control groups, who would be able to interrogate the LLM for quicker answers as to what the red team are doing and what has been identified to date.

AI is here now, and it’s evolving. We can’t really ignore it as it becomes a tool used more and more in everyday life, and that means we need to find ways to make it work within the concerns we have. I personally feel that pushing AIs into smaller expert-system roles is the right way forward, as this allows them to fulfil the role of an assistant more fully. We also need to acknowledge that the public models have been trained unethically on source data taken without consent from authors and copyright holders. As their use grows, not only is there a considerable environmental impact, but I believe they will start to show strain in the near future. As the public further embraces these tools and uses them to generate new content, that AI-generated content will also be absorbed by LLMs. This risks a situation where the snake eats its own tail, turning the LLMs into an echo chamber, and we will see the quality of their output drop considerably. This will likely be compounded by people losing critical thinking skills, which ultimately will harm us more than the AIs can help us.

Data Hygiene

Most organisations that are breached and compromised are not breached because they are lax with security, have poor patching, or are gambling that they will never be a victim; instead, they usually suffer from poor data hygiene.

Users store data on desktops, in shared folders, and in online repositories (such as Jira, SharePoint, Confluence, etc.), sometimes without appropriate controls, encryption, or consideration for who else may have access to it. As a result, threat actors who establish a foothold will often spend time sifting through these data repositories, harvesting credentials and testing whether they are valid and what damage can be caused with them. This is a tactic we use in red teams to great success when completing objectives. The days of needing to throw zero-days and exploits at networks to compromise them are not quite done, but why would any threat actor waste an exploit when an organisation’s data hygiene is poor and they can get all the credential material they need just by looking in accessible file stores?

Unfortunately, hunting across corporate data stores for poorly secured passwords is not easy; in all my years of testing I’ve not seen a single solution that is 100% effective at it. Instead, it often requires multiple sweeps, policies, user education, users being provided with appropriate tools and guidance, amnesty periods, and, if all else fails, disciplinary measures. Often it is not addressed until after a breach occurs; worse still, most firms don’t realise how bad the situation might be.
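To make the idea of a “sweep” concrete, here is a minimal sketch of the kind of naive first pass a defender (or red team) might script themselves. The regex patterns and size limit here are illustrative assumptions, not an exhaustive ruleset: real credential hunting needs far broader coverage (API keys, connection strings, private keys, config formats) and sensible handling of binary files, which is part of why no single tool catches everything.

```python
import re
from pathlib import Path

# Illustrative patterns only; a real sweep would use a much larger,
# regularly maintained set of credential signatures.
PATTERNS = {
    "password assignment": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    "connection string": re.compile(r"(?i)\b(user id|uid)=\S+;.*(password|pwd)=\S+"),
}

def sweep(root: str) -> list[tuple[str, int, str]]:
    """Walk a file share and flag lines that look like stored credentials.

    Returns (file path, line number, pattern label) for each hit.
    """
    hits = []
    for path in Path(root).rglob("*"):
        # Skip directories and very large files in this naive pass.
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; a real tool would log this
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits
```

Even a crude pass like this tends to surface findings on a typical corporate share, which is exactly why repeated sweeps combined with policy and user education matter more than any single tool.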

At Prism Infosec, we conduct red teams which include analysis of your data hygiene, and we can help you address the issues we find.

DORA TLPT Guidance Update

Today the EU provided the long-awaited updated guidance in relation to DORA’s TLPT: DORA TLPT Guidance Update

This 30-page document further clarifies the requirements for Threat-Led Penetration Tests (TLPTs) under DORA.

We will be posting a more in-depth post about this in the very near future, but the key points that should be taken away are:

Who can Invoke a TLPT?

DORA’s TLPT requirements mirror the TIBER-EU methodology, processes, and structures – the same structure will be used for overseeing DORA TLPTs as for TIBER-EU engagements, and they will be overseen by either EU or national level authorities. The authorities are defined as the single designated public authority for the financial sector; an authority in the financial sector that has been authorised and delegated to manage TLPTs; or any competent authority referred to in Article 46 of Regulation (EU) 2022/2554.

Who is in scope for a TLPT?

It will be down to the national or EU-wide authorities to determine who is in scope for a TLPT; however, the guidance is clear that it should be restricted to entities for which it is justified. This can include financial entities that operate in core financial services subsectors, unless a TLPT cannot be justified for them.

Ultimately this means it will be at the regulator’s discretion, on a case-by-case basis, whether a TLPT should apply to any financial organisation. This will be based on an overall assessment of the organisation’s ICT (Information and Communications Technology) risk profile and maturity, its impact on the financial sector, and related financial stability concerns, which must meet qualitative criteria.

Article 2 of the update defines the specific requirements for identifying financial entities required to perform TLPTs. Essentially, the authorities will consider the following factors:

  • The size of the entity
  • The extent and nature of the entity’s connections with other entities in the financial sector of one or more EU member states
  • The criticality or importance of the services the entity provides to the financial sector
  • The substitutability of the services the entity provides
  • The complexity of the entity’s business model
  • The entity’s role in a wider enterprise with shared ICT systems

The authorities will also consider the following ICT risk-related factors:

  • The entity’s risk profile
  • The threat landscape for the entity
  • The degree to which their critical, important, and supporting functions depend on ICT systems
  • The complexity of the entity’s ICT architecture
  • The entity’s ICT services which are supported by third parties (including the quantity and contractual arrangements for third party and intra-group service providers)
  • The outcomes of any supervisory reviews relevant for assessment of the ICT maturity of the entity
  • The maturity of ICT business continuity plans and the ICT response and recovery plans
  • The maturity of ICT detection and mitigation controls
  • And whether the entity is part of a group active in the financial sector of the EU that shares ICT systems

The expectation is that a TLPT will be required for entities such as:

Credit institutions, payment and electronic money institutions, central securities depositories, central counterparties, trading venues, and insurance and reinsurance undertakings. The definitions for these types of entities are included in the update; many relate to their definitions in other EU articles (all referenced), to total payment transaction amounts within a two-calendar-year period, or to undertakings with gross written premiums (GWPs) or technical provisions above specified levels. It should be noted, however, that these same entities could be excused from a TLPT if the authority agrees it is inappropriate.

The authority is also required to consider points such as market share positions, and the range of activities the financial entity provides when making this assessment.

Furthermore, the criteria must also be applied and assessed in light of new markets as they enter the financial sector, such as crypto-asset service providers authorised under Article 59 of Regulation (EU) 2023/1114 of the European Parliament and of the Council.

Shared ICT Service Providers

The guidance also touches on financial entities that share the same ICT service provider. In those cases, it will be down to the regulator whether a shared or entity-level assessment is conducted, if a TLPT is deemed necessary.

If a TLPT is deemed required by the authority, the financial entity will be contacted and clearly presented with the authority’s expectations with regard to testing.

This regulation update comes into force 20 days after its publication (8th July 2025); after that date, entities could be contacted by letter from the authorities notifying them of the requirement to conduct a TLPT.

Additional Notes

Much of the rest of the regulation update covers the delivery of a TLPT with regard to the roles, responsibilities, and expectations of TLPT providers (both threat intelligence and red team/penetration testing providers). It also covers the basic expectations of the financial entities being tested with regard to secrecy, procurement, and scoping of TLPT engagements. We will cover these topics in more detail in a later blog post.

TIBER-BE Insights

The TIBER-EU framework is designed to help organisations improve their cyber resilience.

It has multiple stages: initiation (scoping, procurement, planning), threat intelligence, penetration testing (red teaming), purple teaming (attack replays, tests of additional untested controls, and variances in attack methodologies, working alongside the Blue team), and closure (reporting, remediation plans, attestation).

As a framework, TIBER can be used by any organisation, even though it was created for financial institutions. However, using the framework does not make your organisation compliant with the regulator or with DORA unless it is supported by an EU TIBER regulator team and a TIBER test manager.

This information was presented and discussed at the NBB (National Bank of Belgium) TIBER-BE TLPT (Threat-Led Penetration Testing) launch event. The morning session was only for institutions who are, or will be, undergoing a TIBER engagement, to inform them of the framework. Prism Infosec were invited to the event as a supplier, and joined other suppliers and the institutions to mingle and attend the relevant presentations.

The NBB TIBER-BE team discussed their implementation of TIBER and how it will align with DORA. At present, additional guidance on the TLPT element of DORA is still pending (and has been since February), though it is expected at some point in June, which should clarify the TLPT phase, requirements, and implementation in greater detail. Until it arrives, DORA-compliant TLPT exercises cannot begin.

During the TLPT launch event there were a number of presentations. These included a keynote from the newly formed Belgian Cyber Force, a presentation on NIS2, and one on the Belgian Cyber Fundamentals (CyFun) framework (similar in appearance to the UK’s Cyber Essentials), which is linked to the Belgian Centre for Cybersecurity, an organisation with a role similar to the UK’s NCSC that can support Belgian entities during cyber incidents.

We also had a presentation on how one multinational Belgian organisation implemented its own internal red team, what it learned along the way and, importantly, how it measured and demonstrated to the board how the organisation’s maturity and capability to defend itself improved over time.

The panel discussion contained a number of useful insights from a variety of C-suite individuals, some of whom had been through TIBER and others who were waiting to go through it. They shared insights into how to plan and prepare for engagements, suggesting organisations run a small red team before their TIBER to understand the process. They recommended choosing scenarios that will deliver key learnings, and doing as much preparation for contingencies (leg-ups, backup accounts, information) as possible.

These presentations, panels, and even the quiz were all backed by networking discussions over food and soft drinks.

All in all, it was an insightful and useful event!