As someone who spends their working life protecting others from risk, you have every right to bullet-proof your own career. Talk to a recruitment firm with the experience and reach to put you in touch with organisations that have the need, appetite and budget to place information security right at the top of their priorities.
Most of our clients are operating within financial services. This places them right at the frontier of cyber risk. When a robber was asked why he robbed banks, he famously answered “because that’s where the money is”. Nothing’s changed. Whether it’s pounds, dollars or bitcoins, the organisations storing or transferring them have an acute need for cyber specialists to safeguard their operations.
So if you can demonstrate experience-hardened cybersecurity skills gained on a permanent or interim basis, we feel certain that our clients will want to talk to you.
International Professional Services Consultancy
The candidate was engaged on a major DLP programme with a leading UK building society.
Global Leader in Application Security Risk Management
Major Aviation Brand
We want to talk to you. Drop us a line and tell us about yourself!
Our lives are becoming increasingly integrated with smart technology. As computers cease to be stand-alone devices and tech is embedded into everyday objects, the Internet of Things has come into existence and is developing rapidly.
The Internet of Things (IoT) refers to physical objects containing sensors, software, or other technologies to connect and exchange data with other devices and systems over the Internet. It is essentially a way to make inert objects into smart hardware.
It differs from the Internet itself in that it can generate information about connected objects, analyse and share that data, and make decisions based on what it collects.
Examples of the IoT in action are home automation, such as operating lights, heating and security cameras controlled by smartphones or voice-activated assistants like Alexa or Siri.
The IoT also has substantial industrial applications: in healthcare, for remote monitoring of patients; in transportation, for traffic control, parking and vehicle management; and even in agriculture, where data can be collected for a range of environmental needs, including temperature, rainfall, humidity, wind speed, pest infestation and soil content. It also has uses in the military, manufacturing and many other commercial sectors.
IoT technology is even being used in a humanitarian capacity, with start-up Moeco developing wireless trackers to monitor the safe transit of aid packages and other vital supplies to dangerous and hard-to-reach locations in war-torn Ukraine.
The benefits of IoT are clear to see across all walks of life for both individuals and companies, offering cost and time savings through automation, enabling enhanced security possibilities and streamlining processes. Recent research shows that 83% of organisations have significantly improved their efficiency by introducing IoT technology.
On the downside, there are concerns over cyber security, with news just this week of a DNS bug that could allow cyberattacks on IoT devices. Many businesses recognise the privacy risks of exchanging and storing large amounts of data. They know that customers need to feel assured that their information is being managed safely and responsibly, and that they retain ownership of their data.
To combat security risks, there have been calls for global regulations for IoT, and in response, governments around the world are starting to introduce legislation around IoT security; however, in most cases, this is still in its infancy, and there is no joined-up approach with different countries and regions implementing their own rules.
However, not all challenges are bad; some are creating opportunities within the sector. It's estimated that the number of active IoT devices will surpass 25.4 billion by 2030, and as the industry expands and developments are made, there is a constant need to update technology to enable devices to support more complex functions, find innovative ways to reduce power consumption and allow ever greater numbers of 'things' to be connected.
The upside is that the tech job market is looking very healthy. There are an increasing number of opportunities for tech professionals in roles such as IoT developer, embedded system designer, infrastructure architect and IoT solutions engineer, as businesses realise they need more IoT specialists to support developments.
It is predicted that IoT has the potential to generate $4-11 trillion in economic value, with up to 152,200 IoT devices connecting to the Internet every minute by 2025. With legislation being gradually introduced around the globe to make it safer for users, it would appear that the industry will continue to see substantial growth for a long time to come.
If you are looking for your next career move in technology or engineering, or if you are looking to grow your IoT team, contact our expert specialist recruiters at McGregor Boyall and see how we can help you achieve your tech goals.
As technology evolves, so does the way we manage our money. Open Banking is a growing sector within the banking industry and could be set to change how we manage our finances completely, but how much do people know about it? In this article, we explore what Open Banking means, how it differs from traditional banking and what we can expect in the future…
Open Banking is the practice of allowing third-party financial service providers open and secure access to customers’ banking and financial data through application programming interfaces (APIs). Open access is only granted once a customer has given their consent for data to be shared.
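The consent-gated access just described can be sketched in a few lines. This is a toy model only: the `Consent` class, the `balances` scope and the account data are invented for illustration and do not reflect any real bank's Open Banking API.

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    # A customer grants a named third party access to specific data scopes
    customer_id: str
    provider: str
    scopes: set = field(default_factory=set)
    granted: bool = False

def fetch_balances(consent: Consent, accounts: dict) -> dict:
    # Data is only released if consent has been granted and covers this scope
    if not (consent.granted and "balances" in consent.scopes):
        raise PermissionError("customer consent not granted for balances")
    return accounts[consent.customer_id]

accounts = {"cust-1": {"current": 1250.00, "savings": 8000.00}}
consent = Consent("cust-1", "budget-app", scopes={"balances"})

# Before the customer approves, access is refused
try:
    fetch_balances(consent, accounts)
except PermissionError:
    print("access denied")

consent.granted = True  # the customer approves via their bank
print(fetch_balances(consent, accounts))
```

The key design point mirrors the article: the third party never holds the data by default; every read is conditional on an explicit, scoped grant from the customer.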
The main aims of Open Banking are to provide consumers with more control over their finances, to offer faster and more accessible banking processes via third-party platforms, and to allow financial institutions to provide more tailored services to their customers.
It is generally felt that traditional banking methods have stunted innovation in the financial sector as customer data is usually stored but not analysed. Open Banking allows banks to understand more about their customers to identify better products and services that will benefit them.
Unlike traditional banking, where the bank holds client data and offers customers a standard set of products, Open Banking creates a network or ‘ecosystem’ where the individual’s needs are at the centre of the process. Data about how customers save, borrow and transact can be shared quickly and easily between financial institutions, technical platforms and non-banking third parties. These institutions can then provide a wide range of tailored products and services for managing and moving money to provide the best option for the individual.
Research shows that in 2020, 24.7 million individuals worldwide used Open Banking services, with forecasted figures suggesting this could reach 132 million users by 2024, so clearly, open banking is gaining momentum.
And you can see why when you consider that products and services can be personalised to the end-user. Open Banking provides greater customer choice and encourages banks to be competitive, which benefits the consumer. For instance, it's pretty unlikely that anyone would want to pay higher interest rates or expensive overdraft charges when they can shop around.
In addition, Open Banking offers the benefit of consolidating account information, allowing customers to monitor multiple bank accounts via one app. This gives people a better understanding of their finances, enabling them to manage their money more effectively and make more informed decisions.
Open Banking can assist with global financial activities via payment platforms, such as streamlining the trading of shares and cryptocurrency and increasing the ease and speed of domestic payments, foreign exchange and international transactions. Open Banking via payment platforms can also simplify opening new overseas bank accounts, especially in countries where there may be issues around residency regulations when using traditional banking methods.
Financial institutions are also beginning to enjoy some benefits of Open Banking, especially challenger banks and fintech organisations that are less familiar to consumers. Open Banking provides them with a platform to showcase their innovative products and services. However, even the big banks are seeing some advantages whereby consolidated data on Open Banking apps can reduce the number of back-office staff needed to manage customer accounts and record individual transactions.
The evidence would suggest that the future of Open Banking is a bright one. Just three weeks ago, the UK government and UK regulators renewed their support for growth in the sector, announcing plans to establish a Joint Regulatory Oversight Committee to oversee the development of Open Banking. The government also estimates that by September 2023, 60% of the UK population will be using Open Banking payments.
As the sector grows and companies expand their Open Banking services, recruiters report a rise in vacancies in Open Banking and fintech-related jobs. The market is now buoyant, offering many opportunities for professionals in the field.
In other developments, Open Banking could threaten credit card companies. With Open Banking making it increasingly easy to carry out A2A (account to account) digital payments without the need for cards, credit cards could gradually become a thing of the past. However, there is a good chance that credit card companies will find a way to reposition themselves to be part of the process rather than disappear completely.
It’s not just credit cards that could become obsolete. Open Banking also facilitates a new real-time digital payment method called ‘Request to Pay’. This is a way for bills to be settled between individuals or businesses using a real-time messaging service, usually via a banking or fintech app, which allows billers to request payment instantly and electronically. When a request is received, customers can pay in full, pay in part, ask for more time, communicate with the biller, or decline to pay. The new payment method can help prevent fraud and money laundering and potentially replace credit cards, invoices, and direct debits; it has been gaining traction since its launch in May 2020.
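The payer's response options described above can be modelled as a small state machine. This is a toy sketch under my own naming; real Request to Pay schemes define their own message formats and flows.

```python
from enum import Enum

class RtpResponse(Enum):
    # The options a payer has when a Request to Pay message arrives
    PAY_IN_FULL = "pay_in_full"
    PAY_IN_PART = "pay_in_part"
    REQUEST_EXTENSION = "request_extension"
    DECLINE = "decline"

def settle(amount_due: float, response: RtpResponse, paid: float = 0.0) -> float:
    # Returns the balance still outstanding after the payer's response
    if response is RtpResponse.PAY_IN_FULL:
        return 0.0
    if response is RtpResponse.PAY_IN_PART:
        return max(amount_due - paid, 0.0)
    # An extension request or a decline leaves the full amount outstanding
    return amount_due

print(settle(100.0, RtpResponse.PAY_IN_PART, paid=40.0))  # 60.0
```

The point of the enum is that, unlike a direct debit, the payer actively chooses an outcome for each request rather than the full amount being pulled automatically.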
Moreover, it could be that the future of Open Banking is the future of all banking, with high street banks, including HSBC, Lloyds and Citibank, amongst others, now offering Open Banking options for customers. The big banks themselves are keen to get on board with Open Banking rather than resist the changes and get left behind.
Open Banking has a long way to go before it reaches its full potential. It will be exciting to see how it can improve financial services for institutions and customers alike and perhaps change how we manage our money forever.
If you’re looking for a new challenge in the banking or tech industry or if you want to recruit the best candidates for your team, talk to our expert recruiters today at McGregor Boyall.
As the world of virtual currency edges ever further into the mainstream, banks and governments are becoming increasingly aware of the risks involved for investors and the need to apply formal regulations to the system to protect people from theft and scams.
Cryptocurrencies or crypto-assets are digital money forms that use encryption to protect and authenticate online payments. They are designed as a way to transact via a decentralised computer network using virtual ‘coins’ or ‘tokens’, removing the need for a central bank or government to oversee or maintain the process.
There are now thousands of different cryptocurrencies globally, such as Bitcoin and Ethereum. They can be bought and traded using distributed ledger technology such as blockchain, whereby a database of 'blocks' securely records and links details of all cryptocurrency transactions. New coins enter the market via the blockchain as crypto miners, who validate transactions, are rewarded with newly generated tokens.
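The hash-linking that makes a blockchain tamper-evident can be shown with a minimal sketch. This is an illustration of the general principle only, not the format of any real chain, and it omits mining, signatures and consensus entirely.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form so any change alters the digest
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    # Each new block records the previous block's hash, linking the chain
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    # Tampering with any earlier block breaks every later prev_hash link
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))                          # True
chain[0]["transactions"][0]["amount"] = 500     # try to rewrite history
print(is_valid(chain))                          # False
```

Because each block commits to the hash of its predecessor, rewriting an old transaction would require recomputing every subsequent block, which is what makes the ledger so hard to alter in practice.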
Like most things in life, investing in cryptocurrency has pros and cons.
Those who are ‘pro-crypto’ will be the first to tell you that there are several advantages to transacting in this way, including faster payments, no single point of failure during the process and none of the transaction fees that banks would typically charge. Cryptocurrency can also offer some anonymity to users, as there is no need to register with a third-party financial institution to complete payments. Many cryptocurrency supporters see it as a way to level the playing field for global payments, as it removes the necessity for currency exchange in cross-border transactions.
In addition, many investors have started buying cryptocurrency to generate a profit. Although there are no guarantees, Bitcoin prices have risen as much as 65% in one day, and the cryptocurrency market is now worth over $2.2 trillion compared to only $10 million in 2011.
However, there are some significant disadvantages to cryptocurrencies too. The market is highly volatile and can fluctuate unpredictably, meaning you could lose vast amounts of money as quickly as you could make it; the market has been seen to fall by 25% just a few days after a significant rise.
Not only does the value change constantly, but unlike credit cards or other traditional payment methods, crypto transactions come with no legal protection, meaning they are uninsured and usually irreversible. Governments also don’t back cryptocurrencies, so if you hold your investment in a digital wallet with a third party, you will receive no assistance in getting your money back if the wallet were hacked or the holding company collapsed.
Another issue is illegal activity with criminals using crypto for tax avoidance, money laundering and illicit purchases, and hackers often favouring cryptocurrency payment in their ransomware demands. In addition to this, scammers frequently use cryptocurrency to either offer bogus investment opportunities where they claim to be able to grow your crypto investment by transferring it all to them, or they offer to reward you for investing with them before walking off with all of your digital cash.
Finally, and perhaps somewhat surprisingly, there is also an environmental downside to cryptocurrency, as its mining requires a massive amount of energy. This is expensive and detrimental to the environment as it creates a considerable carbon footprint.
When you consider the risks involved for organisations, individual investors, and the environment, it is easy to see why governments and financial institutions are looking to regulate the market as crypto grows in popularity.
So far, regulations are mainly still in the planning stage. However, the US Department of Justice did release its Cryptocurrency Enforcement Framework back in October 2020 and has successfully prosecuted several individuals for illegal activity involving crypto-assets.
Things are starting to move forward more quickly across the board now, driven in part by the need for the international community to ensure that, as the war continues in Ukraine, sanctions enforced on Russia can’t be sidestepped by dealing covertly in crypto.
It is reported that President Biden is about to sign a new executive order on cryptocurrency and central bank digital currencies (CBDCs) in the next few days, which will explore national security, the economic impact of digital assets and the future of money as a whole, with the potential for the creation of a central bank digital currency that could be regulated and controlled by the government.
Meanwhile, the European Union will also vote this week on its Markets in Crypto Assets (MiCA) framework, which includes a proposal for issuing licences to crypto companies to help ensure financial stability and investor protection throughout the continent.
The UK government has indicated that it fully supports the innovation and benefits of cryptocurrency, but it has also announced plans to legislate against misleading crypto advertising, ensuring that promotional literature is held to the same standard as other financial advertising and is ‘fair and clear’, to increase consumer protection.
It would appear that there is a global shift that will see the crypto space becoming more strictly monitored and regulated to protect investors and reduce illegal activity. However, these new rules are also likely to slow processes and raise costs for legitimate crypto companies, which could reduce the attraction of investing. With governments also considering the option of creating digital currency for their central banks, it could be that one day in the future, we all use cryptocurrency as our only legal tender.
What drives some people to hack into computer systems? There can be no doubt that money is a huge motivation for many cybercriminals, with around 86% of data breaches being financially motivated, according to an investigation carried out in 2020. But is it the sole reason for computer hacking?
As Europe sits on the brink of war, reports this week show that 74% of ransomware revenue goes to Russian-linked hackers, with Russia, China, the USA and Iran ranking among the top 10 countries with the most hackers globally.
A cyber hacker is any unauthorised user who breaks into an individual’s or organisation’s computer systems. They often install dangerous malware such as Trojan Viruses or Ransomware without the owner’s knowledge or consent to steal, change or destroy information. Generally, hackers have high technical ability and expertise to breach security software and access personal or confidential information.
Perhaps somewhat surprisingly, there are good and bad hackers out there, and different types of hackers have different motivations for their activity. Here we explore a few:
Black Hat Hackers: These are the bad guys. They intend to use scams and hacks to steal funds or sensitive information from individuals, businesses and banks, either to make money by stealing it directly from hacked accounts, selling the information they access to other organisations on the dark web for a profit, or by holding the victim to ransom, demanding cash to remove the malware they install.
Nation-State Hackers: Some countries have officially employed hackers to carry out government-backed cyber-attacks with the aim of either releasing information to the public to cause political unrest in an enemy state or attacking an enemy country’s websites and servers to cause disruption. They may also use the opportunity to gather military intelligence information. While this is still criminal activity, it could be argued that these hackers are classed as good or bad depending on whose side you’re on.
Corporate Espionage: Employed by companies, these types of criminal hackers are mainly tasked with stealing intellectual property such as trade secrets, business plans or financial data from competitors to gain a competitive advantage or damage another company’s reputation.
Hacktivists: Not driven by money and perhaps in some cases with good intent, these hackers often work in groups to make a political, ethical or social statement. They tend to either publicise hacked information that will embarrass an organisation or create mayhem by disrupting a company’s computer network and making changes to their website to post their message. Both are intended to advertise their cause and expose what they consider wrongdoing.
Revenge Hackers: Some hackers want to take revenge on an individual or organisation they feel has wronged them somehow. Motivated purely by anger, this type of hacker is just looking to inflict virtual pain on the victim through methods such as locking their devices, deleting data or even hijacking their social media accounts to post inappropriate content.
Just for Fun!: There are hackers out there who like to cause chaos. They want to challenge themselves and prove to fellow hackers what they are capable of to gain notoriety. They don’t have any real motivation other than infamy and their thrill from creating disruption.
White Hat Hackers: This is the only type of hacking that is considered legal. White hat hackers are usually computer security experts employed by companies to protect them from cybercriminals. They use the same methods as illegal hackers but with the organisation’s permission, looking for gaps in a network’s security to prevent or fix any threats to the system.
Red Hat Hackers: Like White Hat Hackers, they fight off Black Hat Hackers, but these are vigilantes who take it upon themselves to hack into networks of their own volition. The difference is that they are not invited to do so and often cause as much harm as good, employing quite ruthless techniques such as installing additional malware to counter the original threat. Therefore, this is still classed as an illegal activity.
Hackers will always find new ways to break into computer networks but keeping your cyber security updated is key to combating the problem. Installing the latest security software and regularly backing up data can both assist with this, as well as the option to employ White Hat Hackers to look for holes in your security systems if your budget allows.
Educating staff in best practices with their devices is another good way to avoid possible hacking issues, including being mindful of opening links or attachments from unknown email addresses, using passwords and encryption to protect sensitive information, only downloading apps from reputable sources and avoiding logging into accounts while using public WiFi.
No system is entirely safe from hackers, but understanding their motivations and knowing what measures to put in place to combat the problem can help keep your computer systems safe.
If you are looking for a new role in cyber security, risk, compliance, IT or any other business function, get in touch with McGregor Boyall today to discover how our expert team can help.
The greater our reliance on technology, the more attention we must pay to cybersecurity. This has become crystal clear during the pandemic, where many organisations have invested in digital infrastructure and have more digital information than ever in the face of increasing cyber threats. However, it is notoriously difficult to protect against all threats, with several high-profile organisations falling victim to attacks such as ransomware during recent years. The pressure is on to develop more effective cybersecurity measures — is self-repairing technology the approach we need?
There has been much recent buzz about self-repairing (sometimes called self-healing) technology. Companies like ABN Amro and Absolute Software have been developing this cutting-edge technology, and many firms are already using it. The core feature of self-repairing technology is automation — it must be able to independently identify an incident, repair the damage and prevent the issue from arising again.
This new technology takes inspiration from the human immune system. The idea is to accept that some attacks are inevitable and instead focus on a quick and efficient recovery. ABN Amro has drawn on the expertise of immunologists using the principle of disposability to refine their software. In a human body, disposability means that some uninfected cells are periodically replaced alongside potentially infected cells. Systems will use behavioural analytics to detect suspicious activity and quickly respond to changing conditions.
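A toy version of the detect-quarantine-restore loop driven by behavioural analytics might look like the following. The z-score threshold, the login metric and the "restore from baseline" step are all invented for illustration; they are not how any named vendor's product actually works.

```python
import statistics

def is_anomalous(history: list, latest: float, z_threshold: float = 3.0) -> bool:
    # Flag activity that deviates sharply from the behavioural baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(latest - mean) / stdev > z_threshold

def self_heal(history: list, latest: float) -> str:
    # Detect suspicious activity, quarantine it, and restore a known-good state
    if is_anomalous(history, latest):
        return "quarantined and restored from baseline"
    history.append(latest)  # normal activity extends the baseline
    return "ok"

logins_per_hour = [4.0, 5.0, 6.0, 5.0, 4.0]
print(self_heal(logins_per_hour, 5.0))    # ok
print(self_heal(logins_per_hour, 500.0))  # quarantined and restored from baseline
```

The immune-system analogy is visible even in this sketch: the system carries a model of "normal", treats sharp deviations as infection, and prioritises rapid recovery over perfect prevention.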
Self-repairing technology is quicker to respond to threats than traditional methods. Often, malware can lie undetected for several months, and only then can IT teams attempt to fix the issue. In contrast, self-repairing technology can often spot and repair a problem in a matter of days. For example, using traditional methods, Adobe found that the average time to correct a data batching failure was 30 minutes — self-repairing technology reduced this to just 3 minutes. In addition, many self-repairing systems proactively seek out design flaws and rebuild themselves to become more resilient. Some can even monitor everyday wear and tear to predict when a device might be likely to malfunction and recommend when action is required.
In the face of IT shortages, self-repairing technology may become an essential component of cybersecurity. Problems can be quickly isolated before escalating into a full-blown data breach. Most self-repairing technology utilises a container system, which allows it to remove suspicious activity before it reaches the central system. This local intervention protects the broader digital infrastructure from harm.
Increased automation would allow IT professionals to focus on the tasks that add value to the company rather than solely fighting the growing wave of cyber threats. However, it isn't easy to know whether this new technology can be trusted to be completely independent. IT staff would likely be expected to monitor and work with self-repairing technology.
New technology is often expensive. Organisations will need high-quality self-repairing technology that works across devices and deployment scenarios. However, effective implementation of self-repairing technology should save an organisation money. According to a recent survey by Red Hat, up to 30% of infrastructure service tickets could be resolved by self-repairing technology. Therefore, the potential savings could far outweigh the initial costs.
Given the sky-rocketing rates of cybersecurity attacks and the growing sophistication of threats, new approaches are vital. Self-repairing technology is one of the most exciting new developments in recent years and could represent a new phase of cybersecurity technology. Furthermore, in complex IT environments, at least some level of automation is essential — so why not embrace the next stage of cybersecurity development today?
The payment industry is rapidly evolving. With the rise of fintech companies and challenger banks, financial institutions are constantly looking for innovative ways to stay competitive.
One way to gain a competitive edge is through payment hubs. Not widely adopted until recently, payment hubs provide a single point of payment for companies to automate and execute all of their transactions, improve efficiency, increase fund control, and reduce the risk of human error and risk from using multiple payment processes.
Originally, payment hubs were not popular with banks, as the first generation of available solutions didn’t deliver enough functionality.
However, as legacy systems are becoming outdated and unable to keep up with advances in technology, such as cryptocurrency and digital payments, many financial institutions now see payment hubs as the only credible way forward to keep up with the competition.
As well as increasing efficiency and reducing costs, payment hubs allow for real-time payments, fraud protection, cross-border payments, and virtual accounts, all of which benefit both banks and companies alike when making transactions.
Another advantage of many payment hubs is that they are ISO 20022-compliant, meaning they conform to the international standard for exchanging electronic messages between financial institutions. This is an essential consideration for companies and banks, as SWIFT estimates that 87% of high-value transactions will have migrated to ISO 20022 by 2023.
Many financial institutions have already invested heavily in payment platforms that no longer fulfil their needs. The issue for banks is to decide if it is worth trying to adapt these existing solutions to create a payment hub that stands up to current and future needs or start again from scratch, which could be time-consuming and expensive.
The question has become such a topic of discussion that FinExtra, a leading independent information service for the global fintech community, held a webinar this month on precisely this issue: should banks rethink how they invest in payment hubs?
Suggestions to resolve the problem include a phased approach whereby banks could implement payment hub solutions to address short-term needs capable of integrating with a broader payment hub platform later, minimising the immediate outlay while gradually modernising their payment infrastructure.
Whatever financial institutions decide, businesses are beginning to demand streamlined payment options. With the need to keep up with the competition and stay compliant with new regulations, payment hubs seem to be the way forward.
We hope that this insight into the future of the payments industry has provided some helpful information. If you work in the payments sector or any of our specialist areas and you’re thinking about a career move, talk to McGregor Boyall today and find out how we can help.
The UK government has declared their intention to become a ‘global science and technology superpower’. As a result, there has been plenty of investment into research and development in recent years, and a number of exciting new projects in everything from education to military technology. Here is a quick overview of some of the technology that government departments are using, and some projects that have been carried out so far.
The NHS and Department of Health recently invested £10 million in two IoT-led projects. One, referred to as Technology Integrated Health Management, involves a network of sensors, monitors, wearables and other devices. The technology is used to help those with dementia and other cognitive impairments to live safely and independently within their own homes. The IoT network monitors the user’s health and alerts health professionals to step in when required. A further project, the Diabetes Digital Coach, uses glucose monitoring devices to guide users through an e-learning process to better manage their condition.
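The alerting pattern behind this kind of remote monitoring can be sketched very simply. The thresholds and readings below are invented for illustration and are not clinical guidance or the actual logic of either NHS project.

```python
def check_glucose(readings_mmol: list, low: float = 4.0, high: float = 10.0) -> list:
    # Return (reading index, alert message) for readings outside the safe range
    alerts = []
    for t, value in enumerate(readings_mmol):
        if value < low:
            alerts.append((t, "low glucose - notify clinician"))
        elif value > high:
            alerts.append((t, "high glucose - notify clinician"))
    return alerts

# A day's hypothetical sensor readings: two are out of range
print(check_glucose([5.5, 6.2, 3.1, 11.4]))
# [(2, 'low glucose - notify clinician'), (3, 'high glucose - notify clinician')]
```

In a real deployment the device streams readings continuously and the alert triggers a notification to a health professional, but the core idea is the same: sensors turn raw measurements into actionable events.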
These projects have tangible benefits for those taking part and could potentially reduce costs for healthcare services. However, they also advance IoT by providing an opportunity for a large scale study in an applied environment. The data collected will provide valuable insight into how IoT devices are used in the real-world and help further develop such platforms.
Blockchain is essentially a digital ledger that makes it near impossible to alter or hack records. It is already proving its worth in industry, particularly for banking, where it can help keep transaction records safe. However, it also has huge possibilities for government, given the implications for tackling fraud, error and inefficiency.
So far, the Food Standards Agency has trialled blockchain for livestock distribution, using the digital ledger to share information such as age and veterinary history. The Department for Work and Pensions have also tried blockchain to see whether a bespoke cryptocurrency, Govcoin, could make payments to benefit claimants easier and more secure.
The Ministry of Defence (MOD) and the Home Office have been funding research into autonomous robotics, with the goal of creating a machine that can investigate chemical and biological hazards. The Merlin Robot is the first prototype of this research, developed by industry partner, HORIBA-MIRA. In January 2021, the Merlin Robot successfully conducted chemical reconnaissance tasks over an area of 10,000 square metres. The robot uses artificial intelligence (AI) for object recognition and is equipped with autonomous search and mapping tools. It is a hugely exciting result, given the potentially life-saving capabilities of this technology.
Government has taken a special interest in AI, recently listing it as a critical element of driving economic growth and innovation in the UK. The government’s data science unit has previously used AI to analyse results and attendance records to see whether a school is at risk of ‘failing’ and requires special attention. The recent launch of the ‘National AI centre’ aims to take AI in education further. Although new, the initiative has been inspired by projects such as Bolton College’s digital assistant, Ada, which provides administrative support to students, freeing up staff to focus on teaching.
The HM Courts and Tribunals Service is currently undergoing a £1 billion transformation to modernise their services. One important part of this is user design and research. During COVID-19, many aspects of the legal system were kept going with remote hearings. Now, a new digital platform is being rolled out to deal with approximately 1.5 million criminal cases a year. User experience is a key aspect of this modernisation, ensuring that services are easy to navigate and that all aspects of virtual hearings are considered for those involved.
It is clear there is no shortage of ambitious government projects. This is great news for those across the technology industry, as successful projects can help stimulate demand for key digital skills. Finally, when government involvement is done well, it can increase public trust in emerging technologies, which is a good thing in all respects.
It is well documented that human error is a major weakness in information and cyber security — around 90% of UK data breaches are caused by this. As a result, organisations are increasingly turning to behavioural science to understand their vulnerabilities and improve defences. Cyber security company CybSafe raised $7.9 million in a recent funding round for their behavioural security platform. They already have an impressive list of clients, including HSBC and NHS trusts. It is clear startups like CybSafe are filling a real gap in the market, but what is the science behind such companies, and how are these techniques being used to enhance security?
Generally, security awareness training is a task that employees dread. It takes time away from their core responsibilities, can be repetitive and, well… dull. Many organisations are now turning to gamification, a technique that utilises game-like features such as points, trophies and level progress to motivate employees. There is growing evidence that gamification really works — a large analysis of research in this area found that gamification significantly increased engagement in online programmes.
Gamification works because it is both rewarding and immersive. Positive reinforcement is a key driver of behavioural change. Therefore, creating enjoyable experiences where employees can progress through a game, be rewarded for the progress and interact with their colleagues helps to make gamification a success. In addition, making the game interesting and challenging enough means employees remain engaged with the process without having to exert high levels of effort.
PwC’s Game of Threats™ is a great example of gamification done well. Players must either defend or attack their fictional organisation, making real-time decisions that affect the outcome. It has proved an excellent way to raise awareness of cyber security and increase understanding of real-world hacker behaviour.
Nudging is Richard Thaler’s Nobel prize winning technique that encourages a change in behaviour through positive, indirect suggestion. For example, healthy eating could be encouraged by filling the snack cupboard with fruit alternatives and hiding the chocolate in a difficult-to-reach spot. This doesn’t ban the unhealthy alternative (which could cause resentment and backlash) but gently encourages the desired behaviour by working with our human biases, and not against them.
A well-known example in cybersecurity is password strength feedback (i.e., encouraging a stronger password by having a rating system turn green). Other creative nudges include showing users examples of the data they will share when they accept privacy permissions (e.g., an image the application will have access to) and how many times their location has been shared with a certain organisation. However, the best nudges are often simple — it could be as basic as a printed sign that reminds employees how to store or dispose of sensitive documents once used.
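To illustrate the password-strength nudge, here is a minimal rating function that maps a password to the red/amber/green band a strength meter might display. The specific checks and thresholds are illustrative assumptions, not a vetted password policy.

```python
import string


def strength(password: str) -> str:
    """Rate a password and return the colour band a strength meter would show."""
    score = 0
    score += len(password) >= 12                                  # long enough
    score += any(c in string.ascii_uppercase for c in password)   # has uppercase
    score += any(c in string.digits for c in password)            # has a digit
    score += any(c in string.punctuation for c in password)       # has a symbol
    return {0: "red", 1: "red", 2: "amber", 3: "amber", 4: "green"}[score]
```

The nudge works because the feedback is immediate and non-blocking: nothing stops the user choosing a weak password, but most people will keep typing until the bar turns green.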
It is easy to make mistakes when we have multiple demands on our attention. In behavioural science, this is referred to as cognitive load — a strain upon processing that impacts performance. You might have seen the famous ‘invisible gorilla’ video, where people watching a basketball game video fail to notice a person dressed in a gorilla costume casually walking through the game. This is called inattentional blindness and is one of the potentially alarming consequences of high cognitive load.
This psychological principle has important implications for the design of cybersecurity measures. Namely, that cognitive load should be kept as low as possible. Research into this area found that cybersecurity analysts were more likely to report missing a key piece of information when faced with a large amount of information from automated security measures.
The application of behavioural science to security is still relatively new. As such, there is a rich literature of existing psychological principles to explore. For example, the bystander effect (referring to our belief that someone else will deal with a problem) and optimism bias (our belief that we are the exception to the rule, causing us to underestimate risk). As the tech industry takes a greater interest in behavioural science, these terms may become increasingly familiar, and help turn the tide in the fight against rising cybersecurity threats.
It’s no secret that the pandemic has increased the frequency and sophistication of cyber-attacks. And according to a recent government survey, they’re showing no signs of slowing down. Increased cross-sector collaboration has been suggested as a method of combatting the increased threat. The UK’s new self-regulatory body, the UK Cyber Security Council, and Microsoft’s new initiative, The Asia Pacific Public Sector Cyber Security Executive Council, are welcome steps in this direction. Why are these private-public partnerships so important, and can they help create more cyber-resilient organisations?
Cyber-attacks do not distinguish between sectors. One of the highest profile attacks of 2020, the SolarWinds breach, exploited a flaw in the Orion network management platform to target industry and government agencies alike. BitSight has estimated the financial impact of the SolarWinds attack at approximately $90 million in insured losses, not to mention the unquantifiable effect on national security.
Sometimes, lack of collaboration can result in the need for court-ordered cybersecurity intervention, as was the case with Microsoft — legal approval was required for the government to remove compromised web shells in Microsoft Exchange servers. These kinds of actions can cause unnecessary delays and substantial losses. In some time sensitive cyber-attacks, such as medical ransomware, the consequences could be devastating.
Given the significant consequences of a cybersecurity breach, many organisations are calling for greater collaboration — the benefits of which include greater intelligence sharing, a cohesive response to threats and robust international infrastructure.
According to a study by the Ponemon Institute, organisations with high cyber-resilience were more likely to participate in some form of threat-sharing program (e.g., open source, commercial sources, threat intelligence platforms). Sharing intelligence allows organisations to identify likely threats in their industry and develop appropriate responses based on what similar organisations have tried. Intelligence sharing between public and private sectors is vital because of the distinct perspectives each sector has. For example, government agencies can conduct cyber espionage operations and, therefore, have insight into adversary networks. In contrast, business providers often have greater understanding of cyber-attack victims.
Increased cross-sector dialogue could vastly improve cybersecurity responses, and even prevent attacks before they occur. Microsoft’s new initiative, The Asia Pacific Public Sector Cyber Security Executive Council, aims to facilitate private-public partnerships, to share information and strengthen government cyber defences. The council plans to meet quarterly going forward.
Having a clear response to cybersecurity incidents helps to protect organisations against cyber threats — particularly for smaller organisations that may lack expertise and/or resources. IBM have often emphasised the importance of having an incident response process that is consistent, repeatable and measurable, and has worked with organisations across sectors to help develop resilient solutions.
However, there is still remarkable variation in the cybersecurity industry because of the lack of professional regulation. The UK Cyber Security Council plans to correct this issue, bringing private and public sectors together to create regulatory standards in cybersecurity, similar to what already exists in industries such as accounting and finance. The hope is that this will create a set of standards that improves the quality of cyber defence strategies and the efficiency of incident responses.
Many organisations operate internationally, and so do the attacks against them. For example, while the impact of the SolarWinds attack was most severe in the US, at least seven additional countries were affected (including the UK, Belgium, Spain, Canada, Mexico, Israel and the UAE). However, the response from US allies was far from cohesive, and none matched the impact of the sanctions the US imposed on Russia for their suspected role in the attack.
It’s crucial that private-public partnerships are encouraged not only on a national scale, but globally. Participating in global forums, sharing intelligence and developing global frameworks will inevitably improve cyber-resilience. Finally, co-ordinated global responses may deter nation state attacks, and increase trust between co-operating countries.
Clearly, many are working hard to facilitate cross-sector collaboration. However, there is much further to go. Cybersecurity is no longer optional — protected digital environments are crucial for organisations of all kinds, so they must work together to secure a cyber-resilient future.
GDPR is thought to be one of the most comprehensive data privacy regulations in the world. The consequences of breaching GDPR are harsh — with potential fines of up to 20 million euros or 4% of annual revenue (whichever is greater). And the incoming ePrivacy regulation will clamp down further on online privacy. However, data privacy compliance has slipped down the agenda during the pandemic, and we may soon see numerous high-profile violations come to light. What is the current situation, and how concerned should organisations be?
In order to survive the pandemic, many companies underwent significant digital transformation, rushing to get their services online to meet shifting customer needs. For example, in May 2020, a reported 40% of surveyed organisations were fast-tracking their move to the cloud in response to COVID-19.
With the focus elsewhere, there have been doubts over whether data privacy has been properly embedded into new digital processes. GDPR also sets out expectations about how data must be protected against cybersecurity risks and unauthorised access, but a recent survey found that 92% of respondents were worried about cloud misconfiguration breaches. The shift to remote work further increases vulnerability to both data privacy violations and cybersecurity breaches.
Regulation enforcement was relaxed in the early stages of the pandemic; however, companies are now expected to ensure compliance. Organisations that cannot do this will face financial and reputational consequences.
Recent security breaches and scandals have increased public awareness of data privacy. For example, political consultancy, Cambridge Analytica, caused outrage after it was revealed they had improperly obtained the private psychological profiles of Facebook users, and this data was then used for political gain.
The Information Commissioner’s Office (ICO) reported a large increase in helpline calls (24.1%) and live chat requests (31.5%) in 2018 compared to the previous year. The majority of these queries were regarding subject access, suggesting that members of the public want to know how their data is being handled. According to McKinsey, 87% of surveyed people said they would stop doing business with a company because of concerns over security, and 71% would stop if the company gave away sensitive data without permission. Therefore, not addressing data privacy concerns could have a severe and detrimental impact on business demand.
Physical safety has had to be prioritised over data privacy during the pandemic — and most regulation, such as GDPR, includes government exemptions for such situations. Contact-tracing and symptom-tracking apps have been launched to gather information on and reduce transmission of COVID-19. However, there have been concerns raised about unclear purpose specification, data minimisation, data sharing and risk of re-identification.
Such apps rely on mass download to be effective. In the UK, 1 in 8 people who test positive for COVID-19 are not reached by contact tracers. A further 18% provide no details for close contacts. While the reasons for this low co-operation are complex, lack of public trust is clearly a factor that can undermine the efficacy of such applications.
While the pandemic is an extreme situation, it provides an example of how seriously consumers are taking data privacy — that they may refuse to use an application they distrust, even when that app is intended to serve the public good. A growing number of experts believe that the way forward is to integrate privacy into all technology, and that not doing so will create a ‘giant security vulnerability for the population.’ In addition, new technologies, such as artificial intelligence, can help to ensure that only essential data is collected and stored.
It’s understandable why data privacy has taken a backseat during the pandemic — many organisations have had to radically transform their practices and support a sudden shift to remote working. However, consumers are increasingly likely to cease doing business with a company that misuses data and/or doesn’t keep data safe. Failure to comply could result in both hefty fines and loss of business. With such clear consequences, it’s clear that organisations must prioritise data privacy before it’s too late.
IoT refers to the billions of physical devices (besides laptops and smartphones) that connect to the internet — and can include anything from smart lights to jet engines. These devices use sensors to monitor and communicate information, meaning complicated activities can be examined (and responded to) in real time. It’s one of the most highly anticipated trends of 2021, but will it deliver?
IoT has gained such momentum because of the accelerated digital transformation caused by the pandemic. According to the Department for Digital, Culture, Media and Sport (DCMS), almost half (49%) of UK consumers have purchased one or more IoT devices since the pandemic began. Despite some IoT-dependent industries, such as automotive, being hit hard during the pandemic, overall demand for IoT has increased. According to Microsoft’s IoT Signals report, more companies are using IoT for the first time and more users rely on IoT as an essential part of their operations: 90% of enterprise leaders describe IoT as critical and 95% expect their reliance on IoT to increase. In addition, a third of enterprises stated they were specifically increasing IoT investment because of the pandemic.
A big part of the increased industry need for the technology is driven by the desire to increase efficiency and reduce costs. IoT can be used to gather huge amounts of data and monitor it in real time. Digitally savvy organisations are using IoT alongside AI and other data analytics tools to create digital twins — a virtual representation of real-world observations — and using these insights for everything from streamlining manufacturing to improving customer service.
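The digital-twin idea can be illustrated with a toy model: a virtual object mirrors sensor readings from its physical counterpart and flags anomalies as they stream in. The class, field names and 80°C threshold below are all hypothetical, chosen only to show the pattern.

```python
from dataclasses import dataclass, field


@dataclass
class PumpTwin:
    """A toy digital twin of a physical pump, fed by temperature sensor readings."""
    readings: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def ingest(self, temperature_c: float) -> None:
        # Mirror the physical device's state and flag overheating as it happens
        self.readings.append(temperature_c)
        if temperature_c > 80:
            self.alerts.append(f"overheat: {temperature_c}°C")

    @property
    def latest(self) -> float:
        """Most recent mirrored reading."""
        return self.readings[-1]
```

In practice the twin would also feed analytics and prediction models, but even this sketch shows the core benefit: the virtual copy can be queried and monitored without touching the physical asset.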
While 2021 looks hopeful for IoT adoption, several challenges could impede progress, among them security concerns, lack of standardisation and dependence on infrastructure.
Security concerns: security has been notoriously lax when it comes to IoT. IT teams have spoken of the difficulty of keeping networks secure in the face of numerous IoT vulnerabilities. Digital Infrastructure Minister Matt Warman said: ‘Our phones and smart devices can be a gold mine for hackers looking to steal data, yet a great number still run older software with holes in their security systems.’
However, upcoming legislation may help with this. The UK’s ‘security by design’ law aims to create legally binding security requirements for almost all virtual smart devices. This should force manufacturers to improve security at the device level and may assuage concerns among those still hesitant to adopt IoT.
Infrastructure: IoT requires fast and reliable broadband to reach its full potential. Therefore, it is heavily reliant on existing infrastructure, such as 5G. The increased bandwidth and network speed of 5G would enable more IoT devices to be connected and to run efficiently. While 5G roll-out has been delayed by COVID-19, security issues and political factors, there has been renewed government commitment to establishing the UK as a leader in 5G technology. With the number of tech giants also supporting the roll-out, 5G could help to further boost IoT development during 2021.
Standardisation: IoT at its full potential should provide a seamless, interconnected experience for the user. However, the lack of industry standardisation on IoT features such as operating systems, development frameworks and architecture holds the dream of a fully integrated network back. Establishing legal security requirements is the first move towards solving this issue, but more regulation is needed so that all devices follow a set of agreed-upon standards.
To summarise, 2021 is looking positive for IoT. Many organisations are already capitalising on the opportunities the technology offers and are continuing to invest. However, the extent to which IoT emerges as the hero technology of 2021 depends upon how genuine concerns over security, infrastructure and standardisation are addressed.
How can you be sure the person accessing sensitive information is who they say they are? Authentication has become much more complicated during the pandemic. Having the correct account credentials is no longer enough to confirm a user’s identity. So, what are organisations doing to improve information security, and how is the field of authentication developing?
The threats to information security have surged during the pandemic; this coupled with the challenge of keeping remote workers secure has led to several high-profile breaches. While IT teams have been working hard to ensure that data is encrypted, anti-virus systems are installed and firewalls are configured, none of this really matters without effective authentication. It’s the information security equivalent of leaving the front door open.
Poor authentication processes make your organisation an easy target for hackers. A hacker can gain access to a username and password via phishing, social engineering, malware and more, all of which have increased during the pandemic. 95% of all web application attacks are due to weak or stolen credentials.
While malicious attacks are the primary concern, authentication is also important to maintain compliance with data privacy regulations. With the boundary between home and work becoming blurred, more and more employees have been accessing sensitive information on personal devices and some have even been sharing devices with family members.
In some sectors, such as finance, multi-factor authentication (MFA) is already widely used. With the need for secure authentication of remote workers, other industries are now following suit. The reason is that MFA is extremely effective — according to Microsoft, only 0.1% of compromised accounts were using some form of MFA. MFA significantly increases the difficulty of a successful hack, deterring potential attackers unless the incentive is worth the time investment.
Most commonly, MFA works like this: you login to a system using your username and password, which triggers a second authentication process before you are granted access. The second process could be a text message to your mobile or a smart card/USB key.
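To make the second step concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme used by many authenticator apps, following the approach of RFC 6238. It is an illustration, not a production implementation.

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password over a counter (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HOTP over 30-second time steps (RFC 6238)."""
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter, digits)


def verify(secret: bytes, submitted: str, at=None) -> bool:
    # Constant-time comparison avoids leaking how many digits matched
    return hmac.compare_digest(totp(secret, at), submitted)
```

The server and the authenticator app share the secret once at enrolment; thereafter both independently compute the same short-lived code from the current time, so a stolen password alone is no longer enough to log in.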
According to the 2020 Verizon Data Breach Investigations report, 80% of data breaches were due to compromised or ineffective passwords. Therefore, the strongest MFA approaches remove passwords completely. They also avoid secondary authentication methods that rely on a ‘shared secret’, such as one-time passwords or SMS codes, as these can be vulnerable to channel-jacking (a hacker taking over the channel through which authentication attempts are sent). Given that strong, unique passwords are often difficult to remember and manage, removing passwords also improves usability.
Biometric authentication methods such as fingerprints, facial recognition and retina scanning are often held up as the gold standard. These are easy to use and cannot be stolen like a smart card or USB key can be (although biometrics are still vulnerable to coercion). However, some employees may object to sharing private biometric data and providing devices with biometric authenticators can be expensive.
Although MFA is very effective, authentication is typically conducted once at the start of a session. MFA assumes that the user remains the same throughout the session. Some organisations with particularly sensitive data are adopting continuous authentication, where a benign background software monitors for changes in location, device or behaviour to trigger further authentication processes. Also, MFA does not usually consider the device in the authentication process. Dynamic authentication is being used to verify device identity and health by looking for factors such as unexpected screen resolution, suspicious IP addresses and CPU speed. This information will be continuously combined with predictions about user behaviour risk (and organisations can have input into defining what constitutes risk).
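A dynamic risk check of this kind might be sketched as a simple weighted score over session signals. The signal names, weights and 0.5 threshold below are illustrative assumptions, not any vendor's actual model.

```python
from dataclasses import dataclass


@dataclass
class SessionSignals:
    """Observations collected in the background during a session."""
    new_location: bool    # login from a location not seen before
    new_device: bool      # unrecognised device fingerprint
    unusual_hours: bool   # activity outside the user's normal window
    suspicious_ip: bool   # IP on a known-bad or anonymising-proxy list


# Hypothetical weights reflecting how strongly each signal suggests account takeover
WEIGHTS = {"new_location": 0.3, "new_device": 0.3, "unusual_hours": 0.1, "suspicious_ip": 0.5}


def risk_score(s: SessionSignals) -> float:
    """Sum the weights of any signals that fired, capped at 1.0."""
    raw = sum(w for name, w in WEIGHTS.items() if getattr(s, name))
    return min(raw, 1.0)


def requires_step_up(s: SessionSignals, threshold: float = 0.5) -> bool:
    """Above the threshold, trigger a further authentication challenge."""
    return risk_score(s) >= threshold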
Overall, it seems the simplest answer to the authentication challenge is implementing MFA. It’s an inexpensive method that radically improves security without significantly decreasing usability. While developments in continuous and dynamic authentication offer added protection, attacks on systems using MFA are rare, so this more advanced approach is only necessary for organisations at high-risk of a serious breach.
According to PWC’s 2021 survey, half of enterprise executives are now considering cybersecurity in every business decision. This security-first approach can include techniques such as multi-factor authentication, artificial intelligence and more — but DevSecOps is one of the most frequently talked about solutions to increased security threats. So, what is it and is it really the answer?
Development Security Operations (DevSecOps) is an approach to security that utilises an Agile framework — breaking down traditional silos to maximise speed and efficiency. Traditional silo-based approaches resulted in a production bottleneck once completed software was handed over to security teams, but DevSecOps overcomes this issue.
DevSecOps prioritises security from the start, continually testing for vulnerabilities throughout development and automating key security processes. Automated tools such as web application firewalls, open source software governance and intrusion detection systems are commonly used to streamline a DevSecOps approach, while cross-functional teams prevent a production bottleneck.
According to a 2020 survey by Sonatype, DevSecOps teams have fewer open source related breaches, and the majority deployed to production at least once a week. Almost half of DevSecOps teams said that they still didn’t have enough time to spend on security, but had built in more automation, meaning that security was being assessed throughout the process. Finally, the more evolved a DevSecOps team was, the higher their employee satisfaction.
As well as being faster and more secure, DevSecOps allowed for the sharing of multi-disciplinary knowledge. By working with development operations, security professionals can gain a better understanding of how the software works. This allows them to better understand the DevOps team’s priorities. For example, security professionals usually advocate for a thorough encryption approach, but DevOps are focused on software performance — which encryption can sometimes reduce. By working together, the teams can strike the perfect balance between contrasting needs.
DevSecOps offers huge potential, so why isn’t everyone using this approach? A commonly cited issue is the culture clash between a high-speed, pressurised DevOps approach and more traditional, cautious security practices. Most commonly, security is integrated with an existing DevOps team, meaning that security professionals are viewed as the outsiders. This, combined with entrenched preconceptions about security — that it is something that happens later or may even stifle innovation — can lead to conflict between teams.
Security Compass’s ‘2021 State of DevSecOps’ report also highlighted technical challenges, cost, lack of time and lack of education as issues holding back adoption of DevSecOps. In addition, while automation is an important aspect of the DevSecOps approach, the surveyed participants still felt implementation was insufficient and that they were being slowed down by manual security processes.
Clearly, successful DevSecOps adoption doesn’t just happen, it has to be nurtured. Cultural clash can be overcome by appointing a ‘security champion’ within the DevOps team who emphasises the importance of security and facilitates communication between teams. It can also help to have leadership explain the business case for improvement, so that teams understand the rewards for working through any teething difficulties.
It’s equally vital to invest in automation. It’s a cornerstone of DevSecOps and yet still under-utilised. No matter how hard you work to prevent potential cultural clash between teams, if integrating security into development really does slow down the process, then resentment is inevitable. Using automation to streamline security could make the difference between a successful or failed DevSecOps adoption. Finally, most developers are not taught how to write secure code — according to Forrester, even the top Computer Science courses feature little security training, so it’s important to support education in this area.
It’s clear that DevSecOps isn’t a panacea for the climate of increasing security threats. It comes with its own set of challenges and if implemented incorrectly, could end up causing more hassle than it’s worth. However, with the right attitude and commitment, you could end up with a well-oiled DevSecOps team who write secure code as second nature.
According to the BCS, neurodiversity remains an overlooked issue in the tech industry — employment rates for neurodiverse people remains low and stigma remains. However, a growing number of companies are recognising that it’s not only right to offer opportunities to all, but people who think differently provide a competitive advantage and help to create an inclusive environment for everyone. For example, both Microsoft and Dell have an established autism hiring programme. So, what are the barriers to a neurodiverse tech industry and how can organisations help?
Neurodiversity refers to the differences in thinking patterns, interests and motivations that naturally occur throughout the population. A neurotypical brain functions in the way that the majority expects. However, an estimated 15% of the UK population are neurodivergent. This is an umbrella term that refers to people who have autism, ADHD, dyspraxia, dyslexia and other neurodevelopmental conditions.
Employment rates vary across conditions. For example, according to research conducted by the National Autistic Society, just 16% of autistic people are in full-time paid work and many are working in a job below their skill level. Worryingly, a recent study found that half of leaders and managers would be uncomfortable hiring a neurodivergent person. The highest level of bias was against people with Tourette’s, ADHD or autism. In addition, the majority of neurodivergent people surveyed felt their workplace was not inclusive to their needs. Up to 40% of employees in the tech industry have not disclosed their neurodivergent traits, meaning that their needs are unlikely to be supported.
It’s important to first point out that stereotypes around neurodivergent behaviour are unhelpful and often cause unrealistic expectations — for example, the idea that autistic people are maths or computer savants. However, there are many benefits that go beyond superficial abilities.
Software and data quality engineering start-up, Ultranauts, is a fantastic demonstration of a company leveraging the power of neurodiversity. 75% of the workforce are on the autism spectrum. The small company is now winning contracts from Fortune 100 companies over established global IT consultancies. The company’s founder credits their success to their neurodiverse workforce, saying that, ‘with different learning styles and information processing models, to collaborate and focus on attacking the same problem, we’re just going to be better at it.’ Crucially, Ultranauts also worked hard to create an inclusive culture that supports neurodivergent people.
Importantly, hiring neurodivergent people has a positive effect on the entire workforce by fostering a culture of inclusion. Accommodating individual needs is a wonderful thing that everyone can benefit from by encouraging both innovation and empathy within the organisation.
Many neurodivergent people will require accommodations in their workspace. For example, autistic people who experience sensory processing difficulties may benefit from adjustments in lighting and noise (however, it’s important to highlight that variation exists: one autistic person could be over-sensitive and another under-sensitive). People with ADHD who experience periods of hyper-fixation accompanied by distractibility may benefit from a flexible schedule. In addition, making interviews neurodiverse friendly will support fair assessment practices and encourage hiring of neurodiverse candidates.
Finally, many neurotypical people overestimate their knowledge of conditions such as Autism and ADHD. Awareness training can help build understanding and avoid further workplace barriers being created for their neurodivergent peers.
Several major employers, such as Twitter and Facebook, have announced that remote work will continue indefinitely. For those who enjoy the flexibility and lack of commute that working from home offers, this will be welcome news. For others who thrive in an office environment or who lack a suitable home-working space, a remote future could be a nightmare. There are also growing concerns about what remote work will mean for training, teamwork and sustaining company culture.
The hybrid office is being touted as a solution, where employees split their week between their home and the physical office space. However, this comes with its own set of problems. For example, there is concern over a two-tier system arising between office and home workers, and a possible breakdown in communication as a result. Luckily, there are a number of innovative new technologies being designed — could they help build a hybrid office that people want to be part of?
One of these new technologies is Yonderdesk, a custom digital workspace. One of the main issues with a hybrid office is that it lacks the 'sense of togetherness' created by physically being in the same space. This means employees miss out on socialising and are less likely to put quick queries to their colleagues. Yonderdesk is a digital floor plan that can mimic the organisation's actual office space. Employees are given an avatar and a desk, making it easy to see at a glance whether colleagues are in meetings, available or working on a task. Digital floor plans have been a key element of online games, such as Habbo, for years because they are fun, engaging and make people feel like they are having a shared experience, so it will be fascinating to see whether ideas like Yonderdesk prove popular.
On a more tech-heavy futuristic note, there is plenty of development in virtual and augmented reality technology. Digital start-up, Spatial, are working on augmented reality filters that create the illusion that your co-worker is right in front of you (similar to Pokemon Go). The avatar has facial expressions and can even sit down on a chair. It also works on existing virtual reality headsets, but Spatial are particularly excited by the idea of lightweight glasses, which are likely to be far more practical for everyday use. In addition, Spatial allows your avatar to interact with virtual tools. In their words, ‘Your room is your monitor, your hands are the mouse.’ There are plenty of other virtual reality meeting applications, such as the ones on this list, but Spatial is one of the most immersive.
A more controversial development is the increase of monitoring software, sometimes known as ‘Tattleware’. Some of these products can be used without employee knowledge to spy on emails, software use and more, which can have serious data privacy implications and undermine trust. Given that, on average, people have been working longer hours during the pandemic, it seems unwise to use monitoring software in this way. However, when used ethically and transparently, such tools can provide a rich understanding of employee behaviour that can improve productivity, engagement and prevent fatigue and/or burnout. For example, software like Time Doctor has time-tracking features that can help employees and managers gain a better understanding of how long tasks actually take, which can be fed into future estimates and used to reshuffle schedules.
Last but not least, collaboration tools. If you haven’t done this already, finding and implementing effective collaboration tools is vital to successful remote and hybrid working. You are probably most familiar with services like Slack — instant messaging chat rooms are a great way for employees to show their availability and engage in more casual conversations. Take this further with tools like Donut, a slack channel that makes introductions with a random employee every couple of weeks and encourages virtual or in-person meet-ups. This helps build a cohesive company culture by structuring those random encounters from the pre-pandemic days.
Clearly, it will take time to build a hybrid office that suits your organisation. Exploring new tools is a great way to avoid complacency and ensure the hybrid office experience is something your employees want to be part of.
For many, the adaptation to working from home has been a challenge. Maintaining productivity while also facing health, financial and family concerns can be stressful enough, so understandably many employees would rather not add information security to their list. However, you would have been hard pushed to have missed the sharp rise in data breaches last year. Under the GDPR and the Data Protection Act 2018, companies must protect data in a way that ensures 'appropriate security' by using 'appropriate technical or organisational measures', and COVID-19 doesn't provide an exemption. What can organisations do to keep data safe in such difficult circumstances?
Many organisations already have remote working policies in place (93% according to a study by OpenVPN); however, 25% of these companies have not updated those policies in over a year. Hackers will easily exploit out-of-date systems, so now is the ideal time to update policy, which will also provide the opportunity to remind employees of proper remote working procedure. Additionally, ensure that existing security measures are working as intended. For example, most organisations will use a virtual private network (VPN) for employees to access company data via an encrypted connection. However, many corporate VPNs have vulnerabilities that IT teams do not regularly patch, or fail to allow for constraints such as limited bandwidth, which may stop the VPN working properly. Many companies, including Dell, have said that evaluating their VPN was a top priority during the pandemic.
A recent study by IBM concluded that the current workforce, who have been rushed into remote work, poses a significant risk to information security. 52% of surveyed newly working-from-home employees reported using their personal devices for work (often without new tools to secure the device) and 45% have not received any new security training, yet 93% felt confident that their company would keep personally identifiable information safe. This suggests that employees are underestimating the security risks of working from home and IT teams may be overestimating employee knowledge of information security. As a result, IT may be unaware of the risks employees are actually taking, such as sharing devices with family members, which can lead to data being downloaded or unknown software being installed under the employee's company credentials. It's important to both enforce regular training on how to keep data safe and repeatedly communicate the business consequences of failing to follow policy.
On a related note, being realistic about the risk employees pose to a security system means limiting the potential damage. Employing multiple layers of security, such as multi-factor authentication and encryption, will help businesses stay safe. Encryption is specifically mentioned by the GDPR when outlining what constitutes appropriate technical and organisational security measures, the reason being that even if a breach occurs, the data will be unreadable. It's crucial that all devices used for work (including phones and tablets) are encrypted. Plenty of widely used software, such as Microsoft Office or Adobe Acrobat, also provides an option to encrypt files, so it's a good idea to get into the habit of encrypting everything. Then, if a device is ever accessed remotely or physically by an unknown person, the data stays safe.
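For readers on the technical side, the principle that encrypted data stays unreadable even after a breach can be illustrated in a few lines. This is a minimal sketch using the widely used third-party Python cryptography library (the example payroll data and comments are invented for illustration; real deployments need proper key management, not a key stored alongside the data):

```python
# Minimal sketch: symmetric encryption with Fernet (AES-based),
# from the third-party Python "cryptography" library.
# Illustrative only: in practice, keep the key in a secrets manager
# or hardware-backed store, never next to the encrypted data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # secret key; store separately from the data
cipher = Fernet(key)

plaintext = b"Employee payroll: example record"  # invented example data
token = cipher.encrypt(plaintext)  # what a thief sees: opaque bytes

assert token != plaintext                   # the stolen copy is unreadable
assert cipher.decrypt(token) == plaintext   # the key holder recovers it
```

The point of the sketch is the last two lines: without the key, the stolen token is useless, which is exactly why the GDPR highlights encryption as a mitigating measure.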
While many businesses are juggling a number of concerns during the pandemic, it’s essential that information security remains a priority. GDPR means data must be kept safe at all times by evaluating security systems, understanding the risks your employees take in home-working situations, and responding to this with training and failsafe measures like encryption. Given the financial and reputational consequences of a data breach, it’s vital that businesses are proactive in ensuring information security.
Diversity remains a key issue for the technology industry. According to a recent BCS report, 18% of IT professionals have BAME backgrounds. BAME people are also less likely to hold senior positions — only 9% are directors and 32% are supervisors (for comparison 43% of white employees have a supervisory role). The lack of diversity becomes even clearer when considering specific ethnic groups. For example, black women make up just 0.7% of the technology industry — a representation rate that is 2.5 times lower than in other industries. Clearly, the technology industry is still struggling to achieve true diversity, so what can companies do about it?
It’s easy to say the right thing, harder to put this into action. Setting targets, continually measuring diversity and reviewing progress helps organisations to commit to change. For example, some big companies like Facebook and Pinterest have tried to use the ‘Rooney rule’ where at least one woman and one person of colour are interviewed for director positions within the company. However, progress has been limited and concerns about it being a ‘diversity tickbox’ exercise have been raised. More recently, it’s been emphasised that targets need to be set at all levels of seniority, and that there needs to be external accountability for failure to meet targets.
On the other hand, sometimes companies fail to say enough. Statements of diversity support are important to attract new staff and ensure existing employees are reassured by an inclusive company culture — both those with BAME backgrounds and beyond. For example, Unilever recently pledged their support for a campaign working to end discrimination against hairstyles associated with racial, ethnic and cultural identities. Given that this kind of discrimination often happens in the workplace, a major employer taking a stance sends out a powerful message.
Many people from under-represented groups have concerns that a career in tech is ‘not for them’. This can be reinforced by a lack of people who look like them in senior positions. In addition, some BAME communities prioritise traditional jobs such as medicine, law and finance over technology careers. Companies can participate in outreach in schools and other settings to expand on what a technology career looks like and address concerns someone might have about entering the world of technology. Outreach can help to shed a light on available opportunities while also sending a clear message about the company’s commitment to a diverse workforce.
There’s been a recent discussion about diversity training — particularly the low reliability of the implicit association test and its lack of impact on reducing real-world biases — to the extent that the civil service has stopped all unconscious bias training. However, while certain tools have been criticised, research shows that ongoing diversity training is successful when it combines a range of techniques and is complemented by other diversity initiatives. It’s clear that diversity training needs to be ongoing and not seen as a substitute for wider policy change.
After the Black Lives Matter movement put the spotlight on diversity in 2020, many companies turned to their staff for advice. There have been several instances of people from BAME backgrounds being asked to speak about and advise on diversity practices amidst a climate of emotional trauma and, in some cases, fear of later reprisals from the organisation they were asked to defend. It’s important not to place the burden of improving diversity on individuals — especially if they are unsure how to refuse and are not being compensated for their extra work. Diversity — like any other organisational strategy — should be managed by qualified professionals and engaged with by interested employees.
The technology industry’s track record when it comes to diversity is far from perfect. However, changes are being made. It’s clear that actionable, long-term strategies are needed to truly support organisational diversity in tech.
We surveyed 1,500 employers to gather data on current hiring trends, returning to the office, skills in demand and the impact the global pandemic is having on salaries and rates. We are pleased to be able to present the results below:
Working from home has been vital to slow transmission of the coronavirus. However, a new threat has emerged: increased online activity, use of new applications and less secure home networks are opening up individuals and organisations to a host of cyberattacks.
According to a recent Forbes article, security firm Mimecast reported a 33% increase in detected cyberattacks in the first 100 days of the COVID-19 crisis, including spam (+26%), malware (+35%), impersonation (+30%) and blocked URL links (+56%). Certain industries are being particularly targeted, such as healthcare (the World Health Organisation has reported a fivefold increase in cyberattacks, and PPE-themed scams are on the rise) and banking (the surge in online banking presents many opportunities for hackers, such as exploiting new users who may not be familiar with the service).
A recent report from McKinsey highlighted the multitude of potential cybersecurity risks exacerbated by remote working. For example, changes in app-access rights (such as enabling off-site access and a lack of multi-factor authentication) and the use of personal devices or tools (such as a laptop without central control or an unsecured network) increase the opportunities for cyberattacks. While technology was vital to navigating the COVID-19 crisis, the rapid adoption of new digital offerings has increased risk. New tools such as video-conferencing have been particularly affected, with cases of an unauthorised person joining a call to steal information or cause disruption. There are also fake tech support scams: increasingly sophisticated attempts to manipulate remote workers (especially those who may be working from home for the first time) with fabricated access and other tech support issues.
The weakest point in any technical system is the person sitting behind the screen. The majority (at least half, according to Trustwave's 2020 Global Security Report) of cyberattacks occur via social engineering, a process of psychological manipulation using tactics such as disguising a scam as a message from a trusted source. As always, cyber-criminals know how to target human vulnerabilities, and the number of phishing scams capitalising on our fear of COVID-19 has significantly increased. In addition, we are more likely to fall for a scam when tired or stressed, and given that many of us are juggling a variety of stressors while working from home, we may be even more vulnerable to these kinds of attacks right now.
What can you do?
Given that the person behind the screen represents a security weak-point, they also represent an area of improvement. We will need to learn how to practise good cyber-hygiene, similar to how we adopted thorough hand-washing and social distancing to reduce the risk of the coronavirus.
There are several excellent resources on improving cybersecurity. For example, Siemens have provided their eight top tips for cybersecurity in the home office, including only bringing home essential devices, not mixing personal and business use of devices, and ensuring all software is always up to date. The Electronic Frontier Foundation provide more in-depth advice on how to spot a phishing scam.
However, while this information is useful, it can be more difficult to establish reliable cybersecurity habits. A reported three in four remote workers have yet to receive cybersecurity training, despite the clear increase in risk. More importantly, remote workers are falling for these cyberattacks. This was recently highlighted by software development company GitLab, which found that one in five of its own remote-working staff exposed user credentials by replying to a fake phishing message. Regular testing of existing cybersecurity plans in this manner can help to identify areas for improvement.
While cyberattacks are growing ever more sophisticated, so is cybersecurity. Gamification is one fresh approach to cybersecurity training. Reading through countless tips and the odd video on cybersecurity is unlikely to translate to robust cyber-hygiene habits. However, gamified training results in increased engagement, knowledge and information retention.
Increased investment in cybersecurity may provide us with a host of interesting ideas. Cheltenham Borough Council recently announced plans for a £400 million campus development, situated next door to GCHQ, said to be the ‘Silicon Valley of the UK’. The complex will help to bridge the current skills gap and enhance the UK’s cybersecurity capacity.
Clearly, the coronavirus has highlighted a variety of cybersecurity threats. With remote working expected to continue for the foreseeable future and beyond, it is vital to address current shortcomings in security. Looking forward, the industry is an exciting one, poised for innovation and development.
Our Technology Market Insights Report & Salary Guide 2020 provides the latest insights on the market collated by our Technology Recruitment Teams, and from data collected from surveying our clients and candidates.
Our Scotland Salary Guide 2019 provides the latest salary data collated by our specialist Recruitment Teams covering:
Our England Regions Salary Guide 2019 provides the latest salary data collated by our specialist Recruitment Teams covering:
Our Technology Market Insights Report & Salary Guide 2019 provides the latest insights on the market collated by our Technology Recruitment Teams, and from data collected from surveying our clients and candidates.