As someone who spends their working life protecting others from risk, you have every right to bullet-proof your own career. Talk to a recruitment firm with the experience and reach to put you in touch with organisations that have the need, appetite and budget to place information security right at the top of their priorities.
Most of our clients are operating within financial services. This places them right at the frontier of cyber risk. When a robber was asked why he robbed banks, he famously answered “because that’s where the money is”. Nothing’s changed. Whether it’s pounds, dollars or bitcoins, the organisations storing or transferring them have an acute need for cyber specialists to safeguard their operations.
So if you can demonstrate experience-hardened cybersecurity skills gained on a permanent or interim basis, we feel certain that our clients will want to talk to you.
International Professional Services Consultancy
The candidate was engaged on a major DLP programme with a leading UK building society.
Global Leader in Application Security Risk Management
Major Aviation Brand
We want to talk to you. Drop us a line and tell us about yourself!
The UK government has declared their intention to become a ‘global science and technology superpower’. As a result, there has been plenty of investment into research and development in recent years, and a number of exciting new projects in everything from education to military technology. Here is a quick overview of some of the technology that government departments are using, and some projects that have been carried out so far.
The NHS and Department of Health recently invested £10 million in two IoT-led projects. One of these, Technology Integrated Health Management, involves a network of sensors, monitors, wearables and other devices. The technology helps those with dementia and other cognitive impairments to live safely and independently in their own homes. The IoT network monitors the user’s health and alerts health professionals to step in when required. A further project, the Diabetes Digital Coach, uses glucose monitoring devices to guide users through an e-learning process to better manage their condition.
These projects have tangible benefits for those taking part and could potentially reduce costs for healthcare services. However, they also advance IoT by providing an opportunity for a large scale study in an applied environment. The data collected will provide valuable insight into how IoT devices are used in the real-world and help further develop such platforms.
Blockchain is essentially a digital ledger that makes it near impossible to alter or hack records. It is already proving its worth in industry, particularly for banking, where it can help keep transaction records safe. However, it also has huge possibilities for government, given the implications for tackling fraud, error and inefficiency.
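As an illustration, the tamper-evidence that makes blockchain so hard to alter comes from chaining records together with cryptographic hashes. This minimal Python sketch is not any production ledger — the block layout and field names are invented for illustration — but it shows why changing a stored record breaks the chain:

```python
import hashlib
import json

def make_block(records, prev_hash):
    """Bundle records together with the hash of the previous block."""
    body = {"records": records, "prev_hash": prev_hash}
    block = dict(body)
    # The block's hash covers its records AND the previous block's hash,
    # so every block depends on the entire history before it.
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; any altered record invalidates the chain."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"records": block["records"],
                        "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False  # record tampered with after the fact
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # chain linkage broken
    return True
```

Editing even one character in an old record changes its block's hash, which no longer matches the stored value, so verification fails for every honest participant.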
So far, the Food Standards Agency has trialled blockchain for livestock distribution, using the digital ledger to share information such as age and veterinary history. The Department for Work and Pensions has also trialled blockchain to see whether a bespoke cryptocurrency, Govcoin, could make payments to benefit claimants easier and more secure.
The Ministry of Defence (MOD) and the Home Office have been funding research into autonomous robotics, with the goal of creating a machine that can investigate chemical and biological hazards. The Merlin Robot is the first prototype of this research, developed by industry partner, HORIBA-MIRA. In January 2021, the Merlin Robot successfully conducted chemical reconnaissance tasks over an area of 10,000 square metres. The robot uses artificial intelligence (AI) for object recognition and is equipped with autonomous search and mapping tools. It is a hugely exciting result, given the potentially life-saving capabilities of this technology.
Government has taken a special interest in AI, recently listing it as a critical element of driving economic growth and innovation in the UK. The government’s data science unit has previously used AI to analyse results and attendance records to see whether a school is at risk of ‘failing’ and requires special attention. The recent launch of the ‘National AI centre’ aims to take AI in education further. Although new, the initiative has been inspired by projects such as Bolton College’s digital assistant, Ada, which provides administrative support to students, freeing up staff to focus on teaching.
HM Courts and Tribunals Service is currently undergoing a £1 billion transformation to modernise its services. One important part of this is user design and research. During COVID-19, many aspects of the legal system were kept going with remote hearings. Now, a new digital platform is being rolled out to deal with approximately 1.5 million criminal cases a year. User experience is a key aspect of this modernisation, ensuring that services are easy to navigate and that all aspects of virtual hearings are considered for those involved.
It is clear there is no shortage of ambitious government projects. This is great news for those across the technology industry, as successful projects can help stimulate demand for key digital skills. Finally, when government involvement is done well, it can increase public trust in emerging technologies, which is a good thing in all respects.
It is well documented that human error is a major weakness in information and cyber security — around 90% of UK data breaches are caused by this. As a result, organisations are increasingly turning to behavioural science to understand their vulnerabilities and improve defences. Cyber security company, CybSafe, raised $7.9 million in a recent funding round for their behavioural security platform. They already have an impressive list of clients, including HSBC and NHS trusts. It is clear startups like CybSafe are meeting a real gap in the market, but what is the science behind such companies, and how are these techniques being used to enhance security?
Generally, security awareness training is a task that employees dread. It takes time away from their core responsibilities, can be repetitive and, well… dull. Many organisations are now turning to gamification, a technique that utilises game-like features such as points, trophies and level progress to motivate employees. There is growing evidence that gamification really works — a large analysis of research in this area found that gamification significantly increased engagement in online programs.
Gamification works because it is both rewarding and immersive. Positive reinforcement is a key driver of behavioural change. Therefore, creating enjoyable experiences where employees can progress through a game, be rewarded for the progress and interact with their colleagues helps to make gamification a success. In addition, making the game interesting and challenging enough means employees remain engaged with the process without having to exert high levels of effort.
PwC’s Game of Threats ™ is a great example of gamification done well. Players have to either defend or attack their fictional organisation and make real-time decisions that affect the outcome. It has proved an excellent way to raise awareness about cyber security and increase understanding about real-world hacker behaviour.
Nudging is Richard Thaler’s Nobel prize winning technique that encourages a change in behaviour through positive, indirect suggestion. For example, healthy eating could be encouraged by filling the snack cupboard with fruit alternatives and hiding the chocolate in a difficult to reach spot. This doesn’t ban the unhealthy alternative (which could cause resentment and backlash) but gently encourages the desired behaviour by working with our human biases, and not against them.
A well-known example in cybersecurity is password strength feedback (i.e., encouraging a stronger password by having a rating system turn green). Other creative nudges include showing users examples of the data they will share when they accept privacy permissions (e.g., an image the application will have access to) and how many times their location has been shared with a certain organisation. However, the best nudges are often simple — it could be as basic as a printed sign that reminds employees how to store or dispose of sensitive documents once used.
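A password strength nudge of the kind described above can be surprisingly simple. In the following Python sketch, the heuristics, weights and colour thresholds are illustrative assumptions rather than any particular product's rules — real meters typically also check against breached-password lists:

```python
import re

def strength_score(password: str) -> int:
    """Score a password 0-4 using simple, illustrative heuristics."""
    score = 0
    if len(password) >= 12:
        score += 1  # length is the single biggest factor
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1  # mixed case
    if re.search(r"\d", password):
        score += 1  # contains a digit
    if re.search(r"[^A-Za-z0-9]", password):
        score += 1  # contains a symbol
    return score

def feedback(password: str) -> str:
    """Map the score to the traffic-light colour shown beside the field."""
    return ["red", "red", "amber", "amber", "green"][strength_score(password)]
```

The nudge works because the user is never blocked: a weak password is still accepted, but watching the indicator turn from red to green gently steers most people toward a stronger choice.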
It is easy to make mistakes when we have multiple demands on our attention. In behavioural science, this is referred to as cognitive load — a strain upon processing that impacts performance. You might have seen the famous ‘invisible gorilla’ video, where people watching a basketball game video fail to notice a person dressed in a gorilla costume casually walking through the game. This is called inattentional blindness and is one of the potentially alarming consequences of high cognitive load.
This psychological principle has important implications for the design of cybersecurity measures. Namely, that cognitive load should be kept as low as possible. Research into this area found that cybersecurity analysts were more likely to report missing a key piece of information when faced with a large amount of information from automated security measures.
The application of behavioural science to security is still relatively new. As such, there is a rich literature of existing psychological principles to explore. For example, the bystander effect (referring to our belief that someone else will deal with a problem) and optimism bias (our belief that we are the exception to the rule, causing us to underestimate risk). As the tech industry takes a greater interest in behavioural science, these terms may become increasingly familiar, and help turn the tide in the fight against rising cybersecurity threats.
Recite Me is innovative cloud-based software that lets visitors to our website view and use it in the way that works best for them.
We have added the Recite Me web accessibility and language toolbar to our website and Careers site to make it accessible and inclusive for as many people as possible.
It helps the one in five people in the UK who have a disability, including those with common conditions like sight loss and dyslexia, to access this website in the way that suits them best.
It also meets the needs of the one in ten people in the UK who don’t speak English as their first language, by being able to translate our web content into over 100 different languages.
You can open the Recite Me language and accessibility toolbar by clicking on the “Accessibility” button. This Accessibility button now appears in the top right on every page of our website.
After you click the Accessibility button, the Recite Me toolbar opens and displays a range of options for customising how the website looks and how you can access the content.
Recite Me helps people to access our website to do the things they need to do, like applying for our latest vacancies, finding out about life at McGregor Boyall using the toolbar on our dedicated Careers site and reading our news and insights.
The Recite Me toolbar offers a unique range of functions for customising how you view, read and translate our website.
You can find out more about how Recite Me works from the Recite Me user guide.
If you have any questions about Recite Me you can contact us by email at firstname.lastname@example.org or call us on +44 (0)20 7422 9000.
It’s no secret that the pandemic has increased the frequency and sophistication of cyber-attacks. And according to a recent government survey, they’re showing no signs of slowing down. Increased cross-sector collaboration has been suggested as a method of combatting the increased threat. The UK’s new self-regulatory body, the UK Cyber Security Council, and Microsoft’s new initiative, The Asia Pacific Public Sector Cyber Security Executive Council, are welcome steps in this direction. Why are these private-public partnerships so important, and can they help create more cyber-resilient organisations?
Cyber-attacks do not distinguish between sectors. One of the highest-profile attacks of 2020, the SolarWinds breach, exploited a flaw in the Orion security tool to target industry and government agencies alike. BitSight has estimated the financial impact of the SolarWinds attack at approximately $90 million in insured losses, not to mention the unquantifiable effect on national security.
Sometimes, lack of collaboration can result in the need for court-ordered cybersecurity intervention, as was the case with Microsoft — legal approval was required for the government to remove compromised web shells in Microsoft Exchange servers. These kinds of actions can cause unnecessary delays and substantial losses. In some time sensitive cyber-attacks, such as medical ransomware, the consequences could be devastating.
Given the significant consequences of a cybersecurity breach, many organisations are calling for greater collaboration — the benefits of which include greater intelligence sharing, a cohesive response to threats and robust international infrastructure.
According to a study by the Ponemon Institute, organisations with high cyber-resilience were more likely to participate in some form of threat-sharing program (e.g., open source, commercial sources, threat intelligence platforms). Sharing intelligence allows organisations to identify likely threats in their industry and develop appropriate responses based on what similar organisations have tried. Intelligence sharing between public and private sectors is vital because of the distinct perspectives each sector has. For example, government agencies can conduct cyber espionage operations and, therefore, have insight into adversary networks. In contrast, business providers often have greater understanding of cyber-attack victims.
Increased cross-sector talk could vastly improve cybersecurity responses, and even prevent attacks before they occur. Microsoft’s new initiative, The Asia Pacific Public Sector Cyber Security Executive Council, aims to facilitate private-public partnerships, to share information and strengthen government cyber defences. The council plans to meet quarterly going forward.
Having a clear response to cybersecurity incidents helps to protect organisations against cyber threats — particularly smaller organisations that may lack expertise and/or resources. IBM has often emphasised the importance of having an incident response process that is consistent, repeatable and measurable, and has worked with organisations across sectors to help develop resilient solutions.
However, there is still remarkable variation in the cybersecurity industry because of the lack of professional regulation. The UK Cyber Security Council plans to correct this, bringing private and public sectors together to create regulatory standards in cybersecurity, similar to those that already exist in industries such as accounting and finance. The hope is that this will create a set of standards that improves the quality of cyber defence strategies and the efficiency of incident responses.
Many organisations operate internationally, and so do the attacks against them. For example, while the impact of the SolarWinds attack was most severe in the US, at least seven other countries were affected, including the UK, Belgium, Spain, Canada, Mexico, Israel and the UAE. However, the response from US allies was far from cohesive, and none matched the impact of the sanctions the US imposed on Russia for its suspected role in the attack.
It’s crucial that private-public partnerships are encouraged not only on a national scale, but globally. Participating in global forums, sharing intelligence and developing global frameworks will inevitably improve cyber-resilience. Finally, co-ordinated global responses may deter nation-state attacks and increase trust between co-operating countries.
Clearly, many are working hard to facilitate cross-sector collaboration. However, there is much further to go. Cybersecurity is no longer optional — protected digital environments are crucial for organisations of all kinds, so they must work together to secure a cyber-resilient future.
GDPR is thought to be one of the most comprehensive data privacy regulations in the world. The consequences of breaching GDPR are harsh — with potential fines of up to 20 million euros or 4% of annual revenue (whichever is greater). And the incoming ePrivacy regulation will clamp down further on online privacy. However, data privacy compliance has slipped down the agenda during the pandemic, and we may soon see numerous high-profile violations come to light. What is the current situation, and how concerned should organisations be?
In order to survive the pandemic, many companies underwent significant digital transformation, rushing to get their services online to meet shifting customer needs. For example, in May 2020, a reported 40% of surveyed organisations were fast-tracking their move to the cloud in response to COVID-19.
With the focus elsewhere, there have been doubts over whether data privacy has been properly embedded into new digital processes. GDPR also sets out expectations about how data must be protected against cybersecurity risks and unauthorised access, but a recent survey found that 92% of respondents were worried about cloud misconfiguration breaches. The shift to remote work further increases vulnerability to both data privacy violations and cybersecurity breaches.
Regulation enforcement was relaxed in the early stages of the pandemic; however, companies are now expected to ensure compliance. Organisations that cannot do this will face financial and reputational consequences.
Recent security breaches and scandals have increased public awareness of data privacy. For example, political consultancy, Cambridge Analytica, caused outrage after it was revealed they had improperly obtained the private psychological profiles of Facebook users, and this data was then used for political gain.
The Information Commissioner's Office (ICO) reported a large increase in helpline calls (24.1%) and live chat requests (31.5%) in 2018 compared to the previous year. The majority of these queries were regarding subject access, suggesting that members of the public want to know how their data is being handled. According to McKinsey, 87% of surveyed people said they would stop doing business with a company because of concerns over security, and 71% would stop if the company gave away sensitive data without permission. Therefore, not addressing data privacy concerns could have a severe and detrimental impact on business demand.
Physical safety has had to be prioritised over data privacy during the pandemic — and most regulation, such as GDPR, includes government exemptions for such situations. Contact-tracing and symptom-tracking apps have been launched to gather information on and reduce transmission of COVID-19. However, there have been concerns raised about unclear purpose specification, data minimisation, data sharing and risk of re-identification.
Such apps rely on widespread uptake to be effective. In the UK, 1 in 8 people who test positive for COVID-19 are not reached by contact tracers. A further 18% provide no details for close contacts. While the reasons for this low co-operation are complex, lack of public trust is clearly a factor that can undermine the efficacy of such applications.
While the pandemic is an extreme situation, it provides an example of how seriously consumers are taking data privacy — that they may refuse to use an application they distrust, even when that app is intended to serve the public good. A growing number of experts believe that the way forward is to integrate privacy into all technology, and that not doing so will create a ‘giant security vulnerability for the population.’ In addition, new technologies, such as artificial intelligence, can help to ensure that only essential data is collected and stored.
It’s understandable why data privacy has taken a backseat during the pandemic — many organisations have had to radically transform their practices and support a sudden shift to remote working. However, consumers are increasingly likely to cease doing business with a company that misuses data and/or doesn’t keep data safe. Failure to comply could result in both hefty fines and loss of business. With such clear consequences, it’s clear that organisations must prioritise data privacy before it’s too late.
IoT refers to the billions of physical devices (besides laptops and smart phones) that connect to the internet — and can include anything from smart lights to jet engines. These devices use sensors to monitor and communicate information, meaning complicated activities can be examined (and responded to) in real time. It’s one of the most highly anticipated trends of 2021, but will it deliver?
IoT has gained such momentum because of the accelerated digital transformation caused by the pandemic. According to the Department for Digital, Culture, Media and Sport (DCMS), almost half (49%) of UK consumers have purchased one or more IoT devices since the pandemic began. Despite some IoT-dependent industries (such as automotive) being hit hard during the pandemic, overall demand for IoT has increased. According to Microsoft’s IoT Signals report, more companies are using IoT for the first time and more users rely on IoT as an essential part of their operations — 90% of enterprise leaders describe IoT as critical and 95% expect their reliance on IoT to increase. In addition, a third of enterprises stated they were specifically increasing IoT investment due to the pandemic.
A big part of the increased industry need for the technology is driven by the desire to increase efficiency and reduce costs. IoT can be used to gather huge amounts of data and monitor it in real time. Digitally-savvy organisations are using IoT alongside AI and other data analytics tools to create digital twins — virtual representations of real-world observations — and using these insights for everything from streamlining manufacturing to improving customer service.
While 2021 looks hopeful for IoT adoption, several challenges could impede progress, among them security concerns, lack of standardisation and dependence on infrastructure.
Security concerns: security has been notoriously lax when it comes to IoT. IT teams have expressed the difficulty of keeping networks secure in the face of numerous IoT vulnerabilities. Digital infrastructure minister Matt Warman has said, ‘Our phones and smart devices can be a gold mine for hackers looking to steal data, yet a great number still run older software with holes in their security systems.’
However, upcoming legislation may help with this. The UK’s ‘security by design’ law aims to create legally binding security requirements for almost all virtual smart devices. This should force manufacturers to improve security at the device level and may assuage concerns among those still hesitant to adopt IoT.
Infrastructure: IoT requires fast and reliable broadband to reach its full potential. Therefore, it is heavily reliant on existing infrastructure, such as 5G. The increased bandwidth and network speed of 5G would enable more IoT devices to be connected and to run efficiently. While 5G roll-out has been delayed by COVID-19, security issues and political factors, there has been renewed government commitment to establishing the UK as a leader in 5G technology. With the number of tech giants also supporting the roll-out, 5G could help to further boost IoT development during 2021.
Standardisation: IoT at its full potential should provide a seamless, interconnected experience for the user. However, the lack of industry standardisation on IoT features such as operating systems, development frameworks and architecture holds the dream of a fully integrated network back. Establishing legal security requirements is the first move towards solving this issue, but more regulation is needed so that all devices follow a set of agreed-upon standards.
To summarise, 2021 is looking positive for IoT. Many organisations are already capitalising on the opportunities the technology offers and are continuing to invest. However, the extent to which IoT emerges as the hero technology of 2021 depends upon how genuine concerns over security, infrastructure and standardisation are addressed.
How can you be sure the person accessing sensitive information is who they say they are? Authentication has become much more complicated during the pandemic. Having the correct account credentials is no longer enough to confirm a user’s identity. So, what are organisations doing to improve information security, and how is the field of authentication developing?
The threats to information security have surged during the pandemic; this coupled with the challenge of keeping remote workers secure has led to several high-profile breaches. While IT teams have been working hard to ensure that data is encrypted, anti-virus systems are installed and firewalls are configured, none of this really matters without effective authentication. It’s the information security equivalent of leaving the front door open.
Poor authentication processes make your organisation an easy target for hackers. A hacker can gain access to a username and password via social engineering, phishing, malware and more — all of which have increased during the pandemic. 95% of all web application attacks involve weak or stolen credentials.
While malicious attacks are the primary concern, authentication is also important to maintain compliance with data privacy regulations. With the boundary between home and work becoming blurred, more and more employees have been accessing sensitive information on personal devices and some have even been sharing devices with family members.
In some sectors, such as finance, MFA is already widely used. With the need for secure authentication of remote workers, other industries are now following suit, because MFA is extremely effective — according to Microsoft, only 0.1% of compromised accounts were using some form of MFA. MFA significantly increases the difficulty of a successful hack, deterring potential attackers unless the incentive is worth the time investment.
Most commonly, MFA works like this: you login to a system using your username and password, which triggers a second authentication process before you are granted access. The second process could be a text message to your mobile or a smart card/USB key.
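For the one-time-code variant of that second step, many authenticator apps follow the TOTP standard (RFC 6238): server and device share a secret, and each independently derives a short code from the current 30-second time window. A minimal sketch using only the Python standard library (a real deployment would handle secret storage and rate limiting, which are omitted here):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Time-based one-time password (RFC 6238, SHA-1 variant)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                 # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to allow for clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))
```

Because both sides compute the code from the shared secret and the clock, nothing sensitive crosses the network at login time beyond the six digits, which expire within a minute or so.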
According to the 2020 Verizon Data Breach Investigations Report, 80% of data breaches were due to compromised or ineffective passwords. Therefore, the strongest MFA approaches remove passwords completely. The same reasoning applies to secondary authentication methods that rely on a ‘shared secret’, such as one-time passwords or SMS codes, as these can be vulnerable to channel-jacking (a hacker taking over the channel through which authentication attempts are sent). Given that strong, unique passwords are often difficult to remember and manage, removing passwords also improves usability.
Biometric authentication methods such as fingerprints, facial recognition and retina scanning are often held up as the gold standard. These are easy to use and cannot be stolen like a smart card or USB key can be (although biometrics are still vulnerable to coercion). However, some employees may object to sharing private biometric data and providing devices with biometric authenticators can be expensive.
Although MFA is very effective, authentication is typically conducted once at the start of a session. MFA assumes that the user remains the same throughout the session. Some organisations with particularly sensitive data are adopting continuous authentication, where a benign background software monitors for changes in location, device or behaviour to trigger further authentication processes. Also, MFA does not usually consider the device in the authentication process. Dynamic authentication is being used to verify device identity and health by looking for factors such as unexpected screen resolution, suspicious IP addresses and CPU speed. This information will be continuously combined with predictions about user behaviour risk (and organisations can have input into defining what constitutes risk).
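A dynamic risk score of the kind just described can be sketched very simply. In this illustrative Python example, the signals, weights and threshold are invented assumptions rather than any vendor's model — real systems weigh many more factors and learn the weights from data:

```python
def risk_score(session: dict, baseline: dict) -> int:
    """Sum weighted risk signals for a session; higher is more suspicious."""
    score = 0
    if session["country"] != baseline["country"]:
        score += 3  # unexpected location
    if session["device_id"] not in baseline["known_devices"]:
        score += 2  # unrecognised device
    if session["screen_resolution"] != baseline["screen_resolution"]:
        score += 1  # possible emulator or virtual machine
    return score

def requires_step_up(session: dict, baseline: dict, threshold: int = 3) -> bool:
    """Trigger a further authentication challenge above the risk threshold."""
    return risk_score(session, baseline) >= threshold
```

Run continuously in the background, a check like this lets a familiar session proceed unhindered while an anomalous one is quietly escalated to a fresh MFA challenge.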
Overall, it seems the simplest answer to the authentication challenge is implementing MFA. It’s an inexpensive method that radically improves security without significantly decreasing usability. While developments in continuous and dynamic authentication offer added protection, attacks on systems using MFA are rare, so this more advanced approach is only necessary for organisations at high-risk of a serious breach.
According to PWC’s 2021 survey, half of enterprise executives are now considering cybersecurity in every business decision. This security-first approach can include techniques such as multi-factor authentication, artificial intelligence and more — but DevSecOps is one of the most frequently talked about solutions to increased security threats. So, what is it and is it really the answer?
Development Security Operations (DevSecOps) is an approach to security that utilises an Agile framework — breaking down traditional silos to maximise speed and efficiency. Traditional silo-based approaches created a production bottleneck once completed software had been handed over to security teams; DevSecOps overcomes this issue.
DevSecOps prioritises security from the start, continually testing for vulnerabilities throughout development and automating key security processes. Automated tools such as web application firewalls, open source software governance and intrusion detection systems are commonly used to streamline a DevSecOps approach, while cross-functional teams prevent a production bottleneck.
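As a concrete example of the kind of automated check a DevSecOps pipeline might run on every commit, here is a minimal secret scanner in Python. The patterns are deliberately rough and purely illustrative — real open source scanning tools use far larger, maintained rule sets:

```python
import pathlib
import re

# Very rough patterns for credentials that should never reach a repository.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"]\S+"),     # hardcoded password literal
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY"), # embedded private key
]

def scan(path: str):
    """Return (line number, line) pairs that match any secret pattern."""
    hits = []
    for n, line in enumerate(pathlib.Path(path).read_text().splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((n, line.strip()))
    return hits
```

Wired into continuous integration so that any hit fails the build, a check like this catches a whole class of mistakes before code ever reaches production, with no manual review effort.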
According to a 2020 survey by Sonatype, DevSecOps teams have fewer open source related breaches, and the majority deployed to production at least once a week. Almost half of DevSecOps teams said that they still didn’t have enough time to spend on security, but had built in more automation, meaning that security was being assessed throughout the process. Finally, the more evolved a DevSecOps team was, the higher their employee satisfaction.
As well as being faster and more secure, DevSecOps allowed for the sharing of multi-disciplinary knowledge. By working with development operations, security professionals can gain a better understanding of how the software works. This allows them to better understand the DevOps team’s priorities. For example, security professionals usually advocate for a thorough encryption approach, but DevOps are focused on software performance — which encryption can sometimes reduce. By working together, the teams can strike the perfect balance between contrasting needs.
DevSecOps offers huge potential, so why isn’t everyone using this approach? A commonly cited issue is the culture clash between a high-speed, pressurised DevOps approach and more traditional, cautious security practices. Most commonly, security is integrated with an existing DevOps team, meaning that security professionals are viewed as the outsiders. This, combined with entrenched preconceptions about security — that it is something that happens later or may even stifle innovation — can lead to conflict between teams.
Security Compass’s ‘2021 State of DevSecOps’ report also highlighted technical challenges, cost, lack of time and lack of education as issues holding back adoption of DevSecOps. In addition, while automation is an important aspect of the DevSecOps approach, the surveyed participants still felt implementation was insufficient and that they were being slowed down by manual security processes.
Clearly, successful DevSecOps adoption doesn’t just happen, it has to be nurtured. Cultural clash can be overcome by appointing a ‘security champion’ within the DevOps team who emphasises the importance of security and facilitates communication between teams. It can also help to have leadership explain the business case for improvement, so that teams understand the rewards for working through any teething difficulties.
It’s equally vital to invest in automation. It’s a cornerstone of DevSecOps and yet still under-utilised. No matter how hard you work to prevent potential cultural clash between teams, if integrating security into development really does slow down the process, then resentment is inevitable. Using automation to streamline security could make the difference between a successful or failed DevSecOps adoption. Finally, most developers are not taught how to write secure code — according to Forrester, even the top Computer Science courses feature little security training, so it’s important to support education in this area.
It’s clear that DevSecOps isn’t a panacea for the climate of increasing security threats. It comes with its own set of challenges and if implemented incorrectly, could end up causing more hassle than it’s worth. However, with the right attitude and commitment, you could end up with a well-oiled DevSecOps team who write secure code as second nature.
According to the BCS, neurodiversity remains an overlooked issue in the tech industry — employment rates for neurodivergent people remain low and stigma persists. However, a growing number of companies are recognising not only that it’s right to offer opportunities to all, but that people who think differently provide a competitive advantage and help to create an inclusive environment for everyone. For example, both Microsoft and Dell have established autism hiring programmes. So, what are the barriers to a neurodiverse tech industry and how can organisations help?
Neurodiversity refers to the differences in thinking patterns, interests and motivations that naturally occur throughout the population. A neurotypical brain functions in the way that the majority expects. However, an estimated 15% of the UK population are neurodivergent. This is an umbrella term that refers to people who have Autism, ADHD, Dyspraxia, Dyslexia and other neurodevelopmental conditions.
Employment rates vary across conditions. For example, according to research conducted by the National Autistic Society, just 16% of autistic people are in full-time paid work and many are working in a job below their skill level. Worryingly, a recent study found that half of leaders and managers would be uncomfortable hiring a neurodivergent person. The highest level of bias was against people with Tourette’s, ADHD or Autism. In addition, the majority of neurodivergent people surveyed felt their workplace was not inclusive to their needs. Up to 40% of employees in the tech industry have not disclosed their neurodivergent traits, meaning that their needs are unlikely to be supported.
It’s important to firstly point out that stereotypes around neurodivergent behaviour are unhelpful and often cause unrealistic expectations. For example, the idea that autistic people are maths or computer savants. However, there are many benefits that go beyond superficial abilities including:
Software and data quality engineering start-up, Ultranauts, is a fantastic demonstration of a company leveraging the power of neurodiversity. 75% of the workforce are on the Autism spectrum. The small company is now winning contracts from Fortune 100 companies over established global IT consultancies. The company’s founder credits their success to their neurodiverse workforce, saying that, ‘with different learning styles and information processing models, to collaborate and focus on attacking the same problem, we’re just going to be better at it.’ Crucially, Ultranauts also worked hard to create an inclusive culture that supports neurodivergent people.
Importantly, hiring neurodivergent people has a positive effect on the entire workforce by fostering a culture of inclusion. Accommodating individual needs is a wonderful thing that everyone can benefit from by encouraging both innovation and empathy within the organisation.
Many neurodivergent people will require accommodations in their workspace. For example, Autistic people who experience sensory processing disorder may benefit from adjustments in lighting and noise (however, it’s important to highlight that variation exists — one autistic person could be over-sensitive and another under-sensitive). People with ADHD who experience periods of hyper-fixation accompanied by distractibility may benefit from a flexible schedule. In addition, making interviews neurodiverse friendly will support fair assessment practices and encourage hiring of neurodiverse candidates.
Finally, many neurotypical people overestimate their knowledge of conditions such as Autism and ADHD. Awareness training can help build understanding and avoid further workplace barriers being created for their neurodivergent peers.
Several major employers, such as Twitter and Facebook, have announced that remote work will continue indefinitely. For those who enjoy the flexibility and lack of commute that working from home offers, this will be welcome news. For others who thrive in an office environment or who lack a suitable home-working space, a remote future could be a nightmare. There are also growing concerns about what remote work will mean for training, teamwork and sustaining company culture.
The hybrid office is being touted as a solution, where employees split their week between their home and the physical office space. However, this comes with its own set of problems. For example, there is concern over a two-tier system arising between office and home workers, and a possible breakdown in communication as a result. Luckily, there are a number of innovative new technologies being designed — could they help build a hybrid office that people want to be part of?
One of these new technologies is Yonderdesk, a custom digital workspace. One of the main issues with a hybrid office is that it lacks the ‘sense of togetherness’ created by physically being in the same space. This means employees miss out on socialising and are less likely to ask their colleagues quick queries. Yonderdesk is a digital floor plan that can mimic the organisation’s actual office space. Employees are given an avatar and a desk, so that it’s easy to see where your colleagues are at (e.g., in meetings, available or working on a task). Digital floor plans have been a key element of online games, such as Habbo, for years because they are fun, engaging and make people feel like they are having a shared experience, so it will be fascinating to see whether ideas like Yonderdesk prove popular.
On a more tech-heavy futuristic note, there is plenty of development in virtual and augmented reality technology. Digital start-up, Spatial, are working on augmented reality filters that create the illusion that your co-worker is right in front of you (similar to Pokemon Go). The avatar has facial expressions and can even sit down on a chair. It also works on existing virtual reality headsets, but Spatial are particularly excited by the idea of lightweight glasses, which are likely to be far more practical for everyday use. In addition, Spatial allows your avatar to interact with virtual tools. In their words, ‘Your room is your monitor, your hands are the mouse.’ There are plenty of other virtual reality meeting applications, such as the ones on this list, but Spatial is one of the most immersive.
A more controversial development is the rise of monitoring software, sometimes known as ‘Tattleware’. Some of these products can be used without employee knowledge to spy on emails, software use and more, which can have serious data-privacy implications and undermine trust. Given that, on average, people have been working longer hours during the pandemic, it seems unwise to use monitoring software in this way. However, when used ethically and transparently, such tools can provide a rich understanding of employee behaviour, improving productivity and engagement and helping to prevent fatigue and burnout. For example, software like Time Doctor has time-tracking features that can help employees and managers gain a better understanding of how long tasks actually take, which can be fed into future estimates and used to reshuffle schedules.
Last but not least, collaboration tools. If you haven’t done this already, finding and implementing effective collaboration tools is vital to successful remote and hybrid working. You are probably most familiar with services like Slack — instant messaging chat rooms are a great way for employees to show their availability and engage in more casual conversations. Take this further with tools like Donut, a Slack app that introduces a random pair of employees every couple of weeks and encourages virtual or in-person meet-ups. This helps build a cohesive company culture by structuring those random encounters from the pre-pandemic days.
Clearly, it will take time to build a hybrid office that suits your organisation. Exploring new tools is a great way to avoid complacency and ensure the hybrid office experience is something your employees want to be part of.
For many, the adaptation to working from home has been a challenge. Maintaining productivity while also facing health, financial and family concerns can be stressful enough — so understandably many employees would rather not add information security to their list. However, you would have been hard pushed to miss the sharp rise in data breaches last year. Under the GDPR and the Data Protection Act 2018, companies must protect data in a way that ensures ‘appropriate security’ by using ‘appropriate technical or organisational measures’ — and COVID-19 doesn’t provide an exemption. What can organisations do to keep data safe in such difficult circumstances?
Many organisations already have remote working policies in place (93% according to a study by OpenVPN); however, 25% of these companies have not updated those policies in over a year. Hackers will easily exploit out-of-date systems, so now is the ideal time to update policy, which also provides an opportunity to remind employees of proper remote-working procedure. Additionally, ensure that existing security measures are working as intended. For example, most organisations will use a virtual private network (VPN) for employees to access company data via an encrypted connection. However, many corporate VPNs have vulnerabilities that IT teams do not regularly patch, or do not allow for constraints such as limited bandwidth, which may stop the VPN working properly. Many companies, including Dell, have said that evaluating their VPN was a top priority during the pandemic.
A recent study by IBM concluded that the current workforce, who have been rushed into remote work, poses a significant risk to information security. 52% of surveyed newly working-from-home employees reported using their personal devices for work (often without new tools to secure the device) and 45% have not received any new security training — yet 93% felt confident that their company would keep personal identifiable information safe. This suggests that employees are underestimating the security risks of working-from-home and IT teams may be overestimating employee knowledge of information security. Therefore, IT may be unaware of the risks employees are actually taking, such as sharing devices with family members, which means that data could be downloaded and unknown software installed with the employee’s company credentials entered. It’s important to both enforce regular training on how to keep data safe and repeatedly communicate the business consequences of failing to follow policy.
On a related note, being realistic about the risk employees pose to a security system means limiting the potential damage. Employing multiple layers of security, such as multi-factor authentication and encryption, will help businesses stay safe. Encryption is specifically mentioned by GDPR when outlining what constitutes appropriate technical and organisational security measures — the reason being that even if a breach occurs, the data will be unreadable. It’s crucial that all devices used for work (including phones and tablets) are encrypted. Plenty of widely used software, such as Microsoft Office or Adobe Acrobat, also provides an option to encrypt files — it’s a good idea to get into the habit of encrypting everything. Then, in the potential situation that a device is remotely or physically accessed by an unknown person, the data stays safe.
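As an illustration of how one multi-factor authentication building block works under the hood, here is a minimal sketch of the HOTP/TOTP one-time-password scheme (RFC 4226 / RFC 6238) using only the Python standard library. This is for understanding only; a real deployment should use an audited authentication library rather than hand-rolled code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: counter 0 for this secret yields "755224".
assert hotp(b"12345678901234567890", 0) == "755224"
```

The point of the design is that the shared secret never travels over the network: both sides derive the same short-lived code independently, so a phished password alone is not enough to log in.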
While many businesses are juggling a number of concerns during the pandemic, it’s essential that information security remains a priority. GDPR means data must be kept safe at all times by evaluating security systems, understanding the risks your employees take in home-working situations, and responding to this with training and failsafe measures like encryption. Given the financial and reputational consequences of a data breach, it’s vital that businesses are proactive in ensuring information security.
Diversity remains a key issue for the technology industry. According to a recent BCS report, 18% of IT professionals have BAME backgrounds. BAME people are also less likely to hold senior positions — only 9% are directors and 32% are supervisors (for comparison 43% of white employees have a supervisory role). The lack of diversity becomes even clearer when considering specific ethnic groups. For example, black women make up just 0.7% of the technology industry — a representation rate that is 2.5 times lower than in other industries. Clearly, the technology industry is still struggling to achieve true diversity, so what can companies do about it?
It’s easy to say the right thing, harder to put this into action. Setting targets, continually measuring diversity and reviewing progress helps organisations to commit to change. For example, some big companies like Facebook and Pinterest have tried to use the ‘Rooney rule’ where at least one woman and one person of colour are interviewed for director positions within the company. However, progress has been limited and concerns about it being a ‘diversity tickbox’ exercise have been raised. More recently, it’s been emphasised that targets need to be set at all levels of seniority, and that there needs to be external accountability for failure to meet targets.
On the other hand, sometimes companies fail to say enough. Statements of diversity support are important to attract new staff and ensure existing employees are reassured by an inclusive company culture — both those with BAME backgrounds and beyond. For example, Unilever recently pledged their support for a campaign working to end discrimination against hairstyles associated with racial, ethnic and cultural identities. Given that this kind of discrimination often happens in the workplace, a major employer taking a stance sends out a powerful message.
Many people from under-represented groups have concerns that a career in tech is ‘not for them’. This can be reinforced by a lack of people who look like them in senior positions. In addition, some BAME communities prioritise traditional jobs such as medicine, law and finance over technology careers. Companies can participate in outreach in schools and other settings to expand on what a technology career looks like and address concerns someone might have about entering the world of technology. Outreach can help to shed a light on available opportunities while also sending a clear message about the company’s commitment to a diverse workforce.
There has been recent debate about diversity training — particularly the low reliability of the implicit association test and its lack of impact on reducing real-world biases — to the extent that the civil service has stopped all unconscious bias training. However, while certain tools have been criticised, research shows that ongoing diversity training is successful when it combines a range of techniques and is complemented by other diversity initiatives. It’s clear that diversity training needs to be ongoing and not seen as a substitute for wider policy change.
After the Black Lives Matter movement put the spotlight on diversity in 2020, many companies turned to their staff for advice. There have been several instances of people from BAME backgrounds being asked to speak about and advise on diversity practices amidst a climate of emotional trauma and, in some cases, fear of later reprisals from the organisation they were asked to defend. It’s important not to place the burden of improving diversity on individuals — especially if they are unsure how to refuse and are not being compensated for their extra work. Diversity — like any other organisational strategy — should be managed by qualified professionals and engaged with by interested employees.
The technology industry’s track record when it comes to diversity is far from perfect. However, changes are being made. It’s clear that actionable, long-term strategies are needed to truly support organisational diversity in tech.
We surveyed 1,500 employers to gather data on current hiring trends, returning to the office, skills in demand and the impact the global pandemic is having on salaries and rates. We are pleased to be able to present the results below:
Working from home has been vital to slow transmission of the coronavirus. However, a new threat has emerged: increased online activity, use of new applications and less secure home networks are opening up individuals and organisations to a host of cyberattacks.
According to a recent Forbes article, in an analysis of the first 100 days of the COVID-19 crisis security firm Mimecast reported a 33% increase in detected cyberattacks – including spam (+26%), malware (+35%), impersonation (+30%) and blocked URL links (+56%). Certain industries are being particularly targeted, such as healthcare (e.g. The World Health Organisation have reported a fivefold increase in cyberattacks and PPE themed scams have increased) and banking (increased use of online banking presents many opportunities for hackers – such as exploiting new users who may not be familiar with the service).
A recent report from McKinsey highlighted the multitude of potential cybersecurity risks exacerbated by remote working. For example, changes in app-access rights (such as enabling off-site access and lack of multifactor authentication) and use of personal devices or tools (such as a laptop without central control or an unsecured network) increase the opportunities for cyberattacks. While technology was vital to navigate our way through the COVID-19 crisis, rapid adoption of new digital offerings has increased risk. New tools such as video-conferencing have been particularly affected by ‘meeting-bombing’, where an unauthorised person joins a call to steal information or cause disruption. There are also fake tech support scams – increasingly sophisticated attempts to manipulate remote workers (especially those who may be working from home for the first time) with fabricated access and other tech-support issues.
The weakest point in any technical system is the person sitting behind the screen. The majority (at least half, according to Trustwave’s 2020 Global Security Report) of cyberattacks occur via social engineering, a psychological manipulation process using tactics such as sending a scam from a trusted source. As always, cyber-criminals know how to target human vulnerabilities, and the number of phishing scams capitalising on our fear of COVID-19 has significantly increased. In addition, we are more likely to fall for a scam when tired or stressed – given the change to working from home, where many are juggling a variety of stressors – we might be even more vulnerable to these kinds of attacks right now.
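One common social-engineering tactic, registering a lookalike of a trusted domain (e.g. swapping a letter for a digit), can be illustrated with a short sketch. The trusted-domain list and distance threshold below are purely hypothetical assumptions for demonstration; real mail filters use far richer signals:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of domains the organisation trusts.
TRUSTED = ["paypal.com", "microsoft.com", "hsbc.co.uk"]

def lookalike_warning(domain: str, max_distance: int = 2):
    """Flag domains that are close to, but not exactly, a trusted domain."""
    for trusted in TRUSTED:
        d = edit_distance(domain.lower(), trusted)
        if 0 < d <= max_distance:
            return f"'{domain}' looks like '{trusted}' (distance {d})"
    return None
```

For instance, `lookalike_warning("paypa1.com")` flags the domain as suspiciously close to `paypal.com`, while the genuine domain passes through untouched — exactly the near-miss detail a tired, stressed reader is likely to overlook.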
What can you do?
Given that the person behind the screen represents a security weak-point, they also represent an area of improvement. We will need to learn how to practise good cyber-hygiene, similar to how we adopted thorough hand-washing and social distancing to reduce the risk of the coronavirus.
There are several excellent resources on improving cybersecurity. For example, Siemens have provided their eight top tips for cybersecurity in the home office, including only bringing home essential devices, not mixing personal and business use of devices and ensuring all software is always up to date. The Electronic Frontier Foundation provide more in-depth advice on how to spot a phishing scam.
However, while this information is useful, it can be harder to establish reliable cybersecurity habits. A reported three in four remote workers have yet to receive cybersecurity training, despite the clear increase in risk. More importantly, remote workers are falling for these cyberattacks. This was recently highlighted by software development company GitLab, who found that one in five of their own remote-working staff exposed user credentials by replying to a fake phishing message. Regular testing of existing cybersecurity plans in this manner can help to identify areas for improvement.
While cyber-attacks are growing ever more sophisticated, so is cybersecurity. Gamification is one fresh approach to cybersecurity training. Reading through countless tips and the odd video on cybersecurity is unlikely to translate to robust cyber-hygiene habits. However, gamified training results in increased engagement, knowledge and information retention.
Increased investment in cybersecurity may provide us with a host of interesting ideas. Cheltenham Borough Council recently announced plans for a £400 million campus development, situated next door to GCHQ, said to be the ‘Silicon Valley of the UK’. The complex will help to bridge the current skills gap and enhance the UK’s cybersecurity capacity.
Clearly, the coronavirus has highlighted a variety of cybersecurity threats. With remote working expected to continue for the foreseeable future and beyond, it is vital to address current shortcomings in security. Looking forward, the industry is an exciting one, poised for innovation and development.
Our Technology Market Insights Report & Salary Guide 2020 provides the latest insights on the market collated by our Technology Recruitment Teams, and from data collected from surveying our clients and candidates.
Our Scotland Salary Guide 2019 provides the latest salary data collated by our specialist Recruitment Teams covering:
Our England Regions Salary Guide 2019 provides the latest salary data collated by our specialist Recruitment Teams covering:
Our Technology Market Insights Report & Salary Guide 2019 provides the latest insights on the market collated by our Technology Recruitment Teams, and from data collected from surveying our clients and candidates.