Ep.2 - Our Lawyer Made Us Change The Terms Of This Contract So We Don't Get Sued [Supplier Security]

Updated: Mar 13

In episode 2 of Artificial Chaos, Holly and Morgan discuss Cybersecurity Supplier Questionnaires - and Holly explains why they make her angry.




...and one of the things I find is an email with an encrypted attachment - it was a spreadsheet, and the next email was the password for the spreadsheet. I think everybody knew where I was going with that, but the reason I wanted to mention it was that, in the body of the email, it said: "Don't worry, I checked, this doesn't breach GDPR."


*sigh* Was the spreadsheet full of passwords?


Passwords. Yes. So the thing that I did, of course, the only thing a rational human being could do: I screenshotted the email, and that is the screenshot that made it into the report.


I love that. I also love that Excel is probably the most commonly used password manager in the world.


*pained noise*.




Yeah, yeah. Levels, levels.




So for this episode, I have just written a load of random notes - things that I wanted to talk about that maybe weren't an episode in themselves, but either things that had come up in previous episodes or things that I just kind of wanted to talk about. And one of them, in our show notes document, I've just written: "Holly fills in a supplier security questionnaire. Why do these make Holly so angry?" So how do you feel about supplier security questionnaires?


I don't feel positively about supplier security questionnaires, but that's probably because I've read hundreds of them. They're usually terrible. They don't actually give you any assurance and people don't fill them in properly.


I think the thing that frustrates me with supplier security questionnaires - and yes, for the pedants in the audience, we're going to generalize for this episode; it's never too early to over-generalize - is that they just strike me, in almost every instance, as security by checkbox. Like, the company is something like an ISO 27001 company, and they need to have a security process, so they have developed a security process without it necessarily delivering any value to them. Many, many of the questionnaires I come across strike me like that, where either the questions are just about fundamental security stuff that doesn't really have any meaning behind it, or where, if you give a bad response, or what might be perceived as a bad response, nothing ever comes from it. It's just kind of entirely disregarded. So they're capturing this information, then not doing anything with it. Is that the kind of thing that you come across with supplier security questionnaires, or do you have a different experience?


No, absolutely. So I've worked in a few different risk functions and have seen different approaches to supplier security and third-party vendor security, definitely. There are good approaches where the security risk function is engaged early on in the selection process for a supplier, and may have specific criteria for a vendor or supplier in order to approve the onboarding of that party. In those sorts of situations you've usually got a lot of manual review work involved - you might have a few different suppliers whose responses to the same questions you need to review in order to select your preferred supplier - but it does mean that you actually can do something with the output of these questionnaires. In other circumstances, you might get a response from a supplier that's already been selected and pretty much onboarded, and you actually can't give any guidance. There are no advisories or anything. You're just given some information that doesn't really bear any resemblance to your risk management approach as an organization, and there's nothing you can do about it. You'll just kind of find out that, say, a vendor is holding a copy of your core customer database on an unencrypted SAN, and has removable media enabled for staff working in satellite offices, and so on. I think it's a really bad way to gauge the security of a supplier.


I understand where they come from. And I know that some people will be listening to this, hear us say things like "it's a bad way to gauge suppliers", and might be thinking, well, it's one of the only ways that you can do it - trying to balance effectiveness with efficiency and actually being able to gather the information and take any kind of action on it. So I'm not necessarily of the mind that all supplier security questionnaires are bad, but I'm definitely of the mind that many of the ones I have to deal with are bad. I can give you the example that led me to writing in the show notes that supplier security questionnaires make me angry. We shared an architectural diagram with a customer that broke down how our system is set up. Part of our system is serverless, but we still have those functions represented on the diagram - effectively what data those functions use, how we manage that, how authentication works in that area, those kinds of things. It's all documented. As part of our discussions to complete the supplier security questionnaire for this company, they pointed at the serverless functions and said, "Do these run antivirus?" And that absolutely stumped me, because what they were looking for - and what, in fact, they mandated by the end of the discussion - was a yes or a no to "Do your serverless functions run antivirus?" And of course the answer is no, but of course the answer is no, because that question is dumb. I absolutely didn't know how to approach it in terms of, well, where do I start? It's like, are you familiar with serverless?
Because, as IT professionals, we shouldn't necessarily presume that everybody we're speaking to is familiar in detail with every service that we might use - containerization, for example, could be something you're strongly experienced in, or maybe have never gotten knee-deep into before. So we shouldn't have that kind of elitist view. But when they're pushing you for an answer and you have to say, well, no, there isn't antivirus on those systems, and then they're documenting within their supplier security questionnaire that some systems don't run antivirus, and you're getting scored negatively for that - that's really frustrating. What was even more frustrating was that they seemed quite happy to write down that some of our systems just don't run antivirus and then never took any further action. There was no follow-up, nothing to say "you need to", or "what controls do you have in place to compensate for the lack of antivirus?" There was no follow-up to say, "sorry, on review, we've realized that antivirus isn't an appropriate control here." That was it. We got some reds for those boxes instead of greens, and then we moved on. So it seemed like a complete waste of energy. It seemed like they just had to do a supplier security questionnaire, and really, it didn't matter what we said.


Yeah. The other part of it for me is that in a lot of cases the output of these supplier security questionnaires - if there is actually any output, any advisories, any recommendations or actions that you need to take - will be really hypocritical. So there'll be organizations that are still running TLS 1.0 or 1.1 who will mark you down for still using 1.2 when you should have moved to 1.3. The other thing that you've got to take into account is what it is that the supplier is actually providing you with. It could be a stationery company or something - how much effort is it worth for you to assess, in depth, the security posture of a supplier when all they're doing is providing you with pens and notebooks and things?


I have a really good example of that. One of the companies that we were working with previously, delivering training, required us to fill in a full supplier security questionnaire - something like 140 questions about our systems. Now, the vast majority of our systems are for delivering our SaaS platform - our vulnerability management platform - and in this context wholly irrelevant. They wanted to know about the local networks that we have here, so the wifi in the office, which is almost entirely irrelevant to the delivery. One of the interesting things is that the only actual data we hold about that company is our primary contact's details - their email address and full name - and the information for their accounts department for how we invoice them. And that's it, because what they're purchasing from us is an off-the-shelf training course. We could absolutely deliver that to them without any information technology: they could ring us up, book a day, we'd go in, we'd deliver the training. There's certainly no interconnection of systems, and certainly no meaningful data sharing - or certainly not significant data sharing. But we still had to fill in the full security questionnaire asking about the setup of all of our systems. And we did it, and of course we scored really well, because the platform is set up well. It was just irrelevant.


Yeah. So something that I've seen work well in reducing the amount of manual overhead you've got when assessing the security of a supplier is a kind of lightweight triaging process, where you assess what it is that the supplier is actually delivering, what kind of data they're going to have access to, where they're located, the size of the organization, and so on. That will give you, reasonably quickly, a picture of whether or not you need to worry about the fine details of the inner workings of their network, or how and where they store data. Because, you know, if it is a stationery company, you probably don't need to worry about that - they will just have information that's available in the public domain, or the details for invoicing your finance department or something. Whereas if it's your third-party network provider or hosting solution or something, then that's something you need to be concerned about.
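That lightweight triage step could be sketched as a simple decision function. This is a minimal illustration only - the tier names and criteria below are hypothetical, not anything described in the episode, and a real process would use your own organization's risk appetite and data classification scheme:

```python
# Hypothetical sketch of a supplier triage step. Tier names and
# criteria are made up for illustration.

def triage_supplier(data_access: str, service_critical: bool,
                    systems_interconnected: bool) -> str:
    """Return a review tier based on what the supplier actually touches.

    data_access: "none", "contact_details", or "customer_data"
    """
    if data_access == "customer_data" or systems_interconnected:
        # In-depth review, with periodic re-checks as the tier demands.
        return "full_questionnaire"
    if service_critical:
        # BCP/DR questions belong here, written into contracts early on.
        return "targeted_questionnaire"
    # e.g. the stationery supplier: invoicing details only.
    return "lightweight_check"

# The stationery company holding only invoicing contact details:
print(triage_supplier("contact_details", False, False))  # lightweight_check
# A hosting provider holding a copy of the customer database:
print(triage_supplier("customer_data", True, True))      # full_questionnaire
```

The point of the sketch is that the expensive review only happens when the answers to a handful of cheap questions justify it.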


Yeah. I also find with some of these forms, very often they mandate controls. So the question within the supplier security questionnaire might be something like "Do you have Cyber Essentials, ISO 27001, something like that?", where that is a required question. Or one that I saw recently: you either attach your Cyber Essentials certificate or you sign to say that you will achieve Cyber Essentials within six months. And that's fine, because we have Cyber Essentials already, and even so it's not a particularly burdensome thing, but in many instances, again, it was just not relevant to what it is that we're providing for that company. It can be very frustrating. And also, you're mandating a specific compliance requirement that might not be a good fit for that company, or they might have done something else. So yes, they might not be ISO 27001 certified, but they might be compliant with another standard, or they might just be approaching it in a different way.


What was it specifically about the most recent one that you filled in that was the most cumbersome? What, what felt irritating to you about it?


Mandating controls in locations where the controls were irrelevant. The two big ones that came up were the antivirus example that I mentioned, and the second was being intimately interested in our office network. Now, the interesting thing about our office network is we don't consider it trusted. All of our data is in the cloud, all of our systems are cloud-based, so all of our controls are set up in such a way that you connect to the cloud systems to achieve anything. What I mean by this is, from a security point of view, we don't care if you're at home, on the road, or in our office - the technology will treat you the same. We have the same level of trust across the board. And they were very interested in network diagrams and layouts of the office network, but didn't care at all about staff who were on the road or staff who work from home, when, from the technology point of view, it's all the same anyway.


That seems like quite a traditional approach and way of-


Very, very much. And in fact, with that same company, we also stumbled on the fact that we use VDI. For those who haven't come across VDI - virtual desktop infrastructure - on some of our devices you remote into a virtual desktop, and the controls are applied in the virtual desktop. We don't necessarily do that from a security point of view, although it definitely can be easier to manage: everything is standardized because everyone gets the same desktop interface, and things like pushing out updates can be simplified. So VDI is very cool; it's just sometimes an easier way of enabling staff to access systems. They did not have a questionnaire that was set up for handling VDI, and they did not have a questionnaire that was set up for handling things like serverless. Which is fine, if you take it the way that Cyber Essentials does - certain technologies or certain setups within Cyber Essentials are just out of scope; we'll just pretend those don't exist for the sake of that assessment. That's fine. But they were trying to apply a security questionnaire to an environment that it just didn't fit.


Yeah. The relevancy question has got me a few times when I've been on the other end, reviewing a lot of questionnaires and things that we've had back from suppliers. I've worked in a couple of different internal risk functions, and something that I really dislike asking third parties is "What is your business continuity or disaster recovery strategy?", because there are so few situations where that's actually relevant. If it is a critical supplier or service, then that should be assessed early on, and it should be written into contracts - it shouldn't be covered in a security questionnaire. In the vast majority of cases where you're auditing a supplier or reviewing their security posture, it's not going to be those tier-one, crucial third parties where you need to worry about that. It's going to be, you know, "oh, this company does rewards or something for our HR system, for people who've worked here for 5, 10, 15 years." You don't really need to worry about that from a business continuity perspective, because in the event of them having downtime, it's not a time-critical service. It doesn't materially impact any customer journeys, and so on.


If you can be down for a week before the other company notices, the impact on your business continuity and disaster recovery is going to be pretty minimal. This reminds me of a problem - not one that I've stumbled across myself, but one that came up talking to people recently - where companies never review the supplier security questionnaire. So you fill in a security questionnaire when you become a supplier to that company, but then there's no periodic re-check of what's happening. The reason it came up for them was project scope creep. They originally engaged the company for a very, very minor project, where most of the security questionnaire was disregarded because it wasn't relevant in that instance. And then that was it. The supplier was on the supplier list, they could sell them any number of things, and the project could get much bigger - they could start sharing much more data, they could become much more critical to this company. But the company never reviewed that stance. There was no periodic "okay, for critical suppliers, every six months we'll check if anything's changed; for medium suppliers, every year", nothing like that. So the supplier had a really easy time. They answered, you know, four or five questions, were treated as a low-risk supplier, and then over time became much more critical.


That's quite dangerous, really, because not only can the services that are being offered change drastically - you could onboard someone as a small supplier and then begin to leverage more services from them; especially with security consultancy, you might onboard them for some initial pentesting, then decide to additionally use them for red teaming exercises on a more ongoing basis, and then also to deliver training and compliance training and so on - but also the security posture of a supplier can change massively in a really short space of time. All it takes is a couple of missed security patches in a cycle and then they have a compromise or something, or they move their infrastructure to be hosted in the cloud, or begin to leverage a new core system. Yeah, that's quite a funny one. So I guess, in addition to the initial triage process where you assess the materiality and the criticality of a supplier, you'll also want to flag if they're being leveraged for other services, and still review them every so often, depending on what the tier is.


It's just some system for handling that problem, isn't it? Like, I'm not here to tell companies how to manage their risk, but just have some system of noticing that the relationship has changed. Here's an interesting one that I see a lot of companies get caught out by: where Cyber Essentials or something similar is mandated as part of a supplier security questionnaire, or as part of a relationship with another company, and then that company, just by accident - because we're all humans and people make mistakes - maybe doesn't realize the lead time on recertifying for Cyber Essentials or something like that. In particular, I saw this last year with people getting caught out by the pandemic. Maybe they had some staff furloughed and it took them longer to get through these processes than they were anticipating, or maybe their suppliers had fewer days available. And the result for some companies was that their Cyber Essentials would expire before they could recertify - but maybe by, like, seven days. And so many companies absolutely freaked out about their Cyber Essentials expiring, either because they considered it a big problem or because their third parties considered it a big problem. But I'm going to go out on a limb here and say that, for the most part, a company's security posture is not meaningfully going to change between week 54 and week 55 of their Cyber Essentials. Either you've kept it up or you haven't. So if you miss it by a week, that shouldn't be that big a deal.


Yeah. There's an element of humanity involved here as well. You need to consider the sort of "Act of God" clauses, I suppose, that they would write into a contract - and the pandemic was massively out of anybody's control, so there are going to be things that slip within that timescale. And I suppose it's a bit boring, actually, to be honest - personally - reading all of these endless questionnaires about the security posture of a third-party supplier that's likely never going to materially impact you. Although I understand the reason we do it: supply chain security and third-party compromises are a huge vector for data breaches. So I do understand the need for it, but I just think that there should be an alternative approach that gives us more assurance, isn't quite so manual and paper-based, and is actually relevant and proportionate to what it is that's being provided by the third party.


Yeah. I think that's the important detail of this: let's rant about supplier security questionnaires, but we're not saying that they have no value, or that they shouldn't be used, or that we have some better method of doing this. We're just saying that so many companies do it poorly, to the point where it's almost valueless. Absolutely, you need to handle the risk of the supply chain, and a supplier security questionnaire can form part of that, but you need to actually be doing it meaningfully, and you need to be aware that different companies handle things in different ways. I'll give you an example of this. The vast majority of our company policies are just publicly available. They're on our website, so if a customer or a third party that we work with requests a policy, for any reason, the vast majority of them are public - because the vast majority of them actually don't hold confidential information. But many, if not most, of the companies that I work with consider all company policies internal-confidential. And it's very funny, in fact, because you can still find many of them online, marked as internal-only-confidential, but when you read them there isn't anything sensitive in them. I'm sure you could cherry-pick an example of one that is sensitive - particular aspects of security that people might consider sensitive - but generally speaking, Kerckhoffs' principle: I don't think that they are. I should be able to tell you the way that our system is secured, and it shouldn't impact the security of that system in any way. Secrecy and obfuscation are not what make our systems secure. And also, many of them, like our environmental statement... that's not confidential. We publish that.


Yeah. I've actually been on the receiving end of that a few times. And it is massively frustrating when you get a response from a vendor or a third party and they say all of their security policies are confidential, can't be shared outside of the company, internal use only, that sort of thing. Well, you could just be telling us anything and literally ticking that box, saying "We have an information security policy. We have a business continuity policy," but you've not provided any evidence of it, so I don't actually know that it's true. You have to take them at their word. There's no way of auditing it, no way of providing evidential support for what it is they're claiming. So you're basically saying, "are you secure?", and they're saying "yes", and you're saying, "okay, then, that's fine." And that doesn't actually mitigate anything. It doesn't even give you a shape or a size of any risk that might be associated with that supplier. So they're effectively useless: if you don't get any evidence, and you don't get any policies or anything to read through, you have literally no idea what's going on behind closed doors at that organization.


Not only that, but one company I was working with a little while ago considered all of their policies confidential. They wouldn't publish any of them; they're all marked internal-only. And one of them, for example, their password policy, you could work out by registering an account on their website, where it would pop up and say your password must be eight characters long and have an upper case, a lower case, a number and a symbol. So yeah, they had just blanket confidentiality. And I know why some companies do that: because it's easier. It's easier not to disclose the information than it is to go through the process of determining whether you should be able to disclose it. If you make everything top secret, then you don't have to worry about anything leaking. The problem with that is it restricts your agility. One of the really good things about having your policies published is that if a third party ever requests a detail or further information, you can just send them a link and say, here you go, here's the full document, check anything you need. You can move much faster if you don't have to go through a gate or get approval for "can I send this document to this company?" It's just much easier.
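As an aside, that "confidential" password policy is trivially reconstructable from the sign-up form itself. A hypothetical sketch of the rule set the form announces to anyone who registers (eight characters, upper case, lower case, number, symbol):

```python
import re

# Hypothetical reconstruction of a "confidential" password policy
# that the registration form reveals to any visitor: at least 8
# characters, with upper case, lower case, a digit and a symbol.

def meets_policy(password: str) -> bool:
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_policy("Tr0ub4dor&3"))  # True
print(meets_policy("password"))     # False: no upper case, digit or symbol
```

Which is the point: marking the policy document internal-only protects nothing, because the policy is observable behaviour.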


Yeah, sure. There's an element of transparency associated with that as well, I think, and it really demonstrates internal attitudes to risk management and accountability. I think there are a couple of start-ups that approach security and transparency in that sort of way, where everything is available. You can go to their careers or corporate website and see literally everything: all of their policies, their tone-of-voice guidelines, internal pay bandings, their security guidelines and posture, any architectural principles they've got. It's all readily available. And I guess in very few circumstances that might be useful to an attacker, but the amount of work that you'd have to put in, I guess, to leverage any of that-


I just, in most instances, fundamentally don't believe that it is interesting to an attacker, for the most part, because a lot of it you could work out anyway. And as I said earlier, the system should be secure regardless of whether that information is available or not. Things outside of security are actually another good example of where being transparent can be really useful. For example, our brand guide and logo pack are also just online, so if we engage with journalists or something like that - they ring us up for comment and they want a copy of our brand guide or company logo - great, it's just online. When you have that transparency-first stance, unless there is a legitimate reason not to be transparent, it's much easier. Another example might be something like an organization's cryptographic policy, where "oh, we can't tell you what encryption algorithms we are using". That's fundamentally misunderstanding how cryptography works. And also, is it AES? It's always AES.


If you can't tell somebody what cryptographic algorithms you're using, it's probably because they're broken and you probably shouldn't be using them. If they're contemporary cryptographic algorithms that have been tested and standardized, then it's fine to tell people what you're using, because they're secure.


Also, can I not just, you know, scan your web server and find out which algorithms you're comfortable with using anyway? And I know there's a difference between internal and external systems and things like that. But in many cases there isn't, and in many cases they're probably just using defaults.
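On the "probably just using defaults" point: without scanning anything external, you can inspect the defaults that software built on the system TLS stack inherits. A quick sketch using Python's standard `ssl` module (an actual external scan would use something like nmap's `ssl-enum-ciphers` script instead):

```python
import ssl

# A default client context, as created by most Python software
# that doesn't override any TLS settings.
ctx = ssl.create_default_context()

# The minimum protocol version a default context will negotiate.
print(ctx.minimum_version)

# The cipher suites offered by default - overwhelmingly AES-based,
# which is the "it's always AES" point.
names = [c["name"] for c in ctx.get_ciphers()]
print(len(names), "suites, e.g.", names[0])
print(any("AES" in n for n in names))
```

So the "confidential" cryptographic policy of a server using defaults is mostly just the default cipher list of whatever TLS library it ships with.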


Yeah. But I guess there's only so much information that you can get from an outside perspective.


But yeah, I think the stance is the same with the supplier security questionnaire as with publishing policies: review the content and see if there's anything in it that is useful in that context, and if there's not, remove it. If you do have genuinely internal-only information in policy documents, you should probably be controlling that in a way that is more rigorous than just writing "internal only" in the header. Because that's what I mean about so many of these documents being available online anyway - either staff aren't aware of these things and they get shared accidentally, or they're not actually protected by anything other than document marking.


Isn't there something ironic in suppliers who won't publicly share their information security policies completing a supplier review - a security questionnaire - while not having a suitable data loss prevention solution in place to prevent the leakage of those policies anyway?


Such as the military's security manual, JSP 440, being on WikiLeaks - which includes a section on what to do about document leaks?


That's quite funny, but I don't think we should discuss WikiLeaks today. *laughs*


But yeah, absolutely, it can show poor hygiene where those documents are handled in that way. Either that company isn't actually making risk-based decisions on how to handle documents and they're just marking everything secret so that they don't have to worry about it at all, or, as you say, it's just evidence that their data loss prevention is not good - all of these documents that they're marking as sensitive are getting disclosed anyway, be that staff security awareness issues, where they don't understand what "confidential" means for that organization, or that nobody actually cares and everybody shares it anyway. I worked with a company a little while ago who considered their internal IP address space - so RFC 1918 addresses - confidential, and they would keep them in a safe. We had to sign an NDA before they would tell us what address space they were using. For contractual reasons, I can't tell you, but... 10 slash eight.




That kind of thing where, again, there is no actual thought process or risk management process behind it. They're taking the stance of "if we make everything secret, then we'll be more secure", and I just fundamentally don't agree with that.


One of the other frustrating things that I've seen on the receiving end of reviewing supplier questionnaires is suppliers who just flat-out refuse to engage with you. I've seen responses that are quite blunt - very, very high level, not much information, won't share policies or documents with you at all - and then there are suppliers who just do not respond, at all. In a lot of procurement processes, while you're engaging with a third party, it's mandatory that an information security questionnaire or supplier security review is completed before they're onboarded. But what do you do when it comes to re-review and a supplier refuses to engage with you, and doesn't give you an update on what their posture is? Or if it's not mandatory in your process and you don't get any engagement from a supplier, you then have no idea about their security posture whatsoever. There might be a breach at that supplier that concerns your data, and you didn't have enough oversight of that. It's a complete blind spot in your third-party risk management profile that you couldn't account for.


Yeah. This is an interesting thing, actually, because most of the time when I get frustrated with supplier security questionnaires , it is purely just how long they take to fill in when there's 150 questions about security basics to cover. And I was talking to somebody , um , previously about this, and they're kind of put the onus on us and said, if it's difficult for you to fill out security questionnaires, then you should look to do something about that. And you should look to not necessarily document your systems better, but to make that documentation more available. And I do concede the point that there is a degree of that you should be looking to be as efficient as possible, for one thing, agility is always good for business, but it can also be a differentiator. If a company is looking to engage a supplier and maybe they're doing security reviews of three different companies, and you're the only one who responds, you're responding faster than everyone else, you're responding with more detail than everyone else then absolutely that, you know, that could be something that should be noted by that company of; "Hey, you know what? This company was just easy to work with." I think that not enough effort is put onto that, in the same way that what you mentioned there when a company is certainly on like a , I guess, recertification, what would that the word be for that , for the review, when a company just doesn't engage with that review? I think the , uh , the company is engaging that supplier should be prepared to pull the plug and should operate in situation where they can say, look, if you don't follow our policies, and if you don't , um, go through these things, if you don't share this information in a timely manner to the detail that we require, we will just stop working with you. I know why companies don't do that because of cost . They're not optimizing for security, they're optimizing for revenue or company value, or all of those other things. 
Yeah, let's not get into that, but if you don't have a stick, then a lot of companies can just say: "Nah, we're not going to bother, because what are you going to do about it?"


I think it's a cultural issue, really. It's a wider problem about how an organization approaches security: how much weight or backing their security department has, what power they have to dictate whether or not a particular supplier should be used, and whether that is appropriate. But like you say, a lot of companies don't look at supplier questionnaires when they get them back; they just sit there. So what use is it to some of these third parties and vendors, spending all of this time completing these questionnaires, if they're not even sure there's going to be a response, if it's just going to sit in a file for four years or something until they're re-reviewed?


That was, in fact, one of the reasons that we moved to being transparent with documents where we could: so that we can move faster and present ourselves better to customers and suppliers. And also just because filling in those questionnaires is a pain. If I don't want to have to dig through our internal document management system to find the policy they're after, it's much easier for me to just say: they're all here, grab whatever you need. But I have definitely filled in security questionnaires, submitted them to companies, and felt like nobody has read them, because we have given answers where you would expect somebody to follow up, you know?


Yeah. And I think, especially for you as a startup, that's even worse, because your head count is small, you have so many things that you need to deliver, and so many areas where you can add value. Completing massive, potentially hundred-page documents for people that you're looking to sell a service or a product to is a complete waste of your time, especially if they're not even going to do anything with that information once they have it.


I'll give you an example of one of the questions we filled in expecting them to come back to us for more information. One of the systems that we use has passwordless authentication: we don't use passwords, we authenticate in a different way. So in the context of that system, there's a question like: do you have a minimum password length, and do you enforce complexity? You shouldn't enforce complexity (please read the NCSC's "Updating your approach" guidance on why complexity is maybe not the best idea), but they have that question of, do you enforce password complexity, and our response is: we don't use passwords. You would think they would follow up on that and ask, is this one big shared account? When in actuality we're just doing something different; on one of the systems you authenticate with digital certificates, for example. So it's secure, it's well documented, we can share all of that information, but strictly speaking, we don't use passwords.


Something that I'm really, really excited to see in the future is broader and more varied authentication standards. This is a side note, not completely relevant to supplier security at all, but I have a background working in the finance sector, and when Payment Services Directive 2 came out, a few years ago now, I did some work looking at the Regulatory Technical Standard and producing an internal authentication matrix. I took what was in the Regulatory Technical Standard and created an internal policy document, a standard with various levels of authentication assurance, and mandated that specific customer systems or banking systems needed to meet a particular level of assurance. So you can have single factor, which is a password of a particular complexity level. You can have multi-factor, which might add a one-time PIN or another authentication mechanism as a second factor. Or you can have a third level above that, where you start to rely on biometric authentication mechanisms as a second factor. That would be true multi-factor if you're on a device that has biometrics enabled and is also delivering a one-time passcode.


So one of the things that I see an awful lot is that I differentiate between two-factor authentication and two-step authentication, where, in my opinion (and everybody should agree with me), it's not two-factor if the factors are the same. If the system asks you for your password and your secret word, I would call that two-step authentication. That's not as good. It's also interesting to hear you say single factor where you're using a password, which again is defaulting to the fact that we're a password-first society, aren't we? Some of our systems don't use passwords. They are single-factor authentication on some systems, but they use digital certificates or something like that. Or some systems are passwordless but still single factor, doing something like an on-device prompt to log you in; it's still only single factor in the sense that you're not supplying two separate types of authentication data.


No, sure. But then there are other things that you can do, like device fingerprinting. Or if you're using, say, biometric mechanisms: although it might be one factor, you know, Face ID or Touch ID or something like that, if it's only set up for use on that device, then technically there is an element of multi-factor involved, because you have to physically have possession of the device as well as the biometric factor that's involved with that.


Yeah. I think the big thing that a lot of people don't realize with things like Face ID and Touch ID is that if you log in to a system with Touch ID, your fingerprint is not authenticating to the system. Your fingerprint is an on-device authentication mechanism that unlocks a key, and that key is what's authenticated against the target system. There are some problems with biometrics, of course; one example is that you can't change your fingerprints, right? But with systems like Touch ID, where the authentication is happening on device, effectively you can, because the key that it is unlocking can change. So it's not quite the same as your fingerprint data being sent to the server; the device handles it, and then sends a separate piece of authentication data. But the thing, to drag us back to supplier security questionnaires, is where the security questionnaire presumes something about the system. The example we've just covered is where it presumes you're using passwords and then asks you a whole bunch of questions about your passwords. Do you have password complexity enforced? And you're like, well, we're not using passwords. So is that a no?


You may be following, you know, NCSC guidance, as you mentioned, where the latest advice is not to enforce password complexity but to use passphrases, or passwords of a particular length, rather than particular character combinations or types, which might have lower entropy than something like three keywords or longer.


Yeah. So, if people haven't come across this: sometimes compliance requirements mandate that you use things like password complexity. If you're hearing us say that and you've not heard the term before, Microsoft would define complexity as three of the four: upper case, lower case, numbers, symbols. But that isn't a standard; PCI DSS 3.2.1, for example, requires alphabetic and numeric characters. And PCI DSS 3.2.1 mandates password rotation as well, under the password controls: all passwords must be changed every 90 days. But then you look at something like the NCSC guidance, or NIST SP 800-63B, and both of those documents very clearly state: do not enforce complexity, do not enforce password rotation. So you can have competing best practice, where maybe you're mandated to follow PCI, and then you've got this best practice document from the NCSC saying don't do those things. Now, we're glossing over some of the specifics, because the NCSC, in their "Updating your approach" blog posts, do go into the detail of why they say those things. But the reason I wanted to add that detail is that the NCSC has been advising against password rotation since at least 2015. So whilst very often we think this must be a new change or something like that, in actuality it has been the advice for years.
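To make that "three of the four" definition concrete, here is a minimal sketch of such a complexity check in Python. The rule and the eight-character minimum are illustrative, roughly in the spirit of Microsoft's default policy; this is not an exact implementation of any particular standard:

```python
import re

def meets_complexity(password: str, min_length: int = 8) -> bool:
    """Rough sketch of a 'three of four character classes' complexity rule."""
    classes = [
        bool(re.search(r"[a-z]", password)),         # lower case
        bool(re.search(r"[A-Z]", password)),         # upper case
        bool(re.search(r"[0-9]", password)),         # numbers
        bool(re.search(r"[^A-Za-z0-9]", password)),  # symbols
    ]
    return len(password) >= min_length and sum(classes) >= 3
```

Note that a weak but compliant choice like "Password1" sails straight through a check like this, which is part of the argument against complexity rules that comes up below.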


A lot of security guidance can actually be counterproductive to what you're trying to achieve. Say, enforcing complexity and rotation in tandem forces users to set passwords that they likely won't remember. They'll get locked out of their accounts, which floods your service desk with calls to unlock accounts or reset passwords, or they'll write their passwords down or store them insecurely, which is likely to lead to a breach, and might contravene your clear desk policy, among other things. Whereas if you, for example, enforce a minimum length of, say, 15 characters, but don't enforce complexity, that might encourage users to create longer passwords with greater entropy that are still considered secure by modern standards. And if you change how often you expect users to rotate passwords, or completely do away with password rotation, that likely puts you in a better position. It also builds goodwill with users, so that they will approach you when there is a situation where they genuinely need to reset their password, for instance if it has been in a breach.


Not only that, but I think people presume that complexity and password length help, and something like an eight-character minimum very often simply doesn't. If you were to say, okay, we enforce an eight-character minimum and we enforce complexity: great. Password1 with a capital P, Lockdown2020, Welcome123, LetMeIn with a one and an exclamation mark at the end. Those are terrible passwords. Those are passwords that are going to be broken through password cracking very easily, passwords that are likely to be guessed very easily. And even if you have account lockout, where you're locking after, say, five invalid attempts, Password1 is probably going to be the first one they choose anyway, so your account lockout policy is not protecting you. I absolutely do agree with your points about trying to prevent users writing passwords down by making the system easier to use, giving users a little convenience in exchange for a little security like that. But the point stands that just enforcing complexity doesn't mean the password choices will be good. It just means the password choices will include a mixed character set; they could still be terrible passwords.


Yeah, and they're likely still going to be things that you would see in a dictionary attack, or credential stuffing, or password reuse from another breach.


Or rotation, where you say you must change your password every 90 days, and then you get Summer2020, Autumn2020, Winter2020.


Yeah, I think the way a lot of companies deal with that one is remembering a certain number of previous passwords. It's usually 12 or something, if they have a 30-day rotation policy-


Password history wouldn't help against that where you're changing the year, though. If I do Summer2020, Autumn2020, Winter2020, Spring2021, even with history enabled it wouldn't protect against that.


No, but I guess it depends how you're storing the passwords. I've seen password policies where they won't allow you to reuse part of your current password in the password that you change it to, or it's not allowed to be the same as your username, and so on.


Yeah, Azure's method. Hey, I'm going to say something nice about Azure. I'm sure other providers do this too, but with Azure AD you can supply a list of base words that aren't allowed, and it will catch permutations of those. So if you say you can't use the word "password" in your password, it can realize that you've mixed the case, or that you've added a suffix, and things like that. Instead of treating passwords as string literals, where Winter2020 and Winter2021 are simply different strings, as you say, you can catch the fact that "winter" has been used twice. So yeah, there are technical ways around it, absolutely. The point I was trying to make is that companies should look at those things. What is the control, and does it actually lead to better security? What is the question in the security questionnaire, and does it actually measure the security of the provider? And does it presume some information about the provider? Does it presume they're using passwords as the first factor? Does it presume they're not using technologies like serverless? Is it compatible with things like containers?
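As a rough illustration of that base-word idea (a hypothetical simplification, not Azure AD Password Protection's actual algorithm, which also handles character substitutions and fuzzy matching), you could normalize candidates down to a base word before comparing them against a ban list or a password history:

```python
import re

# Hypothetical ban list, for illustration only.
BANNED_BASE_WORDS = {"password", "winter", "summer", "autumn", "spring"}

def base_word(password: str) -> str:
    """Lower-case the password and strip trailing digits/symbols,
    so 'Winter2020' and 'winter2021!' both reduce to 'winter'."""
    return re.sub(r"[^a-z]+$", "", password.lower())

def uses_banned_base(password: str) -> bool:
    return base_word(password) in BANNED_BASE_WORDS
```

Comparing base words rather than string literals is also how a history check could catch Winter2020 being followed by Winter2021.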


So for things like online banking, where you have a particular piece of information, it might be your password or it might be a secret answer (which is the same kind of factor when you're looking at authentication), and it asks you for specific characters from that piece of information: that's usually an indication that they're not storing it correctly, or not storing it securely, right?


Oh, it can be. There are a couple of things I want to add to that, though. Let's talk about Facebook, then. You can do interesting things with passwords without necessarily storing them in bad ways. An interesting example of this in the wild (I'm not necessarily going to say a good one; I'm not going to give my opinion on this) is that Facebook stores multiple versions of your password. For example, if you type your password with caps lock on, it will still log you in. Now, that does reduce the key space, but they have decided within their risk management that it doesn't meaningfully reduce the key space, while it significantly increases convenience for the user. If the user makes the very simple usability mistake of having caps lock on, or something like that, it will just ignore that problem. There's a small number of permutations that they store to allow for those kinds of common mistakes when typing the password. That doesn't mean they're doing something bad, like storing the password in plain text. What they're likely doing is hashing several permutations of the password when they receive it, so they can still be storing it securely. But I would concede the point that a lot of companies do still store passwords badly, be that using outdated hashing algorithms, not salting passwords, or just storing them in plain text. I still occasionally see companies who are able to email you your password when you forget it, and it's like: you shouldn't know what my password is. That's not how authentication should work. There's an interesting thing with email as well. Some email systems are horrendous, and some email systems do have things like encryption in transit; transport layer security is a thing, SMTPS is a thing.
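A minimal sketch of that permutation idea, assuming the kinds of typos described above (caps lock left on, an auto-capitalized first letter); the exact permutations Facebook checks, and whether variants are hashed at set time or tried at login time, are assumptions here, and PBKDF2 merely stands in for whatever hashing scheme a real deployment would use:

```python
import hashlib
import hmac

def hash_password(password: str, salt: bytes) -> bytes:
    # Illustrative only; a production system would use a tuned
    # memory-hard scheme such as argon2 or scrypt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def permutations(candidate: str):
    yield candidate                                    # as typed
    yield candidate.swapcase()                         # caps lock was on
    if candidate:
        yield candidate[0].swapcase() + candidate[1:]  # auto-capitalized first letter

def verify(candidate: str, salt: bytes, stored_hash: bytes) -> bool:
    # The password itself is never stored; each permutation is hashed
    # and compared against the stored hash in constant time.
    return any(hmac.compare_digest(hash_password(p, salt), stored_hash)
               for p in permutations(candidate))
```

The point is that tolerating a couple of well-chosen typo variants costs only a few extra hash comparisons, while the password is still only ever stored as a salted hash.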
I think a lot of people presume that email services are horrendously insecure, and it doesn't have to be the case. I'm quite happy for people to think email services are horrendously insecure, though, so that they don't send sensitive things over email. I have a good example of this. A couple of years ago, I was inside an organization's email inboxes doing a pentest, just poking around in emails. You find wonderful things in emails. And one of the things I found was an email with an encrypted attachment. If I remember correctly, it was a spreadsheet that was encrypted. And then the next email was the password for the spreadsheet. I think everybody knew where I was going with that. But the reason that I wanted to mention it was, in the body of the email, it said: "Don't worry, I checked, this doesn't breach GDPR."


*sigh* Was the spreadsheet full of passwords?


It was something sensitive. I think it was passwords, yes. But I can't remember, because the thing that I did, of course, was the only thing a rational human being could do: screenshotted the email, and that is the screenshot that made it to the report.


I love that. I also love that Excel is probably the most commonly used password manager in the world.


*pained noise*.




Yeah, I think that pretty much covers my hatred of cybersecurity supplier questionnaires. And we didn't even dig too much into ISO 27000 and Cyber Essentials, so it's not like we're bullying compliance standards or something. I think the lesson here is: whatever you're doing from a security point of view, please think about why you're doing it. If you're marking documents as confidential, what in those documents is confidential? Is it maybe a better approach to separate that out, have the sensitive data actually protected elsewhere, and make the public data releasable? If you're asking suppliers for security information, what are you then doing with it? Are you reviewing those supplier security questionnaires periodically? Are you actioning any of the responses they give you? Do you have a process for following up for more information? Do your questions presume some information about the company that might not be true, and therefore might impact their ability to respond? And my absolute favorite: is it even relevant to the service, or the risk level that the supplier presents to you? If you're just using them to buy pens, you probably don't need to read their business continuity plan; or if you do, you should write down why. I'd like to summarize my views on cybersecurity supplier questionnaires as: have data-driven arguments. Have a justification or reason behind everything. And especially for some of the things we've thrown in here as value-added points: if you are enforcing complexity, you should really think about why you're doing that, and what the impact on usability and accessibility is. And one thing to close off: just because a bank is doing it doesn't necessarily mean it's a good thing to do.
I saw a bank recently that had a maximum password length of 15 characters. Your password must be between six and 15 characters, and you weren't allowed to use symbols.


As someone who's spent a few years working in security in the finance sector, I would potentially posit that if a bank is doing it, you probably shouldn't.


Yeah. Maximum password lengths are probably not a good thing. There are some technical limitations that you should consider, but if your password policy is "your password must be between six and 15 characters", you're probably doing it wrong. I saw the best example recently, though: a security conference, of all places, had a password policy of "your password must be eight characters long". Eight characters long.


Exactly eight characters?




That's really-


I think somebody misread the password policy. Or in fact, maybe they did read it: the policy may literally have said "your password must be eight characters long", meaning at least eight, but they took it literally. If your supplier security questionnaire is dumb, please don't send it to me. In fact, please don't email me.


*laughs* She doesn't read her emails anyway. It's fine.


Please. Don't email me.

