Do Managed Security Services Elevate Overall Security Posture?

Does a managed security service enhance overall security posture? Usually not!

Managed security services are largely built around customer expectations rather than precise protocols for building a security barrier for the client.

Many factors affect the quality of security services after migrating to a managed service, but the most influential one is "client expectations", or rather, the client's understanding of the cybersecurity realm. That is why most companies actually downgrade by migrating to a managed service: they assume the best way to deal with an unknown and scary world is to bring in someone else to take care of it for them. But if an organization did not understand the challenges of cybersecurity and could not manage it before, there is no reason to expect that an MSSP can manage it for them.

Providing managed security services is a market built largely on customer expectations rather than on definite, precise protocols for building a security barrier for the client. And it is currently being delivered by the cheapest analysts you could imagine. There is no way to set up a SOC for any number of clients and dedicate analysts to them at a lower price than employing the same workforce in-house, so the quality of the security service inevitably suffers. And that is before considering that securing an unknown entity, where all the objects and workflows are unknown, takes far more than a few tune-up sessions; it takes months or even years of understanding a system.

Imagine hiring a firm to secure your house and letting them watch the video cameras 24/7. Even if privacy were not a concern (and it has to be), we usually bring people in to "set up" things, not to watch them for us.

However, there are pieces one can outsource, and managed security services can be used for areas that are meant to be managed by a third party. There are tasks we can pass to a third-party security provider, which I will cover later under "how and where to refer to a managed security services provider".

Does the Cloud Guarantee Security?

There is a wrong perception of Cloud security among consumers of Cloud solutions and platforms. In fact, classic Cloud deployments are more insecure than traditional computing, even though it is set in stone for most people, including many "IT professionals", that Cloud computing is natively more secure, or at least more secure by default than on-prem software.

Classical Cloud computing is more insecure than traditional computing!

Some of the big downsides of the Cloud are:

  • Relying on default configurations 
  • Formulating based on the platform 
  • Prone to more frequent and targeted attacks 

Cloud users tend to accept and apply default configurations to their environment. It does not matter whether this stems from a lack of knowledge of the Cloud platform or simply from an over-trusting relationship. The result is insecure, at the very least because of unjustified configurations and settings that are supposed to be highly customizable but, in a real-world scenario, actually are not.
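
As a rough illustration, not tied to any particular Cloud provider (the setting names and values below are hypothetical), one simple countermeasure is to diff the deployed configuration against the platform's published defaults and flag anything that was never consciously changed:

```python
# Minimal sketch: flag settings that were never changed from platform defaults.
# The setting names and default values here are hypothetical, for illustration only.

PLATFORM_DEFAULTS = {
    "storage_public_access": True,   # many platforms ship permissive defaults
    "admin_mfa_required": False,
    "log_retention_days": 7,
    "tls_min_version": "1.0",
}

def untouched_defaults(deployed: dict) -> list[str]:
    """Return the settings that are still identical to the platform default."""
    return [
        name for name, default in PLATFORM_DEFAULTS.items()
        if deployed.get(name, default) == default
    ]

if __name__ == "__main__":
    deployed_config = {
        "storage_public_access": True,   # never reviewed -> flagged
        "admin_mfa_required": True,      # hardened -> not flagged
        "log_retention_days": 7,         # never reviewed -> flagged
        "tls_min_version": "1.2",        # hardened -> not flagged
    }
    for setting in untouched_defaults(deployed_config):
        print(f"review required: '{setting}' is still at the platform default")
```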

Both Cloud providers and consumers formulate workflows, force systems, and presume functionality based on given criteria rather than on what different businesses actually demand. This is not default behavior or a native flaw of Cloud computing, but because the platforms are not abstract enough, software becomes highly dependent on the original ideas and formulas. The immediate sign of this is that more and more software products look similar in architecture, design, implementation, and even application.

The last one is an inevitable result of Cloud computing, and we experience it every single moment online. Remember, what gave Microsoft Windows thousands more flaws compared to Linux or Mac was not really that Windows is more insecure, but that it is more targeted because of its market share. Now, as a cybercriminal, would you target Amazon Web Services or a proprietary piece of software sitting somewhere secluded from the Clouds?

Why try to compromise and find flaws in a tiny piece of proprietary software instead of the Microsoft Azure platform?

Why Are Folks Not Able to Secure Their Networks?

The question simply is: why do we still not feel secure, even after spending a lot, with giant teams of professionals and a bunch of fancy tools?

And the answer simply is: Wrong Direction! 

As long as one is going in the wrong direction, one cannot even imagine reaching the destination. How is it possible to reach the goal when heading in the opposite direction?!

The direction taken for securing networks and all other sorts of cyber entities is wrong, and that is why we will not reach a favorable level of security no matter how much we spend or try. In fact, sometimes we fall even further behind, because we are spending on the wrong set of subjects, so we actually become more insecure over time.

And if you ask why I am so sure the direction is wrong: rather than trying to prove it by presenting the right logical approach, I simply ask you, would we not be secure already if the (popular) approach were right?

If the direction, approach, and methodology were right, you would be able to find at least one online firm that feels good about its security. Instead, you can find millions of cyber firms struggling more and more every day.

Choosing the right direction to secure your online assets is the first fundamental step. Without it, you will be lost like most of the community. Sounds naïve? Then why, despite thousands of tools and the professionals to set them up, does the community never feel, even for a fraction of a second, that its countermeasures are strong enough to secure an online asset?

What is the right methodology to secure computer assets? The answer will be shocking once you realize how simple, cheap, and easy it is to accomplish.

Is Whitelisting a Good Security Practice?

Whitelisting has certainly been a relatively standard, and sometimes hardening, security measure, but its value depends on how we implement and maintain it and where it is initially enforced.

Whitelisting can work against you if it is set up at the wrong spot or without adequate supporting elements. I highly recommend whitelisting behavior rather than whitelisting elements such as applications, IP addresses, emails, domains, or users.

One of the most obvious negative uses of whitelisting is where we unintentionally give more opportunity to file-less malware attacks, and to all sorts of insecurities around anything whitelisted within the operating system that is not supported by enough validation factors and elements. The remedy is simply to focus on behavior rather than solely on the origin of a file, for example.

Blind whitelisting, which is what I call filtering based on a single factor, is highly prone to defeat. It is vulnerable to forgery and easily bypassed because there is nothing supporting it. A traditional whitelisting approach is, in fact, file-less malware heaven.

What is effective, and almost undefeatable, is behavioral whitelisting, where we filter on a set of elements and even consider the order of execution. For your information, almost all EDR solutions currently on the market either lack behavioral whitelisting or rely solely on traditional one-stop whitelisting, which is really dangerous and totally against the nature of an EDR.
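
To make the difference concrete, here is a minimal sketch (the process and event names are hypothetical) of whitelisting an ordered sequence of behaviors instead of a single attribute such as a file hash; anything that deviates from the approved profile is denied, even if every individual element looks trusted:

```python
# Minimal sketch of behavioral whitelisting: the unit being whitelisted is an
# ordered sequence of actions, not a single attribute such as a file hash.
# Profile contents and event names are hypothetical, for illustration only.

BEHAVIOR_PROFILES = {
    # An approved backup agent: read files, compress, upload; nothing else.
    "backup_agent": ["open_file", "compress", "net_upload"],
}

def matches_profile(observed: list[str], profile: list[str]) -> bool:
    """The observed actions must match the approved profile, in order."""
    return observed == profile

def allowed(process_name: str, observed_actions: list[str]) -> bool:
    profile = BEHAVIOR_PROFILES.get(process_name)
    if profile is None:
        return False          # no approved behavior profile -> deny
    return matches_profile(observed_actions, profile)

if __name__ == "__main__":
    # The binary is "trusted", but it suddenly spawns a shell in memory:
    # element whitelisting (hash/path) would allow it; behavior does not.
    print(allowed("backup_agent", ["open_file", "compress", "net_upload"]))      # True
    print(allowed("backup_agent", ["open_file", "spawn_shell", "net_upload"]))   # False
```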

Why Is the Common Vulnerability Scanning Practice Useless?

I hope you find this obvious, but unfortunately the security community relies on vulnerability scanning in a way that makes it totally useless, or even harmful!

Vulnerability assessment is the evaluation of a System against known and potential security flaws. A System is simply a collection of processes, workflows, people, nodes, software, and so on, but traditional vulnerability scanning focuses only on individual nodes and pieces of software rather than seeing them as a whole equation.

Today's common vulnerability scanning, which is believed to be so effective and is the center of attention for almost all types of managed security services, is actually harmful in that it completely ignores the attack vectors that arise from the links, connections, and relations between many (all) components of a system, not just computers, web servers, and software applications.
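
As a toy illustration of what assessing "the whole equation" can mean (the nodes, links, and findings below are hypothetical), a relationship-aware check walks the connections between components and reports what else a single weak node exposes, something per-node scanning never shows:

```python
from collections import deque

# Minimal sketch: a "System" as a graph of trust/connection links, not a list
# of isolated hosts. All nodes, links, and findings are hypothetical.

LINKS = {
    "public_webserver": ["app_server"],          # web tier calls the app tier
    "app_server": ["db_server", "hr_workflow"],  # app tier talks to data and people
    "hr_workflow": ["payroll_provider"],         # a process, not a computer
    "db_server": [],
    "payroll_provider": [],
}

PER_NODE_FINDINGS = {"public_webserver": "outdated TLS library"}

def reachable_from(start: str) -> set[str]:
    """Everything an attacker can pivot to once 'start' is compromised."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in LINKS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

if __name__ == "__main__":
    for node, finding in PER_NODE_FINDINGS.items():
        exposed = reachable_from(node)
        print(f"{node}: {finding} -> also exposes {sorted(exposed)}")
```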

Penetration Testing vs. Secure Code Review

What is the best way to make sure a software product is secure? 

The easiest way is to roll it out to the market, see what happens, and hope everything goes well... no kidding, that is what most software developers do!

Let's forget about what the majority of the software community does and look at the other ways:

1- Penetration testing: for decades this was the best resort. If you are a software developer and you test your software, then you are good. Of course, I do not want to get into the types, quality, and results of the various ways of doing penetration testing, or into how the business relies on tools that are natively incapable of finding security flaws... I have been doing pen-tests for more than two decades and have never used a particular "tool" out of the box for that purpose.

But regardless of what is right and wrong in penetration testing, putting a piece of software through this stage before, or even after, market presence is smart. It is also expensive: the developer needs to test every time the software changes, technically with each new version, although of course the scope can be normalized based on the changes.

2- Secure code review: when, two decades ago, I suggested to one of my clients that they migrate to a system of constant checking and verification "before" the software was even compiled, they thought I was crazy enough to ignore the great money being paid for penetration testing. But I just wanted to make sure that any software I put my verification stamp on had the better, easier, faster, and cheaper way of finding security flaws, was more reliable, offered more control, and, in one sentence, followed the right way of finding and mitigating security flaws.

Penetration testing is very good, but it is after the fact. I have seen many software products where fixing the discovered vulnerabilities takes a long time, is expensive, and in many situations is even impossible. So why not find those flaws before the final stages of development, or before rolling out to the market?

When to do secure code review? 

  • When compliance is a factor 
  • When the budget and other resources are limited 
  • When dealing with time-sensitive software projects 
  • When releasing hundreds of new versions annually 

When to do penetration testing? 

  • Always!
  • When testing only Software is not an option and System and/or Process are targeted 
  • When Settings, Configurations and Workflows are important

Does either of them dilute the other? In other words, can I skip secure code review just because I will have a comprehensive penetration test, or vice versa?

No! Skipping either of them means skipping an important phase of the software's life and corrupting the SDLC.
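
As a minimal sketch of what "constant check and verification before even compiling" can look like in practice (using Python source as the example; the rule set is deliberately tiny and hypothetical), a pre-build step can parse the code and fail the build on constructs that should never ship, long before a penetration test would ever see them:

```python
import ast
import sys

# Minimal sketch of an automated secure code review step that runs before the
# build. The rule set below is deliberately tiny and hypothetical; a real
# review combines many such rules with humans actually reading the code.

FORBIDDEN_CALLS = {"eval", "exec"}   # constructs we never want to reach the build

def review(path: str) -> list[str]:
    """Return a list of findings for one Python source file."""
    findings = []
    tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                findings.append(f"{path}:{node.lineno}: forbidden call '{node.func.id}'")
    return findings

if __name__ == "__main__":
    all_findings = [f for path in sys.argv[1:] for f in review(path)]
    for finding in all_findings:
        print(finding)
    sys.exit(1 if all_findings else 0)   # a non-zero exit fails the build
```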

Does the Internet Act as a Valid Source of Information?

The Internet was built with the initial goal of delivering the most validated data to the corresponding party. Today we are far away from that mindset, but still, how much can we rely on the data provided via the Net?

The answer simply depends on the source of the data. People usually believe what they read and see on the Net, especially if it comes from Wikipedia, Google reviews, and the like, but in fact those are not valid sources of information. Most of the time the information is not even accurate, regardless of the fact that information providers are prone to being censored in a very tricky way if what they say is not favorable.

As an example that applies to this weblog: if I wanted to participate in the Google AdSense program, I would not be able to question Google services or criticize Wikipedia, because based on Google policy, you are not eligible if you are targeting a specific group of people, companies, or society.

Valid sources of information are currently hidden and inaccessible, mainly because people use portals to direct them, and we know that web portals are not neutral. Clearly, search neutrality is a joke when more than half of a Google search "result" (for lack of a better word!) page is paid advertising and 80% of the remaining content is repeated and duplicated. One reason is the way portals (not all of them) filter information through their broken ranking systems, driven by an algorithm whose job is not really to find you the best match for your search but mostly to find the best match for the firm's advertising and marketing policy.

The Internet would act as a valid source of information if we could reach the main sources of science and pure knowledge without a proxy named search engine. Sorry, I mean Google, because people do not even believe in other search engines like Bing or Yahoo!

When was the last time you searched (or Googled) and the result came from a university or a valid article by a scientist? Probably 1 in 10,000, or actually never, because the whole system targets only one thing: advertising and data brokering. "Web Monsters", which is literally only Google, push content providers such as web loggers to comply with their search system, which is tied to ad systems, which means authors are creating crap rather than what they really believe. Again, consider my weblog: I will never reach a high rank with a major search engine because, from the robots' and AI's point of view, my content is not readable. Is a good dental clinic necessarily the cleanest one? Is a good restaurant necessarily what the reviews say? Is a good person necessarily white?!

In order to reach valid information on the web, we first need to change our habits and look for validated sources rather than relying solely on popular web portals. Remember, in the best case, popular search engines are not able to crawl and index more than 25% of the Net. That means at least 75% of the content is hidden (not counting the dark web or underground), and roughly 90% of the valuable and valid information.

No Silver Bullet in Computer Security

There is no silver bullet in any aspect of information security. All the answers like EDR, MFA, SIEM, and so on might leave you with a better or worse security posture; it all depends on how you implement and manage them, but none of them is a silver bullet in its area (malware protection, authentication, monitoring...). It is all about how the market pushes the community to handle the panic attack!

The only fundamental approach, still not a silver bullet as such, is the Least Privilege, Least Service concept, which has saved hundreds of smart companies from spending a lot of money and effort to secure their assets.
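
As a rough sketch of the Least Privilege, Least Service idea (all account, permission, and service names below are hypothetical), the exercise boils down to comparing what is granted or enabled against what is actually needed and treating the difference as removable attack surface:

```python
# Minimal sketch of Least Privilege, Least Service: whatever is granted or
# enabled but not needed is pure attack surface. All names are hypothetical.

GRANTED = {
    "svc_account": {"read_reports", "write_reports", "admin_panel", "delete_users"},
}
NEEDED = {
    "svc_account": {"read_reports", "write_reports"},
}

ENABLED_SERVICES = {"web", "ssh", "ftp", "smb"}
NEEDED_SERVICES = {"web", "ssh"}

def excess(granted: set[str], needed: set[str]) -> set[str]:
    """Return what is granted/enabled without being needed."""
    return granted - needed

if __name__ == "__main__":
    for account, perms in GRANTED.items():
        print(f"{account}: revoke {sorted(excess(perms, NEEDED[account]))}")
    print(f"disable services: {sorted(excess(ENABLED_SERVICES, NEEDED_SERVICES))}")
```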

The silver bullet approach will eventually lead a firm's cybersecurity team into a dead loop where there is no end to purchasing, worrying, and firefighting, and still more insecurities and more uncertainties.

Accurate Vendor Risk Assessment

How to have an accurate vendor risk assessment? 

Assessing your vendors, suppliers, business associates, or whatever other term you use for whoever provides services to your firm, is crucial, and it might even be required from a regulatory standpoint (e.g. under HIPAA). I do not want to get into the detail of what the best questionnaire would be and what logic you should follow to get the best result without a spreadsheet of hundreds or thousands of questions. But I would like to emphasize one key element that can be applied to almost every single question in your RFI.

When asking questions about security policies and procedures, the "effective date" is usually overlooked during assessment or completely missing from the questionnaire. However, 'time' in general is the most important factor in any security policy: the effective date, and the duration for which a policy has been enforced.

Experience shows that when suppliers face a VRA, their main concern and strategy is to avoid potentially negative answers by fixing things overnight. In reality, this means that if a vendor is capable of fixing an issue, they prefer to mitigate it before answering the questionnaire, so they go and put new policies and new procedures in place. Sometimes those actions really are effective, but the result will not be an accurate risk assessment, because the effective date of a policy, a new security measure, or even perfectly secure settings matters, and the duration for which a policy has been active is crucial.

An analogy is the effectiveness of taking vitamins: while consuming vitamins might be helpful, nothing is going to change overnight even with the best multivitamin. We always need to give the body time to refine the equation by introducing the vitamin on a regular basis for a minimum period before seeing any benefit, and longer still to eliminate all the negative effects of the deficiency.

Changing a policy or setting up a security rule does not mitigate a risk right away, and that is why asking for effective dates and durations is so useful. For example:

Do you have a password policy? If yes, please describe the policy briefly, indicating its effective date (evidence may be requested).
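
A minimal sketch of how such an answer could then be weighted (the field names and the 90-day threshold are arbitrary illustrative choices): the assessor checks not only that a policy exists but how long it has actually been in force relative to the assessment date:

```python
from datetime import date

# Minimal sketch: a policy answer only counts if it has been in force long
# enough. The field names and the 90-day threshold are illustrative choices.

MIN_DAYS_IN_FORCE = 90

def policy_weight(has_policy: bool, effective_date: date | None,
                  assessment_date: date) -> str:
    """Classify a questionnaire answer using its effective date."""
    if not has_policy or effective_date is None:
        return "gap: no policy (or no evidence of one)"
    days_in_force = (assessment_date - effective_date).days
    if days_in_force < MIN_DAYS_IN_FORCE:
        return f"weak: policy only {days_in_force} days old, likely adopted for the VRA"
    return f"accepted: in force for {days_in_force} days"

if __name__ == "__main__":
    today = date(2024, 6, 1)
    print(policy_weight(True, date(2024, 5, 20), today))  # adopted right before the VRA
    print(policy_weight(True, date(2022, 1, 10), today))  # long-standing policy
    print(policy_weight(False, None, today))
```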

Vendor Risk Assessment: Hassle or Blessing?!

A Security Questionnaire, RFI, VRA (Vendor Risk Assessment), VR Management, and the like help customers identify and evaluate the risks of using a vendor's product or service. Performing such a review is sometimes mandatory depending on the industry (e.g. healthcare). During this standard business process, the customer collects written information about the security capabilities of a supplier, and you can barely find suppliers, vendors, or business associates interested in engaging with this naturally revealing practice, because they refuse to learn from it and ease the process, so it remains stressful, time consuming, and potentially exposes them to other risks, such as losing a contract.

But how do you learn from vendor risk assessment and turn it into a tool that improves the business relationship? The first step is to have a system for handling requests, to write a policy, and to come up with a strategy. A well-defined system will automatically lead you to better interaction and will improve itself over time. Study the questionnaires and normalize the questions; find your flaws and, rather than rushing to fix them, look for the root causes and address them accordingly. Remember, all you have to do is manage risks, not necessarily mitigate them, so expecting a fully green, 100% risk-free business partnership shows a lack of understanding of how risk works.