Don’t be a Bad Neighbor

This last Patch Tuesday has come and gone, and we are left with another high-ranking vulnerability patched by Microsoft during its monthly upkeep. CVE-2020-16898, aka “Bad Neighbor,” is an IPv6 vulnerability “which allows an attacker to send maliciously crafted packets to potentially execute arbitrary code on a remote system,” according to Steve Povolny and Mark Bereza in a post at McAfee Labs.

Apparently the Windows TCP/IP stack has trouble handling ICMPv6 Router Advertisement packets that make use of the Recursive DNS Server (RDNSS) option. Per RFC 8106, the Length field of this option must be an odd value of 3 or greater. If an attacker supplies a non-compliant even Length instead, unpatched systems fail to validate the mismatch, resulting in a buffer overflow. This is just a way of saying that data or instructions could be written into memory for execution.
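To make the Length rule concrete, here is a minimal sketch (function names are my own, for illustration only) of the RFC 8106 check that the unpatched stack effectively skips:

```python
# RFC 8106: the RDNSS option Length is counted in 8-byte units. The
# 8-byte header takes one unit and each IPv6 address takes two, so a
# valid Length is always odd and at least 3. An even Length is the
# malformed case abused by CVE-2020-16898.

def rdnss_length_is_valid(length: int) -> bool:
    """Return True if an RDNSS option Length field is RFC 8106-compliant."""
    return length >= 3 and length % 2 == 1

def rdnss_address_count(length: int) -> int:
    """Number of IPv6 addresses carried by a valid RDNSS option."""
    return (length - 1) // 2
```

A compliant option with Length 5, for example, carries exactly two DNS server addresses; Length 4 should simply be rejected.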

Buffer overflows can lead to shellcode being executed by the target computer. This shellcode could then be used to send maliciously crafted ICMPv6 data to adjacent unpatched computers within the network, making the exploit wormable. This can be averted by updating to the latest patch from Microsoft, disabling IPv6, or disabling the RDNSS feature for IPv6. Even if you think that you are not actively using IPv6 in your environment, it is often enabled automatically and remains that way until it is turned off.
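For systems that cannot be patched immediately, Microsoft’s advisory documented disabling RDNSS handling as a workaround. A hedged sketch of that command (run as Administrator; `*INTERFACENUMBER*` is a placeholder you would look up first, e.g. with `netsh int ipv6 show int`):

```shell
# Workaround sketch: stop the stack from consuming RDNSS data in
# Router Advertisements on one interface (repeat per interface).
netsh int ipv6 set int *INTERFACENUMBER* rabaseddnsconfig=disable
```

No reboot is required, and the setting can be flipped back to `enable` once the patch is applied.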

ZeroLogon Required


Secura’s Tom Tervoort recently revealed the details of why you should have zero tolerance when patching ZeroLogon, available in this white paper. There is also a proof-of-concept (PoC) exploit now available on GitHub. This vulnerability takes advantage of what Secura’s summary calls “a flaw in a cryptographic authentication scheme used by the Netlogon Remote Protocol.”

So what does this mean and why is it important? While the vulnerability was disclosed previously and subsequently patched by Microsoft, the release of the PoC on September 11th means that the attack is now easier to carry out. It requires less skill, and the lack of complexity increases the risk. It was already rated 10.0 on the CVSS scale of 0 (lowest severity) to 10 (highest severity). This type of attack can give threat actors access to the computer that controls all the computers in a Windows domain (the domain controller), resulting in the compromise of all associated accounts.
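The cryptographic flaw itself is easy to demonstrate. Netlogon used AES-CFB8 with an all-zero IV; with a zero IV and a zero plaintext, the CFB8 shift register never changes, so the entire ciphertext is zero whenever the first output byte of the cipher happens to be zero — roughly 1 key in 256. The sketch below shows that structure using a SHA-256-based stand-in PRF instead of real AES (an assumption made so it runs with only the standard library; the statistics are the same):

```python
# Illustrative sketch of the AES-CFB8 zero-IV flaw behind ZeroLogon.
# prf_block is a stand-in for a 16-byte block cipher, NOT real AES.
import hashlib
import os

def prf_block(key: bytes, block: bytes) -> bytes:
    """Stand-in pseudorandom permutation output (first 16 bytes of SHA-256)."""
    return hashlib.sha256(key + block).digest()[:16]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """Byte-at-a-time CFB8 mode built on prf_block."""
    register = iv
    out = bytearray()
    for p in plaintext:
        c = p ^ prf_block(key, register)[0]
        out.append(c)
        register = register[1:] + bytes([c])  # shift the ciphertext byte in
    return bytes(out)

def zerologon_success_rate(trials: int = 5000) -> float:
    """Fraction of random keys for which an all-zero plaintext under an
    all-zero IV encrypts to all zeros (expected: about 1/256)."""
    hits = 0
    for _ in range(trials):
        ct = cfb8_encrypt(os.urandom(16), b"\x00" * 16, b"\x00" * 8)
        if ct == b"\x00" * 8:
            hits += 1
    return hits / trials
```

An attacker who can retry authentication simply spoofs zero-filled credentials until one attempt lands on such a key, which is why so few tries are needed.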

This isn’t the first disclosure of a bug in Netlogon by Tervoort. Much like previous SMB, Intel, RDP, Citrix, and other vulnerabilities, there has been a progression over time: researchers dig around a little more and find new problems with the same technology. Hopefully the evolution of DevSecOps, with its “shift left” mentality, can help by securing applications and protocols during the development phases. These problems may be much cheaper to fix in the beginning, even if it does result in companies shelling out more money for software in the long run.

The “R” Word

The very definition of ransomware is misleading. Ransomware is not necessarily about relieving an organization of money; it is often just a tool for leveraging a position in a complicated game of cat and mouse. Ransomware has made its way through government institutions, and is back to demanding unfathomable bounties as it debilitates private industry. Prevention is favored over the cure in this case, and it is often overlooked due to the shortsightedness of those in charge of budgets.

There is very little to be done during a hostage situation when your data is being held captive. People will spend much more than their annual IT budgets to recover data they believe is gone. If you are facing an enemy that is already demanding money from you, it is probably already too late. Not all malware results in a ransom, as seen in the ‘Meow’ attack.


Following the introduction of Lockheed Martin’s Cyber Kill Chain, a group published a “taxonomy of crypto-ransomware features” that illustrates the subversion techniques for avoiding this pitfall. The scholarly article is freely available here. This focused research pertains to personal computing devices, but similarities can be drawn to begin discussions on future cybersecurity taxonomies for devices such as mobile and IoT. Interestingly, this group lists timing-based evasion techniques as among the most common. This may indicate that stricter control policies based on behavioral characteristics of user logons and computer services could prove effective when combined with detection and automation. The stigma around automation is still present for early adopters, though, because of the dynamic environments present in computing.

Lockheed-Taxonomy of Crypto-Ransomware

It is important to know how this taxonomy relates to real-world application and why ransomware is so prevalent. While security controls are very important, the fact remains that social engineering, especially phishing, has proven time and again that humans are the weakest point of the architecture. Susan Bradley covered this in her 2016 SANS paper titled “Ransomware,” which also provides analysis and remediation techniques, taking a general approach with current methodologies to recover from an attack or even prevent it from happening. With the taxonomy providing a framework and the paper providing actionable steps, workplaces can begin to approach this topic comfortably instead of avoiding the conversation in the hope that silence will protect them.

Did Intel Just Get the Axe?

Link to Paper

Intel could probably start causing fires with their processors and still be the number one provider of silicon in the world. They are not likely to find themselves filing for bankruptcy because a research team has continued to develop an exploit disclosed in January. While the recommended mitigations may reduce processor features, Intel has provided a superior product for a significant duration. Cancel culture should not creep into decisions based on logic. I have reached out to these researchers about a possible interview.

Link to ZECOPS 3 Part Write Up

With SMBGhost and SMBleed attacking the SMB compression vector in Windows, the CacheOut and SGAxe team has continued the trend of maintaining and growing a documented vulnerability with expertise in both marketing and technical aptitude. It appears the CVE chain will likely give way to the gamification of vulnerability disclosure. That is not to say CVE will no longer be used, but the impact of vulnerability disclosure may give precedence to those able to market their wares accordingly.


Does anyone find it strange that VMware has not had any vulnerabilities published in what looks like six months? I was reviewing some of the documentation, and there appears to be a configuration for an NFS share that seems a little sub-par. I know, misconfigurations are different from vulnerabilities. That being said, for those of you who are misconfiguring your NFS shares by granting read/write access to a bare IP address, I can assure you that an attacker claiming that IP address and using your NFS share directory to compromise your VMs and datastores would have a severe impact, especially if it is done over a period longer than your incremental backup or snapshot retention.
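As a quick illustration of spotting that kind of misconfiguration, here is a hypothetical sketch (function name and “risky” criteria are my own) that flags read/write entries in an `/etc/exports`-style file which either disable root squashing or are open to every host, using the Linux exports(5) option names:

```python
# Flag risky NFS export lines: rw exports with no_root_squash, or rw
# exports granted to the wildcard host. Purely illustrative heuristics.

def risky_exports(exports_text: str) -> list:
    """Return (path, host, options) tuples for suspicious export entries."""
    findings = []
    for raw in exports_text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) < 2:
            continue
        path, clients = parts
        for client in clients.split():
            host, _, opts = client.partition("(")
            options = opts.rstrip(")").split(",") if opts else []
            if "rw" in options and ("no_root_squash" in options or host == "*"):
                findings.append((path, host, options))
    return findings
```

Running it over an exports file that shares a VM datastore read/write to a single IP with `no_root_squash` would surface exactly the scenario described above.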

Verizon’s 2020 Data Breach Investigation Report

2020 DBIR

While it comes as no surprise that phishing attempts are going unreported in the Educational Services section of the DBIR, the disproportionate number of credential stuffing attempts indicates that this sector is behind the times on enforcing security best practices for AAA policies. An alarming increase in ransomware-related malware attacks might be telling of either a weakness in data storage redundancy or a willingness to shell out the dough required to unlock files.

This last week, Verizon released its annual Data Breach Investigation Report for those who are interested. With a statistical analysis of trends across 16 different industries, it is evident that Manufacturing still holds the top spot for Cyber-Espionage. Given the historical significance within the intelligence realm, misinformation campaigns filled with tactfully engineered and flawed processes may prove fruitful in this arena. It is notable that this year’s numbers have decreased for this category.

Attack paths in incidents p31

While the portrayal of masterminds in hacking movies makes for great film, the complexity of the attacks studied does not vary by a great order of magnitude. A large majority of the security incidents remained at or below 7 steps. This, coupled with the increase in DDoS and Web Application attacks, might be indicative of unpatched systems. While it may be difficult to correlate the use of standard container images and readily available orchestration systems, the burden of configuration still lies on product owners within organizations instead of the providers of those resources. There must be an urgency to change how default applications and containers are deployed, coupled with a standardized and timely update methodology, if organizations want to change these annual traditions.

Connection attempts by port Figure 22

With honeypots picking up similar patterns for Telnet and SSH, it is clear that there is still a reason for people to scan these ports. The use of standardized ports for internet-facing traffic should only be done as required for legacy software, and probably not at all. There are about 65,535 reasons not to be using these, if you know what I mean.

Overall, this report was very informative, and there is much more in it than what was covered in this short blog. The speculation found within this writing is just that: speculation. It is not necessarily right or wrong, but an estimation of a valid possibility that might fill the gap where solid data is absent. There may be further analysis with a more academic approach coming; this was just for the shell of it.

Setting the T.R.A.P.


Sometimes it takes a cybersecurity incident for a company to start moving resources into securing information within the organization. Such incidents can be handled with proven incident response methodologies similar to the PICERL model as documented by Patrick Kral. Ultimately, there will be iterations of process improvement that help to shore up the security policies of the organization. Addressing the middle ground between acute incident handling and a mature security program will help to provide a stop-gap between the two, using a method called T.R.A.P.

T.R.A.P. is a simple list of steps that both immature and mature cybersecurity programs can use to take up slack that may be present during transitional periods. Triage and Resolution, Assessment, and Process Improvement make up the proposed methodology. It should be noted that this is a generalist approach to providing a structured process for organizations that may be looking to move past acute symptom management and into a more mature security framework. By keeping the approach simple, stakeholders and operators can work within a conducive atmosphere.

Triage and Resolution depend on the ability of a team to work on concise and emergent threats to information security. The previously mentioned PICERL model, as outlined in “The Incident Handler’s Handbook,” is an industry standard for handling incidents as they arise and should be considered the authority for information protection.

The Assessment phase is one in which the team can explore luxuries such as risk analysis and the quantification/qualification of threats as they relate to the vulnerabilities that assets face. Depending on the maturity of the cybersecurity program, this risk analysis can get very complex. Threat modeling may be introduced as the program develops.

The ultimate goal of the T.R.A.P. method is Process Improvement. This is not to say that the entire methodology is complete after a single iteration. Instead, this phase allows for the creation of policies and modifications in the form of risk mitigation. The continual improvement of processes can and should be done with project management methodologies. Care should be taken to assign the proper amount of resources to this phase, as cost and scope creep might derail improvements.
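The three phases above can be sketched as a tiny tracker that loops back to Triage after each iteration. This is a hypothetical illustration (class and field names are my own, not part of the methodology itself):

```python
# Minimal sketch of walking an incident through the T.R.A.P. phases.
from dataclasses import dataclass, field

PHASES = ["Triage and Resolution", "Assessment", "Process Improvement"]

@dataclass
class TrapIncident:
    name: str
    phase_index: int = 0
    notes: list = field(default_factory=list)

    @property
    def phase(self) -> str:
        return PHASES[self.phase_index]

    def advance(self, note: str) -> str:
        """Record the outcome of the current phase and move to the next;
        after Process Improvement, loop back to Triage (a new iteration)."""
        self.notes.append((self.phase, note))
        self.phase_index = (self.phase_index + 1) % len(PHASES)
        return self.phase
```

The wrap-around in `advance` captures the point that the methodology is not complete after a single pass; each cycle should leave improved policy behind in `notes`.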

When applied as a stop-gap, or as a tool for communicating with upper management, the T.R.A.P. methodology can be as complex as the situation calls for. The simplicity of a methodology or process can often be overlooked in favor of feature-rich solutions. Catering to the middle ground with this solution should help to ensure its success.

Exposure on the Homefront


The evolution of risk to corporate infrastructure has been accelerated by the COVID-19 pandemic over the last week. Previous exposures of low-value targets have grown into a risk that should be accounted for as people transition into their homes to work remotely. The pressure once applied to Internet Service Providers to fix these vulnerabilities is now becoming the responsibility of the corporations that own the risk. Employees who continue to protect revenue streams as they work from the homefront are entitled to better protection.


Cable Haunt is a vulnerability in the spectrum analyzer of Broadcom-based cable modems that allows for DNS rebinding attacks and relies on a default credential. A whitepaper published by the researchers outlines the details, accompanied by the site cablehaunt.com. This is one exposure in a series of flaws facing consumer equipment. Adding to this liability, the aging WPA2 standard has multiple problems of its own.


More than a billion devices are susceptible to Kr00k, an entry point for the execution of attacks similar to Cable Haunt. In layman’s terms, Kr00k is the door that can be used to gain access to the network that Cable Haunt makes susceptible. These attacks are not complicated, as can be seen when applying the MITRE ATT&CK framework as you would for any other corporate network. That is what your home network has become if you have begun to use company resources at home.


The threat to information at home is imminent. With low-hanging fruit available, the risk to both the worker and the company has increased as a result of measures to counter COVID-19. While the two targets may not typically fall to the same style of attacker, the resulting opening allows an opportunistic compromise of both corporate and personal data. Managing risks to our workforce is a necessary step in defending our enterprises.


The CPA Journal has neatly bundled information on how to deal with the risk that organizations face. The recommendations found on this site are staples of the cybersecurity diet and should be followed by those in charge of securing corporate networks. Industry-standard courses are available from companies such as SANS as well as formal institutions. These typically require significant resources, so it may be prudent to outsource some of the risk management changes you are considering. As with all business needs, establishing a relationship with a security professional should be accompanied by a sufficient level of insurance, experience, and aptitude.

After a recent conversation, it was brought to my attention that these vulnerabilities do not necessarily qualify for remediation on their own. Home networks are composed of many devices these days, including jailbroken phones, IoT devices, unpatched systems, Smart TVs, and so on. If you are still going to work from home without pushing for the security of your cable modem and WiFi appliances, you can at least segregate your network with different subnets and even VLAN tagging. Working from home on a standard, flat network is irresponsible. While security through obscurity is not a best practice, using non-standard subnetting and VLANs while you come up with a RAP solution is better than nothing.
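As a sketch of the subnetting idea (the addresses are illustrative, not a recommendation), Python’s standard `ipaddress` module shows how a private range can be carved into separate segments so work devices do not share a broadcast domain with IoT gear:

```python
# Illustrative only: split a private /22 into four /24 segments for
# work, personal, IoT, and spare devices.
import ipaddress

home = ipaddress.ip_network("192.168.0.0/22")
work, personal, iot, spare = home.subnets(new_prefix=24)

print(work)  # 192.168.0.0/24
print(iot)   # 192.168.2.0/24
```

Each segment would then get its own VLAN and firewall rules between them, keeping a compromised smart TV away from the machine holding corporate data.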

Microsoft’s Chromium Shell

Whether it is the start of a powerhouse relationship or the beginning of a feud, it is clear that something wasn’t working. While some will say that something was Microsoft’s failed replacement for Internet Explorer, Edge is being updated with a new Google flavor. It is easy to wonder if Microsoft’s move to open-source PowerShell in recent years is any indication of the direction that operating systems are headed. The Edge Chromium download is available here: https://support.microsoft.com/en-us/help/4501095/download-the-new-microsoft-edge-based-on-chromium.

Edge is not the first browser to use Chromium, but outside of Chrome it will probably be the biggest. Windows operating systems are still the most common in corporate environments, and corporate environments account for the largest share of the market. Corporations loom so large that antivirus companies barely support home users. The products they make for the home market were the butt of a joke for a rep at a luncheon earlier this month: when asked about the bifurcation of services for the products they delivered, the rep genuinely thought a joke was being made.

To make sure that Microsoft knew it was in for a battle, Google has been fighting back by tightening Chrome’s own security standards and demonstrating the insecurities of the Edge Chromium implementation. Google has employed the power of the pop-up to warn that Chrome is the browser to go to for security. Recent demonstrations of bolstered privacy through the development of DNS-over-TLS (RFC 7858) and DNS-over-HTTPS (RFC 8484) are responses to a problem Google has been causing with the tracking of US citizens’ data since its inception. Google has also started to tighten down the Google Partners program with the 50% rule.
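To see what DNS-over-HTTPS looks like on the wire, RFC 8484’s GET method sends a base64url-encoded (unpadded) DNS wire-format query in a `dns=` parameter. A minimal standard-library sketch (the resolver URL is a placeholder, and the query builder is deliberately bare-bones):

```python
# Build an RFC 8484-style DoH GET URL from scratch.
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS wire-format query: ID 0, RD=1, one question, class IN."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def doh_url(name: str, resolver: str = "https://dns.example/dns-query") -> str:
    """Encode the query as the dns= parameter (base64url, padding stripped)."""
    wire = build_dns_query(name)
    dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
    return f"{resolver}?dns={dns_param}"
```

Fetching such a URL with `Accept: application/dns-message` over HTTPS is what hides the lookup from on-path observers, which is the privacy claim at stake here.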

If you are looking to stay on the Google side of things, there is a solution for running traditionally Microsoft-based applications that were tied to Internet Explorer. The IE Tab extension will allow you to run IE securely within Chrome. It can run legacy web applications, has full support for GPO deployments, and is handy when you want to launch ActiveX virtual consoles to manage those blade servers without using a browser that has had as many problems as IE.

Overall, the move to replace IE with Edge via Chromium will be interesting. Watching the forking of software applications is not novel, but it sometimes leads to mismatched security updates. Citrix’s recent vulnerabilities might be attributable to maintaining a forked Linux distribution, and updating a maze of code can be a challenge. Palo Alto’s silent fix for GlobalProtect went unannounced for about 6 months last year because there was no responsible disclosure. Edge updates will probably come frequently, so if they are not automated it will be important to keep an eye on this software.

Intel ATM Chipset Vulnerability Chain

As a fan of Intel’s, one might find it difficult to remain loyal to the industry leader in processor manufacturing. There has been a series of events leading up to the release of the CacheOut (or L1DES) vulnerability disclosed by a research team from the University of Michigan and the University of Adelaide. While Intel claims that CVE-2020-0549 has medium severity, it is more likely that the words “little to no” apply to the number of people who have disabled hyperthreading or applied L1 Terminal Fault mitigations.

Virtualization has become the way of computing over the last decade. It allows for the deployment of a diverse environment using minimal resources. The author of this post has been researching virtualization technologies for the last 3 years, deploying test environments for cybersecurity training and research. The recommended mitigations for these vulnerability chains come at a significant cost to performance.

For details surrounding CacheOut, the whitepaper released on Monday, January 27th, is available here: https://cacheoutattack.com/CacheOut.pdf. The authors go into great detail describing aspects of the attack and why Intel’s patchwork mitigations have not been successful to this point. They also cover the impacts that this type of exploit has on virtualized processes, including the inherent risks of sharing resources within a hypervisor.

The likelihood of systems being compromised by these vulnerabilities depends on the controls in place within the systems being used. The severity of the impact that can be caused by the exploit, once realized, should be considered moderately high. Risk analysis for the vulnerability chain itself should be conducted by professionals who are familiar with the system architecture and the exploit methodologies.

This was labeled “Intel ATM Chipset Vulnerability Chain” because of the frequent distributions of cache from the exploits. The likelihood of organizations being able to switch to another manufacturer is not high because of the lack of corporate-level hardware bearing Ryzen processors. The good news is that Intel will issue a patch soon, and will probably continue to do so until they possess one of the most secure chips available in the market. Organizations should look for these patches and apply the mitigations already available as soon as possible. If your organization is still employing a perimeter/edge defense strategy, this might be a reason to consider alternate methods.
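For Linux hosts, a hedged sketch of checking and applying the interim mitigations mentioned above (the sysfs paths follow the mainline kernel layout; verify them against your distribution before relying on this):

```shell
# Diagnostic sketch: see what the kernel reports for the related
# microarchitectural flaws on this machine.
cat /sys/devices/system/cpu/vulnerabilities/l1tf
cat /sys/devices/system/cpu/vulnerabilities/mds

# Mitigation sketch: disable hyperthreading (SMT) at runtime until
# patched microcode is in place. Expect a performance cost.
echo off | sudo tee /sys/devices/system/cpu/smt/control
```

The performance trade-off noted earlier is exactly what the last command buys into, so it belongs behind a risk decision, not a blanket rollout.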

The Triad of Security

People have used models to create works and demonstrate consistency of creations for a very long time.  The use of models within security helps to characterize standards and promote efficiency when dealing with complex technologies such as integrated ownership and classification of data.  As with many tools, finding the model that suits the purpose at hand will help to achieve desirable results.

            The Bell-LaPadula Model is derived from four technical reports issued between 1972 and 1974.  These reports cover three aspects, with the fourth providing a summary that includes an interpretation of the model itself.  This model is considered by many to be a state-machine model, and it can be classified further as an information-flow model.  Bell-LaPadula uses three properties to provide a security model that can be applied to complex systems.

            The first property is the Simple Security Property.  The intention of this property is that there are categories of secrecy that ascend in confidentiality, with the highest levels being the most protected.  Under this property, a subject at one level cannot read information at a higher level, but can read information at a lower level.

            The second property is referred to as the Star (*) Property.  The main idea behind this property is that a subject cannot write down to a lower classification level of confidentiality.  The subject can write at or above its current level.

            The third property is known as the Strong Star (*) Property.  This property dictates that a subject cannot write to a higher or lower level; it can write only at its own level.  This third property can be seen as an integration of principles used for the integrity of data.  The Biba Model (a.k.a. the Biba Integrity Model), developed in 1975, is the purveyor of properties concerned with integrity.
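The three Bell-LaPadula properties above reduce to simple comparisons. A minimal sketch (integer levels and function names are my own illustration, where a higher number means more confidential):

```python
# Bell-LaPadula access checks: confidentiality flows only upward.

def blp_can_read(subject_level: int, object_level: int) -> bool:
    # Simple Security Property: no read up
    return subject_level >= object_level

def blp_can_write(subject_level: int, object_level: int) -> bool:
    # Star (*) Property: no write down
    return object_level >= subject_level

def blp_can_write_strong(subject_level: int, object_level: int) -> bool:
    # Strong Star (*) Property: write only at the subject's own level
    return subject_level == object_level
```

For example, a Secret-cleared subject (level 2) may read an Unclassified document (level 1) but may not write into it, since that write could leak Secret information downward.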

            With the Biba model we see that there are again three governing principles, used in an integrity-centric format for the hierarchical classification of systems within a state-machine model.  Accordingly, we see that this flow of information follows a succinct order reflective of a chain of command, with information being read from above and written at or below the subject’s level.

            The first property of Biba is the Simple Integrity Property.  Integrity is preserved here by not allowing a subject to read data at a lower integrity level.  This keeps lower-integrity, potentially corrupted data from propagating upward, providing the primary control for data integrity within a system.

            The second property of Biba is the Star (*) Integrity Property.  This property dictates that a subject cannot write to a level above its own.  This prevents lower-integrity subjects from contaminating higher-integrity data, providing a second control for this integrity preservation.

            The Invocation Property asserts that a subject cannot request (invoke) services from a subject at a higher integrity level.  This helps to ensure that access is only granted at or below the subject’s integrity level in relation to other subjects in the system.  The Biba Model helps to demonstrate integrity of data, while the Bell-LaPadula model preserves confidentiality.
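Biba’s three properties mirror Bell-LaPadula with the comparisons flipped, since integrity flows only downward. A matching sketch (again with illustrative integer levels, where a higher number means higher integrity):

```python
# Biba integrity checks: the comparisons are the duals of Bell-LaPadula.

def biba_can_read(subject_level: int, object_level: int) -> bool:
    # Simple Integrity Property: no read down
    return object_level >= subject_level

def biba_can_write(subject_level: int, object_level: int) -> bool:
    # Star (*) Integrity Property: no write up
    return subject_level >= object_level

def biba_can_invoke(subject_level: int, target_level: int) -> bool:
    # Invocation Property: no invoking higher-integrity subjects
    return subject_level >= target_level
```

Placing the two sets of checks side by side shows why a single lattice cannot enforce both at once: Bell-LaPadula pushes reads down and writes up, while Biba does exactly the opposite.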

            These two models are an integral part of cybersecurity.  Their use helps to translate security policies from sources such as NIST, ISO, and FIPS.  The implementation of policies through models such as Bell-LaPadula and Biba still requires interpretation and implementation.  This is where experience and training help cybersecurity professionals to implement the controls that are the foundation of this industry.

            Writing about Confidentiality and Integrity without mentioning Availability would be irresponsible.  When attackers cannot disrupt the confidentiality and integrity of systems and data, they turn to disturbing availability.  Availability is also compromised as the first two legs of the CIA triad are conceded.  A model for availability should revolve around the relationship between the subject and the object, as opposed to the relationship between the subject and the level of confidentiality or integrity.