JUSTFORTHESHELLOFIT

Did Intel Just Get the Axe?

Link to Paper

Intel could probably start causing fires with their processors and still be the number one provider of silicon in the world. They are not likely to find themselves filing for bankruptcy because a research team has continued to develop an exploit disclosed in January. While the recommended mitigations may reduce chipset features, Intel has provided a superior product for a significant duration. Cancel culture should not creep into decisions based on logic. I have reached out to these researchers about a possible interview.

Link to ZecOps 3-Part Write-Up

With SMBGhost and SMBleed attacking the SMB compression vector in Windows, the CacheOut and SGAxe team has continued the trend of maintaining and growing a documented vulnerability with expertise in both marketing and technical aptitude. It is apparent the CVE chain will likely give way to the gamification of vulnerability disclosure. That is not to say CVEs will no longer be used, but that the impact of vulnerability disclosure may give precedence to those able to market their wares accordingly.

CVEdetails.com

Does anyone find it strange that VMware has not had any vulnerabilities published in what looks like six months? I was reviewing some of the documentation and there appears to be a configuration for an NFS share that seems a little sub-par. I know, misconfigurations are different than vulnerabilities. That being said, for those of you misconfiguring your NFS shares by granting read/write access on the basis of an IP address alone, I can assure you that an attacker who assumes that IP address can use the shared directory to compromise your VMs and datastores, with severe impact, especially if the compromise persists longer than your incremental backup or snapshot window.
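As a rough illustration of the pattern (and not something taken from VMware's documentation), here is what such an export might look like on a generic Linux NFS server; the paths and addresses are hypothetical:

    # /etc/exports -- sub-par: read/write access gated only by a single client IP
    /exports/vmstore 192.168.10.5(rw,no_root_squash)

    # Tighter: a dedicated storage subnet, root squashing, synchronous writes
    /exports/vmstore 192.168.10.0/28(rw,root_squash,sync)

Anyone who can claim that single IP address gets the first export's full read/write access, which is the crux of the problem described above.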

Verizon’s 2020 Data Breach Investigations Report

2020 DBIR

While it comes as no surprise that phishing attempts are going unreported in the Educational Services section of the DBIR, the disproportionate number of credential stuffing attempts indicates that this sector is behind the times on enforcing security best practices for AAA policies. An alarming increase in ransomware-related malware attacks might be telling of either a weakness in data storage redundancy, or a willingness to shell out the dough required to unlock files.

This last week, Verizon released its annual Data Breach Investigations Report for those that are interested. With a statistical analysis of trends in 16 different industries, it is evident that Manufacturing still holds the top spot for Cyber-Espionage. Given the historical significance within the intelligence realm, misinformation campaigns filled with tactically engineered, flawed processes may prove fruitful in this arena. It is notable that this year’s numbers have decreased for this category.

Attack paths in incidents (DBIR, p. 31)

While the portrayal of masterminds in hacking movies makes for great films, the complexity of the attacks studied here does not vary by a great order of magnitude. A large majority of the security incidents remained at or below 7 steps. Coupled with the increase in DDoS and Web Application attacks, this might be indicative of unpatched systems. While it may be difficult to correlate the use of standard container images and readily available orchestration systems, the burden of configuration still lies on product owners within organizations instead of the providers of those resources. There must be an urgency to change how default applications and containers are deployed, coupled with a standardized and timely update methodology, if organizations want to change these annual traditions.

Connection attempts by port (DBIR, Figure 22)

With honeypots picking up similar patterns for Telnet and SSH, it is clear that there is still a reason for people to scan these ports. The use of standardized ports for internet-facing traffic should only be done as required for legacy software, and probably not at all. There are about 65,000 reasons not to be using these if you know what I mean.
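For example, moving SSH off its well-known port is a one-line change on most Linux hosts (the port number below is an arbitrary stand-in; restart sshd after editing, and remember this is obscurity, not a substitute for keys and patching):

    # /etc/ssh/sshd_config -- one of roughly 65,000 alternatives to port 22
    Port 50022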

Overall, this report was very informative. There is much more in it than what was covered in this short blog. The speculation found within this writing is just that, speculation. It does not mean that it is right or wrong, but an estimation of a valid possibility that might fill the gaps in the solid data as it is presented. There may be further analysis with a more academic approach coming; this was just for the shell of it.

Setting the T.R.A.P.

https://www.qsl.net/2e0waw/gintrap/150517-run-traps-hi.jpg

Sometimes it takes a cybersecurity incident for a company to start moving resources into securing information within an organization. Such incidents can be handled with proven incident response methodologies like the PICERL model as documented by Patrick Kral. Ultimately, there will be iterations of process improvement that help to shore up the security policies of the organization. Addressing the middle ground between incident-driven firefighting and a mature security program can be done with a stop-gap method called T.R.A.P.

T.R.A.P. is a simple list of steps that both immature and mature cybersecurity programs can use to take up slack that may be present during transitionary periods. Triage and Resolution, Assessment, and Process Improvement make up the proposed methodology. It should be noted that this is a generalized approach to providing a structured process for organizations that may be looking to move past acute symptom management and into a more mature security framework. By keeping a simple approach in mind, stakeholders and operators can work within a conducive atmosphere.

Triage and Resolution depend on the ability of a team to work concisely on emergent threats to information security. The previously mentioned PICERL model, as outlined in “The Incident Handler’s Handbook,” is an industry standard for handling incidents that arise. It should be considered the authority for information protection.

The Assessment phase is one in which the team can explore luxuries such as Risk Analysis and the quantification/qualification of threats as they relate to the vulnerabilities that assets face. Depending on the maturity of the cybersecurity program, this Risk Analysis can get very complex. Threat modeling may be introduced as the program develops.

The ultimate goal of the T.R.A.P. method is Process Improvement. This is not to say that the entire methodology is complete after a single iteration. Instead, this phase allows for the creation of policies and modifications in the form of Risk Mitigation. The continual improvement of processes can and should be done with project management methodologies. Care should be taken to assign the proper amount of resources to this phase, as cost overruns and scope creep might derail improvements.

When applied as a stop-gap, or as a tool for communicating with upper management, the T.R.A.P. methodology can be as complex as the situation calls for. The simplicity of a methodology or process can often be overlooked in favor of feature-rich solutions. Attempting to cater to the middle ground with this solution should help to ensure its success.

Exposure on the Homefront

http://fresnostate.edu/president/coronavirus/

The evolution of risk to corporate infrastructure has been augmented by the COVID-19 pandemic over the last week. Previous exposures of low-value targets have grown into a risk that should be accounted for as people transition into their homes to work remotely. The pressure once applied to Internet Service Providers to fix these vulnerabilities is now becoming the responsibility of the corporations that own the risk. Employees who continue to protect revenue streams as they work from the homefront are entitled to better protection.

https://cablehaunt.com/

Cable Haunt is a vulnerability in the spectrum analyzer of Broadcom-based cable modems that can be reached through DNS rebinding attacks and is guarded only by a default credential. A whitepaper published by the researchers is available outlining the details, accompanied by the site cablehaunt.com. This is one exposure in a series of flaws that consumer equipment faces. Adding to this liability, the aging WPA2 standard has multiple problems of its own.

https://www.eset.com/int/kr00k/

More than a billion devices are susceptible to Kr00k, and this is an entry point for the execution of attacks similar to Cable Haunt. In layman’s terms, Kr00k is the door that can be used to gain access to the network on which Cable Haunt leaves devices susceptible. These attacks are not complicated, as can be seen when applying the MITRE ATT&CK framework as you would for any other corporate network. That is what your home network has become if you have begun to use company resources at home.

https://www.enisa.europa.eu/news/enisa-news/cybersecurity-in-the-home-ecsm-week-3

The threat to information at home is imminent. With low-hanging fruit available, the risk to both the worker and the company has increased as a result of measures to counter COVID-19. While the targets of each may not typically fall under the same style of attacker, the resulting overlap allows an opportunistic approach that can compromise both corporate and personal data. Managing risks to our workforce is a necessary step in defending our enterprises.

https://www.cpajournal.com/2019/06/19/auditing-for-cybersecurity-risk/

The CPA Journal has neatly bundled information on how to deal with the risk that organizations face. The recommendations found on this site are staples of the cybersecurity diet and should be followed by those in charge of securing corporate networks. Industry-standard courses are available from companies such as SANS as well as formal institutions. These typically require significant resources, so it may be prudent to outsource the risk management changes you are considering. As with all business needs, establishing a relationship with a security professional should be accompanied by a sufficient level of insurance, experience, and aptitude.

After a recent conversation, it was brought to my attention that these vulnerabilities do not necessarily qualify for remediation on their own. Home networks are comprised of many devices these days, including jailbroken phones, IoT devices, unpatched systems, Smart TVs, and the list goes on. If you are going to keep working from home without pushing for the security of your cable modem and WiFi appliances, you can still segregate your network with different subnets and even VLAN tagging. Working from home on the standard networks (10.0.0.0/24, 172.16.0.0/24, 192.168.0.0/24, 192.168.1.0/24) is irresponsible. While security through obscurity is not a best practice, using non-standard subnetting and VLANs while you come up with a RAP solution is better than nothing.
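As a rough sketch, re-addressing a Windows machine into a non-standard RFC 1918 subnet can be done from PowerShell; the interface alias, addresses, and VLAN ID below are hypothetical placeholders for your own environment:

    # Assign an address in a non-default RFC 1918 subnet (placeholders throughout)
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.137.86.20 -PrefixLength 24 -DefaultGateway 10.137.86.1
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.137.86.1
    # Tag the adapter's traffic onto a VLAN, if the NIC driver supports it
    Set-NetAdapter -Name "Ethernet" -VlanID 42

Your router or WiFi appliance would need matching subnet and VLAN settings for this to be useful, and again, this buys obscurity while you work on real remediation, not protection in itself.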

Microsoft’s Chromium Shell

Whether it is the start of a powerhouse relationship or the beginning of a feud, it is clear that something isn’t working. While some will say that something was Microsoft’s failed replacement for Internet Explorer, Edge is being updated with a new Google flavor. It is easy to wonder if Microsoft’s move to open-source PowerShell in recent years is any indication of the direction operating systems are headed. The Edge Chromium download is available here: https://support.microsoft.com/en-us/help/4501095/download-the-new-microsoft-edge-based-on-chromium.

Edge is not the first browser to use Chromium, but outside of Chrome it will probably be the biggest. Windows operating systems are still the most common in corporate environments, and corporate environments account for the largest share of the market. Corporate business is so large that antivirus companies barely support home users. The products they make for the home market were the butt of a joke for a rep at a luncheon earlier this month. When asked about the bifurcation of services for the products they delivered, they genuinely thought a joke was being made.

To make sure that Microsoft knew it was in for a battle, Google has been fighting back by tightening Chrome’s own security standards and demonstrating the insecurities of the Edge Chromium implementation. Google has employed the power of the pop-up to warn that Chrome is the browser to go to for security. Recent demonstrations of bolstering privacy through the development of DNS-over-TLS (RFC 7858) and DNS-over-HTTPS (RFC 8484) are indicative of a problem they have been causing with the tracking of US citizen data since their inception. They have also started to tighten down the Google Partners program with the 50% rule.

If you are looking to stay on the Google side of things, there is a solution for running traditionally Microsoft-based applications that were tied to Internet Explorer. The IE Tab extension will allow you to run IE within Chrome. It is able to run legacy web applications and also has full support for GPO deployments. It comes in handy when you want to launch ActiveX virtual consoles to manage those blade servers and you don’t want to use a browser that has had as many problems as IE.

Overall, the move to replace IE with Edge via Chromium will be interesting. Watching the forking of software applications is not novel, but it sometimes leads to mismatched security updates. Citrix’s recent vulnerabilities might be attributable to maintaining a forked Linux distribution, and updating a maze of code can be a challenge. Palo Alto’s silent fix for GlobalProtect went unannounced for about six months last year, as there was no responsible disclosure. Edge updates will probably come frequently; if they are not automated, it will be important to keep an eye on this software if it is to be used.

Intel ATM Chipset Vulnerability Chain

As a fan of Intel’s, one might find it difficult to remain loyal to the industry leader in processor manufacturing. There has been a series of events leading up to the release of the CacheOut (or L1DES) vulnerability that was disclosed by a research team from the University of Michigan and the University of Adelaide. While Intel claims that CVE-2020-0549 has medium severity, it is more likely that the words “little to no” apply to the number of people who have disabled hyperthreading or applied L1 Terminal Fault mitigations.

Virtualization has become the dominant mode of computing over the last decade. It allows for the deployment of a diverse environment using minimal resources. The author of this post has been researching virtualization technologies over the last 3 years, deploying test environments for cybersecurity training and research. The recommended mitigations for these vulnerability chains come at a significant cost to performance.

For details surrounding CacheOut, the whitepaper released on Monday, January 27th, is available here: https://cacheoutattack.com/CacheOut.pdf. The authors go into great detail describing aspects of the attack and why Intel’s patchwork mitigations have not been successful to this point. They also cover the impacts that this type of exploit has on virtualized processes, including the inherent risks of sharing resources within a hypervisor.

The likelihood of systems being compromised by these vulnerabilities depends on the controls that are in place within the systems being used. The severity of the impact that can be caused by the exploit, once realized, should be considered moderately high. Risk analysis for the vulnerability chain itself should be conducted by professionals who are familiar with the systems architecture and the exploit methodologies.

This was labeled “Intel ATM Chipset Vulnerability Chain” because of how frequently these exploits dispense cache. The likelihood of organizations being able to switch to another manufacturer is not significantly high because of the lack of corporate-level hardware bearing Ryzen processors. The good news is that Intel will issue a patch soon, and will probably continue to do so until they possess one of the most secure chips available on the market. Organizations should look for these patches and apply the mitigations already available as soon as possible. If your organization is still employing a perimeter/edge defense strategy, this might be a reason to consider alternate methods.
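One quick way to see where a Windows host stands on this family of mitigations is Microsoft’s SpeculationControl PowerShell module. This assumes PowerShell Gallery access, and whether the module’s checks cover CacheOut specifically may vary with the module version:

    # Install and run Microsoft's speculative-execution mitigation status check
    Install-Module -Name SpeculationControl -Scope CurrentUser
    Get-SpeculationControlSettings

The output lists which hardware and OS mitigations (such as those for L1TF and MDS) are present and enabled, which is a reasonable starting point for the risk analysis described above.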

The Triad of Security

People have used models to create works and demonstrate consistency of creations for a very long time. The use of models within security helps to characterize standards and promote efficiency when dealing with complex technologies such as integrated ownership and classification of data. As with many tools, finding the model that is suitable for the purpose at hand will help to achieve desirable results.

The Bell-LaPadula Model is derived from four technical reports issued between 1972 and 1974. These reports cover three aspects, with the fourth resulting in a summarization including an interpretation of the model itself. This model is considered by many as a state-machine model, and it can be classified further as an information-flow model. Bell-LaPadula uses three properties to provide a security model that can be applied to complex systems.

The first property is the Simple Security Property. The intention of this property is that there are categories of secrecy that ascend in confidentiality, with the highest levels being the most protected. Under this property, a subject at one level cannot read information at a higher level, but can read information at a lower level (“no read up”).

The second property is referred to as the Star (*) Property. The main idea behind this property is that a subject cannot write down to a lower classification level of confidentiality. The subject can write at or above its current level (“no write down”).

The third property is known as the Strong Star (*) Property. This property dictates that a subject can write neither higher nor lower, only at its own level. This third property can be seen as an integration of principles used for the integrity of data. The Biba Model (a.k.a. the Biba Integrity Model), developed in 1975, is the purveyor of properties concerned with integrity.

With the Biba Model we again see three governing principles, used in an integrity-centric format for the hierarchical classification of systems within a state-machine model. Accordingly, this flow of information follows a succinct order reflective of a chain of command, with information being read from above and written at or below the subject’s level.

The first property of Biba is the Simple Integrity Property. Integrity is preserved by not allowing a subject to read data at a lower integrity level (“no read down”). This keeps lower-integrity data from contaminating higher-integrity subjects, providing the primary control for data integrity within a system.

The second property of Biba is the Star (*) Integrity Property. This property dictates that a subject with a given authority cannot write to a level above its own (“no write up”). This keeps lower-integrity subjects from corrupting higher-integrity data, providing a second control for this integrity preservation.

The Invocation Property asserts that lower-integrity processes cannot request an elevated level of access. This helps ensure that access is only granted at or below the integrity level in relation to other subjects in the system. The Biba Model helps to preserve the integrity of data, while the Bell-LaPadula Model preserves confidentiality.
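To make the mirrored rules concrete, here is a toy sketch in PowerShell, assuming levels are simple integers where a higher number means a higher classification or integrity level. This is an illustration of the properties, not a reference implementation of either model:

    # Bell-LaPadula: no read up, no write down
    function Test-BlpRead   { param([int]$Subject, [int]$Object); $Subject -ge $Object }
    function Test-BlpWrite  { param([int]$Subject, [int]$Object); $Subject -le $Object }
    # Biba: no read down, no write up (the exact mirror image)
    function Test-BibaRead  { param([int]$Subject, [int]$Object); $Subject -le $Object }
    function Test-BibaWrite { param([int]$Subject, [int]$Object); $Subject -ge $Object }

    Test-BlpRead -Subject 3 -Object 1     # True: a high-level subject may read down
    Test-BlpWrite -Subject 3 -Object 1    # False: it may not write down

Reading the two pairs side by side shows why the models are often described as duals of one another.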

These two models are an integral part of cybersecurity. Their use helps to translate security policies from sources such as NIST, ISO, and FIPS. The implementation of policies through models such as Bell-LaPadula and Biba still requires interpretation and implementation. This is where experience and training help cybersecurity professionals to implement the controls that are the foundation of this industry.

Writing about Confidentiality and Integrity without mentioning Availability would be irresponsible. When attackers cannot disrupt the confidentiality and integrity of systems and data, they turn to disturbing their availability. Availability is also compromised as the first two legs of the CIA triad are conceded. A model for availability should revolve around the relation between the subject and the object, as opposed to the relation between the subject and the level of confidentiality or integrity.

Privilege Escalation

When I hear this phrase as a cybersecurity professional, I tend to think of misconfigured permissions and unpatched software running as a system account. As a penetration tester you may think of terms like foothold and lateral movement. Most recently these two words have taken on another meaning for me as the holiday season is in full swing.

Working with others while doing something you enjoy is a privilege in itself. What you choose to do with that privilege is reflective of the effort you put forth. There are probably not a lot of successful people who were able to accomplish great things by themselves. Currently I have the privilege of being surrounded by people that are talented, smart, and caring.

With that being said, I feel an obligation to constantly grow and improve my personal standards. I have been on the other side of this scenario as well, working in an environment where you feel like you’re walking on eggshells. If you find yourself in that situation, exercise your privilege escalation abilities and find an environment that suits you.

So you want to learn PowerShell

PowerShell is a cross-platform shell and scripting language. It can be used for a wide variety of tasks, and plenty of support is available for it. You can begin learning it by diving headfirst into the deep end, but you will get more use out of it if you understand a few concepts first.

The first thing you should familiarize yourself with is the command structure. Commands follow a verb-noun pair that makes them easier to work with. For instance, if you are trying to figure out the name of a command, you can list every command available to you with “Get-Command”.

While you may see all of the commands, you probably don’t need all of them. Sometimes you know what you want to do, and this is where understanding the verb in the verb-noun pair comes into play. A list of the approved verbs can be found here: https://docs.microsoft.com/en-us/powershell/scripting/developer/cmdlet/approved-verbs-for-windows-powershell-commands?view=powershell-6
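The verb list is also available right in the shell, and Get-Command can filter on a verb directly:

    Get-Verb                 # list the approved verbs and their groupings
    Get-Command -Verb Get    # list only the commands that use a given verb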

Using these verbs you can sort through the Get-Command output using a pipe character: “|”. The pipe after a command feeds the output of that command into the next one. So if you issue a command such as “Get-Command | findstr Get”, you will see that PowerShell applied “findstr” (find string) to the results of Get-Command, and it only shows you the commands with Get in them. Note that findstr is case sensitive, and there are more ways to manipulate these outputs.
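For what it’s worth, there are also PowerShell-native ways to do the same filtering, and these are not case sensitive by default:

    Get-Command -Name "*Get*"                        # wildcard match on the command name
    Get-Command | Where-Object Name -like "*Get*"    # the same filter applied in the pipeline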

Since we are talking about manipulating the output of commands, now would be a good time to cover some other commands that can be useful with piping. After a pipe, you can use “less” or “more” to page through the data, i.e.: “Get-Command | less” or “Get-Command | more”. (Note that “more” ships with Windows, while “less” is typically found on Linux and macOS or added by tools such as Git for Windows.) You can even combine the last two ideas: “Get-Command | findstr Get | less”.
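If neither pager is available, PowerShell has one built in:

    Get-Command | Out-Host -Paging    # page output one screen at a time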

Sometimes you don’t want to see the output at all. Instead you can redirect it using “>” and “>>”. Be careful when using these, because they look similar and can confuse you. The “>” symbol redirects the Success Stream, and it can be used to write to a file in the directory of your choosing. If no directory is given, the file will be placed in the directory you are currently in, like so: “ls > .\directory.txt”. This will create a file in your current directory containing a list of the files from your current directory.

If you want to add the files from another directory to your list, you can do so with “>>”. If you are trying to append a listing of the parent directory to your previous command, you could use: “ls ..\ >> .\directory.txt”. Here you will find that you have added the parent directory to the end of the file directory.txt. The “cat” command can then be used to display the file. If you only want to see the end of the file, you could issue: “cat .\directory.txt | tail”. To help you remember the tail command over using tails, remember you are only using one cat.
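Note that “ls”, “cat”, and “tail” lean on aliases and Unix-style tools (tail is not a built-in cmdlet on a stock Windows box), so the same workflow can be spelled out natively:

    Get-ChildItem > .\directory.txt          # ls is an alias for Get-ChildItem
    Get-ChildItem ..\ >> .\directory.txt     # append the parent directory listing
    Get-Content .\directory.txt -Tail 10     # cat is an alias for Get-Content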

The commands I have gone over are not just for PowerShell. Well, Get-Command might be, but you will find piping and redirection are used in a lot of different shells. Understanding how to control what the output of a command is doing is part of streamlining the process. I hope this helps you on your PowerShell (or any other shell) journey.

Risky Business

Management of risk for federal compliance is intrinsically linked to the National Institute of Standards and Technology (NIST) Special Publications. Even if you are not mandated to follow these guidelines, they do provide a starting point from which you can find structure for advancing the maturity of your cybersecurity program. These publications take time to go through, but let’s cover the use of four that pertain to risk management. This is an introduction to NIST publications, not an exhaustive description of all that are involved in the process.

These are going to be laid out in numerical order, not necessarily in order of significance. NIST SP 800-30 Rev 1 (and SP 800-39) provide guidance on risk assessments for federal information systems. There are a few key takeaways here, including the framing of the risk assessment. In particular, the Generic Risk Model with Key Risk Factors (Figure 1, seen below) should be understood, as it is a point of focus for the conversation moving forward.

Figure 1: Generic Risk Model with Key Risk Factors (NIST SP 800-30 Rev 1, p. 12)
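As a toy illustration of how the model’s factors combine (the scales and numbers here are made up for demonstration, not prescribed by NIST):

    # Risk as a function of likelihood and impact (hypothetical values and scales)
    $likelihood = 0.4    # chance a threat event occurs and succeeds
    $impact     = 8      # adverse impact rating on a 0-10 scale
    $risk = $likelihood * $impact
    "Risk score: $risk out of 10"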

Moving on to the next special publication, NIST SP 800-37 Rev 2 provides the actual framework for managing risk in federal information systems, as well as guidance for the Federal Information Security Modernization Act of 2014 (FISMA) and the Privacy Act of 1974 (PRIVACT). The Risk Management Framework embraces the idea of incremental improvement, as it demonstrates the need for a continual process, depicted below (Figure 2). This publication references numerous other publications throughout the industry, but for the sake of brevity, we will cover two more in this article.

Figure 2: Risk Management Framework (NIST SP 800-37 Rev 2, p. 9)

In numerical order, this leads us to NIST SP 800-53 Rev 5. This particular publication is used to implement controls in correlation with the RMF from the previous paragraph and in concert with the classification from the next one. Security controls tend to be the meat of the conversation for mitigating risk, and they can even be used for parts of incident response such as planning, identification, containment, eradication, recovery, and lessons learned. (The PICERL format is more SANS and less NIST, but they do line up pretty well.) These controls will undoubtedly incorporate more characteristics from the CIS 20 in the future, as they are already leaning that way. (The CIS CSC 20 used to be the SANS 20 until the rebranding in 2015.) My favorite part of these controls is the listing of the honeypot and the honeyclient. The honeyclient translates into a control that is a computer used to search out malicious activity on the internet (Figure 3). That’s right, when Bob down the hall is researching elite hacks he is really just implementing a security control for the company!

Figure 3: Honeyclient Description from SC-35 (NIST SP 800-53 Rev 5, p. 252)

NIST SP 800-60 is the Guide for Mapping Types of Information and Information Systems to Security Categories. The Risk Management Framework is used to quantify the impact and severity of the risks an organization may face, and to help manage those risks through the use of controls. The need for classification of systems is met by mapping data and systems to security categories. This qualitative analysis is a precursor and requirement for the quantitative practice generally outlined in this article.
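A categorization in this style might look like the following hypothetical example for a single information type (the labels and impact levels are illustrative only):

    # FIPS 199-style security category for one information type (hypothetical)
    $securityCategory = @{
        Confidentiality = 'Moderate'
        Integrity       = 'Low'
        Availability    = 'Low'
    }
    # The overall category takes the high-water mark: Moderate in this case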

This is just a quick overview of some of the NIST publications that you should be using for federal information systems and compliance. It is not comprehensive, but it offers insight into the correlation between the documents and the references they provide. There are those in the private sector who will point to other frameworks that are just as good, if not better. The idea here is that if you do not need to comply with federal standards, you can still use the best parts of all frameworks to create your own security program. This should be done with great scrutiny, however, as you may find that certain frameworks require a structure that is not compatible with what you are trying to implement. While you may be able to swap a Corvette engine into the shell of a sub-compact car, you will probably need to change the transmission at a minimum. Frankensteining together a security program can lead to dire consequences.

Links:

https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-30r1.pdf

https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r2.pdf

https://csrc.nist.gov/csrc/media/publications/sp/800-53/rev-5/draft/documents/sp800-53r5-draft.pdf

https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-60v1r1.pdf