JUSTFORTHESHELLOFIT

Cybersecurity Weapon Control

While gun control in the United States is a very passionate topic for some, cybersecurity weapons are freely available to anyone with the inclination to obtain them. The recent disclosure of several cybersecurity tools (including the paid-for Cobalt Strike) may spark another conversation about the regulation of software. Should we be required to register and license cybersecurity weapons in the modern era?

The open-source nature of collaborative software development grants greater access to enthusiasts, professionals, and criminals alike. Some tools gate features on a pay-to-play basis, while other packages require an outright purchase and license to use. The ecosystems built around Linux, Mac, and Windows are prolific with free software written for their communities, albeit closed source at times.

This freedom to obtain and use software may find itself regulated in the near future. Accountability issues arise when cyber-weapons fall into the hands of threat actors. If software engineers could create a dependence on an online library or function tied to registration, that dependency could serve as a security control.
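Purely as a thought experiment, such a control might look like a tool that refuses to load its modules until an online registry vouches for its license. A minimal sketch follows; the registry URL, token format, and fail-closed policy are entirely hypothetical assumptions, not an existing service or API.

```python
# Hypothetical sketch of an online registration gate for a security tool.
# The registry endpoint, token format, and policy are illustrative only.
import json
import sys
import urllib.request

REGISTRY_URL = "https://registry.example.org/api/v1/verify"  # hypothetical service


def is_registered(license_token: str) -> bool:
    """Ask the (hypothetical) registry whether this copy of the tool is licensed."""
    req = urllib.request.Request(
        REGISTRY_URL,
        data=json.dumps({"token": license_token}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.load(resp).get("registered", False)
    except (OSError, ValueError):
        # Fail closed: if the registry cannot be reached, nothing loads.
        return False


if __name__ == "__main__":
    token = sys.argv[1] if len(sys.argv) > 1 else ""
    if not is_registered(token):
        sys.exit("This build requires an active registration to run.")
    print("Registration verified; loading modules...")
```

Whether such a dependency could survive offline use or simple patching is exactly the kind of question any talk of regulation would have to answer.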

Without advocating for controlling what is perceived as an open and free resource, it might be time to consider the registration of cyberweapons and their use online. When clients such as the U.S. Government become targets of an Advanced Persistent Threat, a window of opportunity opens to influence the newly receptive parties affected. Not that drastic measures are warranted, but this could be the time to construct the shell of the conversation.

Supply Chain Attacks

A supply chain attack is an indirect attack that originates from an organization that provides a good or service to the company being attacked. The idea is that while the primary organization (in this case, the US Government) has strict security controls, it is unlikely that all of its supplying vendors maintain the same controls.

We can see that the trust relationship, or relational boundary, between the primary organization and the vendor is what is truly being compromised. When the primary organization develops outside relationships without requiring the same set of controls it uses internally, it becomes susceptible to this type of attack.

The US Government typically relies on practices and control standards that are guided by a series of publications referred to as NIST Special Publications. While there are many different publications, NIST Special Publication 800-53 Rev 4 (Security and Privacy Controls for Federal Information Systems and Organizations) is of particular note concerning the management of internal systems and can be found here: https://nvd.nist.gov/800-53/Rev4/impact/high.

For agencies within the US Government that work with other companies, NIST 800-171 Rev 2 and the burgeoning CMMC (Cybersecurity Maturity Model Certification) provide guidance on how business should be conducted. Of course, just informing you that these standards and certifications exist is not enough to satisfy our need to understand the complexities of what has gone on.

For simplicity's sake, let's say a man named Adam runs an organization named ACME. He has to manage all of the computers and doesn't have time to do it himself. Instead, last March he turns to industry-leading software to manage his assets, and he happily does business for the rest of the year.

In December he finds out that the software he was using has been compromised, even though he has the best security around. He doesn't have log retention for the last nine months because there were no indicators that he was compromised. Now Adam has to assume that everything in his company could have been compromised, and this incident costs ACME more money than the management software would ever have saved.

That is what we are looking at here. If you take this example and apply it to every possible customer using SolarWinds (the orion.dll file), you will find that the problem is systemic and has grown out of control.

The interesting part about all of this is that the threat actor behind the attack is supposed to be an APT (Advanced Persistent Threat). When you look at the big picture, it seems that an APT would have patched the systems after obtaining access in order to prevent other APTs from conducting similar attacks. Being discovered this late into a hack may be an indicator of greed or laziness on the attackers' part.

Security Responsibilities that are a Bit Cloudy

When it comes to securing data in a cloud environment, the responsibility for security can be a bit cloudy. While cloud providers do clearly state who is responsible depending on the level of service, ultimately the responsibility should be shared by all parties involved. Whether data is in storage, in transit, or in processing, its security should be managed with a holistic approach and with the understanding that safeguarding sensitive data is a primary function, not a secondary afterthought.

In a recent conversation with the AWS-certified Bruce Elgort, he shared the view that the auditing tools provided by Amazon are sufficient. This train of thought puts the responsibility on the team configuring the S3 buckets, shifting the risk away from the vendor. A point was raised in response: it may be the governing body's responsibility to safeguard the data of its citizens.
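For what it is worth, that team-side auditing can start as small as a short script against the AWS API. The sketch below uses real boto3 calls (list_buckets and get_public_access_block), but it is only a minimal example under the assumption that credentials are already configured; it is not a substitute for Amazon's own tooling.

```python
# Minimal self-audit sketch: report whether each S3 bucket in the account
# has all four public access block settings enabled. Assumes boto3 is
# installed and AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        # No public access block configured at all -- worth a closer look.
        fully_blocked = False
    print(f"{name}: public access fully blocked = {fully_blocked}")
```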

When looking at the bigger picture, it is revealed that many different parties share different parts of the responsibilities discussed here. In cybersecurity it is well known that compliance drives spending on regulatory controls; however, achieving compliance does not necessarily achieve security, or vice versa. Ultimately, the sector of business dictates which compliance standards apply. Is it possible that more regulation is needed for cloud vendors?

BYON: The Next Big Security Risk

Bring Your Own Networking (BYON) appears to be the newest "Bring Your Own" fad given the drastic increase in remote work. When one looks around, there is not a lot of information out there, which is no wonder considering how similar BYON and BYOD (Bring Your Own Device) are. Both can boost productivity, cut costs, and spread the demand for network resources out onto outside networks. Just as BYOD has its own unique challenges, so does BYON. NIST SP 800-124, section 2.2.3 indicates that "…organizations should plan their mobile device security on the assumption that the networks between the mobile device and the organization cannot be trusted."

BYON can expose an enterprise network to risks that it would not face otherwise. Let's go over an example of one situation a company could face. Employees are working from home and can connect to corporate resources using multiple connections: a home broadband network, a company VPN connection, or a mobile hotspot. This allows an employee to work in three different realms at once. While this allows for greater productivity, Michael Tucker believes that it may be exposing companies to new risks. An employee can open a document on one connection, work with a database on another, and manipulate cloud data on a third. The problem with this scenario is that external networks with limited controls are difficult to secure.

With multiple connections in use, a security incident becomes more likely when network traffic and computing resources are not properly secured. Through PT Network Attack Discovery, Positive Technologies disclosed that 97% of sample networks showed suspicious activities and 94% of networks were out of compliance with IS policies. Imagine an employee or vendor downloading confidential data over an insecure network. There is a possibility that someone unauthorized is listening to the traffic and could steal or alter the data in transit. The corporate network is also more susceptible to viruses and malware contracted during communications on an external network, which could spread from any device connected to the insecure network to the enterprise network itself.

This all sounds scary and perhaps insurmountable, but it is not. According to a Tech Republic interview with SysAid CEO Sarah Lahav, the best defense is a good BYOD policy. Now there is a lot of information about that!

According to Chris Witeck, senior director of product marketing at remote access provider iPass, there are many steps that can be taken to help secure this fast-growing trend, among them not allowing unauthorized access. This can be done by creating policy using mobile device management (MDM) software like Citrix Endpoint Manager. This solution allows a company to secure endpoints while providing a centralized computing experience.

Across the more popular articles on this subject, the most commonly cited and effective solution is end-user education. Educating users instills an awareness of proper security practices. There can be consequences for breaking these security practices as well, which might also serve as a good deterrent for improper behavior.

In the end, there are a lot of good things about BYON. It provides greater employee satisfaction and lower corporate costs, to name a couple. There are also significant security threats. With proper security policies and end-user education, the threat of a data breach is greatly reduced.

Don’t be a Bad Neighbor

This last Tuesday has come and gone, and we are left with another high-ranking vulnerability patched by Microsoft during its monthly upkeep. CVE-2020-16898, aka "Bad Neighbor," is an IPv6 vulnerability "which allows an attacker to send maliciously crafted packets to potentially execute arbitrary code on a remote system," according to Steve Povolny and Mark Bereza in a post at McAfee Labs.

Apparently the Windows TCP/IP stack has trouble handling ICMPv6 Router Advertisement packets that make use of the Recursive DNS Server (RDNSS) option. Per RFC 8106, the Length field of this option should be an odd value of 3 or greater, in other words not a multiple of 2. When a non-compliant (even) Length value is supplied, unpatched systems can suffer a buffer overflow, which is just a way of saying that data or instruction sets could be written into memory for execution.
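As a rough illustration (detection only, not an exploit), a Scapy sketch can watch for Router Advertisements whose RDNSS option carries a non-compliant Length. The ICMPv6ND_RA and ICMPv6NDOptRDNSS classes are real Scapy layers; the handling of a deliberately malformed packet is simplified here and would need hardening in practice.

```python
# Flag ICMPv6 Router Advertisements whose RDNSS option Length is not an odd
# value of at least 3 (the RFC 8106 requirement). Requires scapy and root.
from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6ND_RA, ICMPv6NDOptRDNSS


def check_ra(pkt):
    if pkt.haslayer(ICMPv6ND_RA) and pkt.haslayer(ICMPv6NDOptRDNSS):
        opt = pkt[ICMPv6NDOptRDNSS]
        if opt.len % 2 == 0 or opt.len < 3:
            print(f"Suspicious RA from {pkt[IPv6].src}: RDNSS Length = {opt.len}")


# Sniff ICMPv6 traffic on the default interface.
sniff(filter="icmp6", prn=check_ra)
```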

Buffer overflows can lead to shellcode being executed by the target computer. That shellcode could then send maliciously crafted ICMPv6 data to adjacent unpatched computers within the network, turning this into wormable code. This can be prevented by applying the latest patch from Microsoft, disabling IPv6, or disabling the RDNSS feature for IPv6. Even if you think you are not proactively using IPv6 in your environment, it is often turned on automatically and stays that way until it is turned off.

ZeroLogon Required


Secura's Tom Tervoort recently revealed the details of why you should have zero tolerance when it comes to patching ZeroLogon, available in this white paper. There is also a proof-of-concept (POC) exploit now available on GitHub. The vulnerability takes advantage of what Secura's summary refers to as "a flaw in a cryptographic authentication scheme used by the Netlogon Remote Protocol."

So what does this mean and why is it important? While the vulnerability was disclosed previously and subsequently patched by Microsoft, the release of the POC on September 11th means the attack is now easier to carry out. It requires less skill, and the vulnerability's risk increases because of the low complexity of the attack. It was already rated a 10.0 on a scale from 1 (lowest priority) to 10 (highest priority). This type of attack can give threat actors access to the computer that controls all the computers in a Windows domain (the domain controller), resulting in the compromise of all associated accounts.
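Per Secura's write-up, the cryptographic flaw comes down to Netlogon's use of AES-CFB8 with an all-zero initialization vector: for roughly 1 in 256 random keys, an all-zero plaintext encrypts to an all-zero ciphertext, and that is the property the exploit brute-forces against the domain controller. A small sketch (assuming the pycryptodome package) demonstrates the statistic without touching Netlogon at all.

```python
# Demonstrate the AES-CFB8 zero-IV property behind ZeroLogon: roughly 1 in 256
# random keys maps an all-zero plaintext to an all-zero ciphertext.
import os
from Crypto.Cipher import AES  # pycryptodome

TRIALS = 50_000
hits = 0
for _ in range(TRIALS):
    key = os.urandom(16)
    cipher = AES.new(key, AES.MODE_CFB, iv=bytes(16), segment_size=8)  # CFB-8, zero IV
    if cipher.encrypt(bytes(8)) == bytes(8):
        hits += 1

print(f"All-zero ciphertexts: {hits}/{TRIALS} (~{hits / TRIALS:.3%}, expected ~1/256 = 0.391%)")
```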

This isn't the first disclosure of a bug in Netlogon by Tervoort. Much like previous SMB, Intel, RDP, Citrix, or other vulnerabilities, there has been a progression over time to dig around a little more and find new problems with the same technology. Hopefully the evolution of DevSecOps can help with its "Shift Left" mentality to secure applications and protocols during the development phases. These problems may be much cheaper to fix in the beginning, even if it does result in companies shelling out more money for software in the long run.

The “R” Word

The very definition of ransomware is misleading. The use of ransomware is not necessarily about relieving an organization of money; it is often just a tool for leveraging a position in a complicated game of cat and mouse. Ransomware has made its way through government institutions and is back to declaring unfathomable bounties as it debilitates private industry. Prevention is favored over the cure in this case, and it is often overlooked due to the shortsightedness of those in charge of budgets.

There is very little to be done during a hostage situation when your data is being held captive. People will spend much more than annual IT budgets to recover data they believe is gone. If you are facing an enemy that is already demanding money from you, it is probably already too late. Not all malware results in a ransom as seen by the ‘Meow’ attack.

BleepingComputer.com

With the introduction of Lockheed Martin's Cyber Kill Chain, a group published the "taxonomy of crypto-ransomware features" that illustrates the subversion techniques for avoiding this pitfall. The scholarly article is freely available here. This focused research pertains to personal computing devices, but similarities can be drawn to begin talks on future cybersecurity taxonomies relating to devices such as those found in mobile or IoT. Interestingly, this group lists timing-based evasion techniques as one of the most common. This may indicate that stricter control policies based on behavioral characteristics of user logons and computer services may prove effective when combined with detection and automation. The stigma around automation is still present for early adopters, though, because of the dynamic environments present in computing.
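As a toy example of what such a behavioral control could look like, the sketch below flags logons that fall outside a user's usual hours; every user, baseline, and timestamp in it is made up purely for illustration.

```python
# Toy behavioral check: flag logons outside each account's usual window.
from datetime import datetime

# Hypothetical per-account baselines, presumably learned from historical logs.
baseline_hours = {"adam": range(7, 19), "svc_backup": range(1, 4)}


def is_anomalous(user: str, timestamp: str) -> bool:
    hour = datetime.fromisoformat(timestamp).hour
    return hour not in baseline_hours.get(user, range(0, 24))


events = [("adam", "2020-08-14T03:12:00"), ("svc_backup", "2020-08-14T02:30:00")]
for user, ts in events:
    if is_anomalous(user, ts):
        print(f"Review logon: {user} at {ts} is outside the usual window")
```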

Lockheed-Taxonomy of Crypto-Ransomware

It is important to know how this taxonomy relates to real-world application and why ransomware is so prevalent. While security controls are very important, the fact remains that social engineering, especially phishing, has proven time and again that humans are the weakest point of the architecture. Susan Bradley covered this in her 2016 SANS paper titled "Ransomware," which also provides analysis and remediation techniques, taking a general approach with current methodologies to recover from, or even prevent, an incident. With the taxonomy building a shell or framework, and the paper supplying actionable steps, workplaces can begin to comfortably approach this topic instead of avoiding the conversation in the hope that silence will protect them.

Did Intel Just Get the Axe?

Link to Paper

Intel could probably start causing fires with their processors and still be the number one provider of silicon in the world. They are not likely to find themselves filing for bankruptcy because a research team has continued to develop an exploit disclosed in January. While the mitigations required to keep using these processors may reduce chipset features, Intel has provided a superior product for a significant duration. Cancel culture should not creep into decisions based on logic. I have reached out to these researchers about a possible interview.

Link to ZECOPS 3 Part Write Up

With the development of SMBGhost and SMBleed attacking the vector that is SMB compression in Windows, the CacheOut and SGAxe team has continued the trend of maintaining and growing a documented vulnerability with both marketing savvy and technical aptitude. It is apparent the CVE chain will likely give way to the gamification of vulnerability disclosure. That is not to say CVEs will no longer be used, but that the impact of a vulnerability disclosure may give precedence to those able to market their wares accordingly.

CVEdetails.com

Does anyone find it strange that VMware has not had any vulnerabilities published in what looks like six months? I was reviewing some of the documentation and there appears to be a configuration for an NFS share that seems a little sub-par. I know, misconfigurations are different from vulnerabilities. That being said, for those of you misconfiguring your NFS shares by granting read/write access to a bare IP address, I can assure you that an attacker who claims that IP address and uses your NFS share's directory to compromise your VMs and datastores would have a severe impact, especially if the compromise lasts longer than your incremental backup or snapshot window.
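To make the point concrete, here is a rough audit sketch for that kind of misconfiguration: it flags /etc/exports entries that hand read/write access to a bare IP address, since anything able to claim that address inherits the access. The parsing assumes the standard exports(5) layout and nothing VMware-specific.

```python
# Flag NFS exports that grant rw access to a bare IP address.
import ipaddress


def audit_exports(path="/etc/exports"):
    with open(path) as f:
        for raw in f:
            line = raw.split("#")[0].strip()
            if not line:
                continue
            share, *clients = line.split()
            for client in clients:
                host, _, opts = client.partition("(")
                options = opts.rstrip(")").split(",")
                try:
                    ipaddress.ip_address(host)  # only match bare IP host specs
                except ValueError:
                    continue
                if "rw" in options:
                    print(f"Review: {share} exported rw to bare IP {host} ({options})")


if __name__ == "__main__":
    audit_exports()
```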

Verizon's 2020 Data Breach Investigations Report

2020 DBIR

While it comes as no surprise that phishing attempts are going unreported in the Educational Services section of the DBIR, the disproportionate number of credential-stuffing attempts indicates that this sector is behind the times on enforcing security best practices for AAA policies. An alarming increase in ransomware-related malware attacks might point to either a weakness in data storage redundancy or a willingness to shell out the dough required to unlock files.

This last week, Verizon released its annual Data Breach Investigations Report for those who are interested. With a statistical analysis of trends in 16 different industries, it is evident that Manufacturing still holds the top spot for Cyber-Espionage. Given the historical significance within the intelligence realm, misinformation campaigns filled with tactfully engineered and flawed processes may prove fruitful in this arena. It is notable that this year's numbers have decreased for this category.

Attack paths in incidents (p. 31)

While the portrayal of masterminds in hacking movies makes for great films, the complexity of the studied attacks does not vary by a great magnitude. A large majority of the security incidents remained at or below 7 steps. This, coupled with the increase in DDoS and web application attacks, might be indicative of unpatched systems. While it may be difficult to correlate the use of standard container images and readily available orchestration systems, the burden of configuration still lies on product owners within organizations instead of providers of resources. There must be an urgency to change how default applications and containers are deployed, coupled with a standardized, timely update methodology, if organizations want to change these annual traditions.

Connection attempts by port (Figure 22)

With honeypots picking up similar patterns for Telnet and SSH, it is clear that there is still a reason for people to scan these ports. The use of default ports for internet-facing traffic should only happen when legacy software requires it, and probably not at all. There are about 65,000 other reasons not to be using these, if you know what I mean.

Overall, this report was very informative, and there is much more in it than what was covered in this short blog. The speculation found within this writing is just that, speculation. It is neither right nor wrong, but an estimation of a valid possibility that might fill the gaps in the solid data as presented. There may be further analysis with a more academic approach coming; this was just for the shell of it.

Setting the T.R.A.P.

https://www.qsl.net/2e0waw/gintrap/150517-run-traps-hi.jpg

Sometimes it takes a cybersecurity incident for a company to start moving resources into securing information within the organization. Such incidents can be handled with proven incident response methodologies similar to the PICERL model as documented by Patrick Kral. Ultimately, there will be iterations of process improvement that help to shore up the security policies for the organization. Addressing the middle ground between incident handling and a mature security program can provide a stop-gap using a method called T.R.A.P.

T.R.A.P. is a simple list of steps that immature and mature cybersecurity programs alike can use to take up slack that may be present during transitional periods. Triage and Resolution, Assessment, and Process Improvement make up the proposed methodology. It should be noted that this is a generalized approach to providing a structured process for organizations that may be looking to move past acute symptom management and into a more mature security framework. By keeping a simple approach in mind, stakeholders and operators can work from within a conducive atmosphere.

Triage and Resolution depend on the ability of a team to work on immediate and emergent threats to information security. The previously mentioned PICERL model, as outlined in "The Incident Handler's Handbook," is an industry standard for handling incidents that arise and should be considered the authority for information protection.

The Assessment phase is one in which the team can explore luxuries such as risk analysis and the quantification/qualification of threats as they relate to the vulnerabilities that assets face. Depending on the maturity of the cybersecurity program, this risk analysis can get very complex. Threat modeling may be introduced as said program develops.
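As one concrete example of that quantification, the classic annualized loss expectancy calculation (ALE = SLE x ARO) is a simple starting point; the assets, dollar values, and rates below are made-up placeholders rather than anything tied to a real program.

```python
# Classic quantitative risk analysis: ALE = SLE (cost per incident) x ARO (incidents/year).
assets = [
    # (asset, single_loss_expectancy in dollars, annual_rate_of_occurrence)
    ("File server",        40_000, 0.5),
    ("Customer database", 250_000, 0.1),
    ("Public web portal",  15_000, 2.0),
]

for name, sle, aro in sorted(assets, key=lambda a: a[1] * a[2], reverse=True):
    ale = sle * aro
    print(f"{name:18} SLE=${sle:>9,.0f}  ARO={aro:>4}  ALE=${ale:>9,.0f}")
```

Ranking assets by ALE gives budget holders a defensible order in which to fund mitigations.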

The ultimate goal of the T.R.A.P. method is Process Improvement. This is not to say that the entire methodology is complete after a single iteration. Instead, this phase allows for the creation of policies and modifications in the form of risk mitigation. The continual improvement of processes can and should be done with project management methodologies. Care should be taken to assign the proper amount of resources to this phase, as cost and scope creep might derail improvements.

When applied as a stop-gap, or as a tool for communicating to upper management, the T.R.A.P. methodology can be as complex as the situation calls for. The simplicity of a methodology or process can often be overlooked in favor of feature-rich solutions. Attempting to cater to the middle ground with this solution should help to ensure its success.