When I hear this phrase as a cybersecurity professional, I tend to think of misconfigured permissions and unpatched software running as a system account. As a penetration tester, you may think of terms like foothold and lateral movement. Most recently, these two words have taken on another meaning for me, as the holiday season is in full swing.
Working with others while doing something you enjoy is a privilege in itself. What you choose to do with that privilege is reflective of the effort you put forth. There are probably not many successful people who accomplished great things entirely by themselves. Currently, I have the privilege of being surrounded by people who are talented, smart, and caring.
With that being said, I feel an obligation to constantly grow and improve my personal standards. I have been on the other side of this scenario as well, working in an environment where you feel like you're walking on eggshells. If you find yourself in that situation, exercise your privilege escalation abilities and find an environment that suits you.
PowerShell is a cross-platform shell and scripting language. It can be used for a wide variety of tasks, and the documentation and community support for it are extensive. You can begin learning it by diving headfirst into the deep end, but you will get more use out of it if you understand a few concepts first.
The first thing you should familiarize yourself with is the command structure. PowerShell commands follow a Verb-Noun pattern that makes them easier to discover and remember. For instance, if you are trying to figure out the name of a command, you can list all of the commands available to you with "Get-Command".
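As a quick sketch of that Verb-Noun idea (the -Verb parameter is a documented Get-Command option):

```powershell
# List every command available in the current session (the output is long)
Get-Command

# Because of the Verb-Noun pattern, you can filter by verb directly
Get-Command -Verb Get    # only commands whose name begins with "Get-"
```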
Using these verbs, you can sort through the Get-Command output using the pipe character: "|". The pipe feeds the output of one command into the next command. So if you issue a command such as "Get-Command | findstr Get", PowerShell applies "findstr" (find string) to the results of Get-Command and shows you only the commands containing "Get". Note that findstr is case sensitive by default, and there are more ways to manipulate these outputs.
Since we are talking about manipulating the output of commands, now is a good time to cover some other commands that are useful with piping. After a pipe, you can use "more" (or "less", where available) to page through the data, e.g., "Get-Command | more" or "Get-Command | less". You can even combine these last two ideas: "Get-Command | findstr Get | more".
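Putting the piping ideas together (Select-String is the PowerShell-native counterpart to findstr, shown here as an aside):

```powershell
# Filter the command list with findstr (case sensitive by default)
Get-Command | findstr Get

# PowerShell-native equivalent; case-insensitive unless -CaseSensitive is given
Get-Command | Select-String Get

# Chain a filter with a pager to read the result one screen at a time
Get-Command | findstr Get | more
```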
Sometimes you don’t want to see the output at all. Instead, you can redirect it using “>” and “>>”. Be careful when using these because they look similar and are easy to confuse. The “>” symbol redirects the success stream (a command’s normal output) to a file in the directory of your choosing, overwriting the file if it already exists. If no directory is given, the file is created in the directory you are currently in, like so: “ls > .\directory.txt”. This creates a file in your current directory containing a list of the files from your current directory.
If you want to add the files from another directory to your list, you can do so with “>>”, which appends instead of overwriting. If you are trying to append a listing of the parent directory to your previous command, you could use: “ls ..\ >> .\directory.txt”. Here you will find that the parent directory’s listing has been added to the end of directory.txt. The “cat” command (an alias for Get-Content) can then be used to display the file. If you only want to see the end of the file, you could issue: “cat .\directory.txt | tail” on systems where tail is available. To help you remember tail (not “tails”), remember you are only using one cat.
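A minimal sketch of the redirection steps above (Get-Content with -Tail is the portable PowerShell way to view the end of a file, since tail itself is only present on Unix-like systems):

```powershell
# Overwrite (or create) directory.txt with the current directory's listing
ls > .\directory.txt

# Append the parent directory's listing to the same file
ls ..\ >> .\directory.txt

# Display the whole file, then just its last 10 lines
cat .\directory.txt
Get-Content .\directory.txt -Tail 10
```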
These commands I have gone over are not just for PowerShell. Well, Get-Command might be, but you will find piping and redirection used across many different shells. Understanding how to control where a command's output goes is part of streamlining the process. I hope this helps you on your PowerShell (or any other shell) journey.
Management of risk for federal compliance is intrinsically linked to the National Institute of Standards and Technology (NIST) Special Publications. Even if you are not mandated to follow these guidelines, they do provide a starting point from which you can find structure for advancing the maturity of your cybersecurity program. These publications take time to go through, but let’s cover the use of four that pertain to risk management. This is an introduction to NIST publications, not an exhaustive description of all that are involved in the process.
These are laid out in numerical order, not necessarily by significance. NIST SP 800-30 Rev 1 (together with SP 800-39) provides guidance on risk assessments for federal information systems. There are a few key takeaways here, including the framing of the risk assessment. In particular, the Generic Risk Model with Key Risk Factors (Figure 1), shown below, should be understood, as it is a point of focus for the conversation moving forward.
Moving on to the next special publication, we have NIST SP 800-37 Rev 2, which provides the actual framework for managing risk in federal information systems, as well as guidance for the Federal Information Security Modernization Act of 2014 (FISMA) and the Privacy Act of 1974 (PRIVACT). The Risk Management Framework embraces the idea of incremental improvement, demonstrating the need for a continual process as depicted below (Figure 2). This publication references numerous other publications throughout the industry, but for the sake of brevity, we will cover two more in this article.
In numerical order, this leads us to NIST SP 800-53 Rev 5. This particular publication is used to implement controls in correlation with the RMF from the previous paragraph and in concert with the classification from the next one. Security controls tend to be the meat of the conversation for mitigating risk and can even be mapped to parts of incident response such as preparation, identification, containment, eradication, recovery, and lessons learned (the PICERL format is more SANS than NIST, but they line up pretty well). These controls will undoubtedly incorporate more characteristics from the CIS 20 in the future, as they are already leaning that way. (The CIS CSC 20 used to be the SANS 20 until it was rebranded in 2015.) My favorite part about these controls is the listing of the honeypot and the honeyclient. The honeyclient translates into a control that is a computer used to seek out malicious activity on the internet (Figure 3). That’s right, when Bob down the hall is researching elite hacks, he is really just implementing a security control for the company!
NIST SP 800-60 is the Guide for Mapping Types of Information and Information Systems to Security Categories. The Risk Management Framework is used to gauge the impact and severity of the risks an organization may face, and to help manage that risk through the use of controls. The need to classify systems is met by mapping data and systems to security categories. This qualitative analysis is a precursor and prerequisite for the quantitative practice outlined more generally in this article.
This is just a quick overview of some of the NIST publications that you should be using for federal information systems and compliance. It is not comprehensive, but it offers insight into the correlation between the documents and the references they provide. There are those in the private sector who will point to other frameworks that are just as good, if not better. The idea here is that if you do not need to comply with federal standards, you can still use the best parts of all frameworks to create your own security program. This should be done with great scrutiny, however, as you may find that certain frameworks require a structure that is not compatible with what you are trying to implement. While you may be able to swap a Corvette engine into the shell of a sub-compact car, you will probably need to change the transmission at a minimum. Frankensteining together a security program can lead to dire consequences.
Phishing campaigns are still going strong as a method to gain access to systems and networks. Specially crafted emails can be sent to unsuspecting users rendering defenses useless at the click of a mouse. While there are many different controls to help combat the diverse attacks brought on by phishing, end user education is a necessary piece of this puzzle.
Anti-phishing campaigns are primed with materials before and after the education of the end user. Berkeley offers some free tools that help with the process, found at the links below. End user education is often followed by testing through targeted attacks sent by the cybersecurity department from external email addresses. Those not passing the tests are then required to go through the training again.
The reason phishing is effective is typically social engineering, according to a SANS paper from 2004. The reason phishing is still effective today is probably social engineering as well. While technology has changed in the last 15 years, people are still susceptible to the confidence-building hoaxes that drive these attacks.
The training process is just one part of an entire campaign. It should be done in conjunction with adding headers to external emails, filtering file types from inbound email, and stripping HTML from email altogether. There are also services and hardware that can be purchased, among other controls that can prove effective.
Dealing with this type of attack can be devastating to small and medium-sized businesses. Further controls to mitigate losses include changes in how the business operates when dealing with wire transfers. Finding the equilibrium between security and the way you do business can take time and guidance.
This year we have seen numerous issues resulting from human error. Misconfigured applications and services have led to numerous data breaches. As with most emerging technologies, Docker containers and Amazon S3 buckets have proven to have a learning curve. In the move to embrace cloud-based services, organizations have jumped at the opportunity to be on the leading edge. The recent disclosure of the exposure of 93,000,000 patient files in California is an indicator of how abruptly things can take a turn for the worse (Barth, 2019).
While the HIPAA Security Rule (NIST SP 800-66 Revision 1) is labeled as “Introductory,” NIST SP 800-144 (Guidelines on Security and Privacy in Public Cloud Computing) spells it out in a direct fashion. “Reducing cost and increasing efficiency are primary motivations for moving towards a public cloud, but relinquishing responsibility for security should not be.”
The burden of securing these new technologies lies with those in charge of securing the data. The configuration of applications and services has been brought to light this year, and the management of security services will truly benefit in the years to follow. This demonstrated need to understand security risk follows directly from the likelihood of misconfiguration and the severity of the breaches it leads to.