Security News

Many methods have failed in the effort to secure digital communications, but one has remained relatively reliable: Faraday cages. These metallic enclosures block incoming and outgoing electromagnetic signals, and they have long been used by those hoping to conceal their wireless communications. You may remember Chelsea Manning used a makeshift Faraday cage last year when she asked New York Times reporters to dump their phones in a microwave to prevent prying ears from listening in.

Despite their often unorthodox appearance, Faraday cages are widely considered an effective, if extreme, additional step in securing communications. While many have put the technology to personal use (a bar owner in the UK even built his own Faraday cage to keep drinkers off their phones), larger institutions like banks, government agencies, and corporations turn to Faraday cages to house some of their most sensitive data. These systems also vary in size: individuals may use small Faraday cages or Faraday bags, while larger organizations may build entire Faraday conference rooms.

It appears, however, that these metal mesh cages may have a chink in their armor.

A new attack method, laid out in two recently released papers from researchers at the Cyber Security Research Center at Ben-Gurion University in Israel, shows how data could potentially be compromised even when encased in a Faraday cage.

The extraction method, dubbed MAGNETO, works by infecting an “air-gapped” device—a computer that isn’t connected to the internet—with specialized malware called ODINI that modulates the device’s magnetic fields. From there, the malware can load the CPU with calculations, forcing its magnetic emissions to increase. A local smartphone (located at most 12 to 15 centimeters from the computer) can then receive the covert signals carried by those magnetic waves and decode encryption keys, credential tokens, passwords, and other sensitive information.
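The underlying mechanism is simple enough to sketch. The toy Python snippet below is not the researchers’ actual malware; it only illustrates the idea, assuming simple on-off keying: a busy CPU produces stronger magnetic emissions than an idle one, so alternating load and idle periods can encode bits for a nearby magnetometer to sample.

    import time

    BIT_PERIOD = 0.5  # seconds per bit; an arbitrary rate chosen for illustration

    def transmit(bits):
        # Encode bits as CPU load: busy-loop for '1', stay idle for '0'.
        # Higher load means stronger magnetic emissions, which a nearby
        # magnetometer could sample at the same bit rate to recover the data.
        for bit in bits:
            deadline = time.time() + BIT_PERIOD
            if bit == "1":
                while time.time() < deadline:  # burn cycles: stronger field
                    pass
            else:
                time.sleep(BIT_PERIOD)         # stay idle: weaker field

    transmit("10110010")  # a hypothetical 8-bit payload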

Mordechai Guri, who heads research and development at the Cyber Security Research Center, said he and his fellow researchers wanted to show that Faraday cages are not foolproof.

“Faraday cages are known for years as good security for electromagnetic covert channels,” Guri told Motherboard in an email. “Here we want to show that they are not hermetic and can be bypassed by a motivated attacker.”

According to the research, these extraction techniques could still work even if phones in secure locations are set to airplane mode. Since a phone’s magnetic sensor is not considered a communication interface, it remains active even in airplane mode.

The foundations for the researchers’ breakthrough were built on previous public examples of offline computer vulnerabilities. Last July, WikiLeaks released documents allegedly demonstrating how the CIA used malware to infect air-gapped machines. The tool suite, called “Brutal Kangaroo,” allegedly allowed CIA attackers to infiltrate closed networks using a compromised USB flash drive. The researchers at the Cyber Security Research Center highlighted “Brutal Kangaroo” in their paper as a real-life example of the fallibility of air-gapped computers.

The papers point out that air-gapped computer networks are used by banks to store confidential information, as well as by the military and defense sectors. Guri said that institutions hoping to address these security issues may face some difficulty.

“In [the] case of the Magnetic covert channel, its fairly challenging, since the computer must be shielded with a special ferromagnetic shield.” Guri said. “The practical countermeasures is the ‘zoning’ approach, where you define a perimeter in which not [every] receiver/smartphone allowed in.”

 

 

The information contained in this website is for general information purposes only. The information is provided by Motherboard Vice and while we endeavour to keep the information up to date and correct, we make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such information is therefore strictly at your own risk.
Through this website, you are able to link to other websites which are not under the control of CSIRT-CY. We have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or an endorsement of the views expressed within them.
Every effort is made to keep the website up and running smoothly. However, CSIRT-CY takes no responsibility for, and will not be liable for, the website being temporarily unavailable due to technical issues beyond our control.

Choosing the right threat intelligence solution is difficult when so many choices already exist in a growing market.

The needs of each organization vary, meaning the best solution for one group is not necessarily ideal for another, and selecting the best solution for your needs is never as simple as finding the most expensive or fully featured product.

Although a large organization with a complex network may leave itself vulnerable to attack by choosing an insufficiently robust solution, a smaller organization might also harm itself by choosing a powerful solution that produces threat intelligence it lacks the time or capacity to make sense of and act upon.

Selecting the best threat intelligence solution for you is not a hopeless task, however. In its recent Market Guide, the technology research company Gartner lists six capabilities — defining how a vendor collects, processes, and analyzes raw information — that provide an important benchmark for choosing a solution that best fits your needs. According to Gartner, the quality of an intelligence product is generally linked to its ability to produce intelligence in line with the intelligence lifecycle:

  • Whether the vendor develops content based only on logs from current network activity or also gathers information by infiltrating and communicating with threat actor groups.
  • Whether the vendor gathers information only from open, public sources or also includes closed sources.
  • Whether the vendor gathers information only from English-language sources or includes and interprets information from non-English sources as well.
  • Whether the vendor analyzes data, correlates disparate data points, and draws informed conclusions or only provides a series of individual data points without analysis.
  • Whether the vendor is able to create personalized content that addresses the risks and threats specific to your organization.
  • Whether the vendor distributes content in a form that your organization can consume.

Let’s look at each of the capabilities in a little more detail.

1. Gathering content from both in and out of your network.

At a minimum, every organization should know what is going on in their own backyard. Getting a good idea of how your network normally looks will make unusual activity stand out more obviously. Further, keeping track of your internal network activity also helps monitor for malicious insiders — people within your organization who, for whatever reason, may seek to compromise your network or otherwise cause harm.

Some security solutions gather data and event logs from within your network to provide a baseline of what normal looks like, but limiting the dataset to this space means that you will never see an attack from the outside until it is already underway. More comprehensive services gather data from outside of your network, looking for indications of vulnerabilities or an impending attack in places like forums on the dark web. Identifying attacks before they happen and taking preventative steps can make all the difference in mounting a timely and effective response.

2. Gathering content from open and closed sources.

The section of the Internet that we can access through search engines like Google is vast — by some estimates, there are at least 4.56 billion pages indexed by search engines. Even so, this “public” part of the Internet only makes up about four percent of all the data online. The rest is locked away in the portions of the Internet called the deep web and the dark web, which make up about 90 percent and 6 percent of all online data, respectively.

The deep web refers to all the pages that are not indexed by search engines because they can only be accessed through secure logins or paywalls, comprising information like government and private company databases, personal information like medical and financial records, and scientific and academic reports. The dark web includes websites that are only accessible through certain browsers that provide encryption and anonymity; many of those websites offer marketplaces for illicit goods and services, but they also provide spaces for private and anonymous communications and exchanges of all kinds.

Exploits and vulnerabilities are frequently traded on forums on the dark web in particular, but they are also discussed in many spaces on the deep web by parties that wish to keep those discussions private. Threat intelligence vendors will sometimes cooperate and share their data in order to build more complete datasets than any individual vendor could gather and process, and this sort of cooperation simply won’t take place on the surface web. A vendor that gathers data from closed sources will have access to a far greater volume of information — giving a more complete picture, but only if it has the resources to sort through it all.

3. Gathering content from foreign-language sources.

It’s right there in the name: the World Wide Web does not stop at national borders or divide itself based on the language of its users. Many of the largest and most devastating cyberattacks in recent times have come from foreign sources, meaning threat intelligence vendors that limit their datasets to English-language sources will potentially leave huge gaps in their analysis and prediction.

The NotPetya ransomware attack in 2017, for example, was traced to a source in Ukraine but eventually infected hundreds of thousands of computers worldwide in less than a week. Some of the largest cyberattacks are state-sponsored operations attacking foreign powers — like the Equifax hack this year, which some evidence suggests may have been undertaken by Chinese intelligence agents.

Determining whether your organization needs a threat intelligence solution that gathers content from foreign-language sources largely depends on your size. Smaller organizations whose customers are limited to one country or that do not have a significant enough market presence to attract unwanted attention from foreign parties may simply find it unnecessary to gather data from foreign-language sources.

4. Providing informed analysis and prediction.

Threat intelligence, as defined in the Gartner Market Guide, is evidence-based knowledge derived from a process, rather than a series of individual data points. Vendors that only provide data points without any analysis are not offering intelligence, in the proper sense. Even within the scope of threat intelligence properly conceived, however, there remains a wide range of offerings based on a vendor’s ability to not only gather data from the right sources and catch indicators of compromise, but also provide context, implications, and proactive suggestions.

Threat intelligence comes through two channels: machine-readable content, and content that people can understand. Machine-readable content generally includes highly automated real-time monitoring and notifications, enabling quick responses to detected threats. Because they are mostly automated, threat intelligence solutions that focus on producing machine-readable content tend to be more affordable. Content meant for human consumption will go a few steps further, providing a narrative analysis that may provide context, like the perceived intent of threat actors, and even make predictions about future threats or give suggestions. This takes skilled personnel on both sides — for the vendor to produce this kind of content, and for the consumer to apply it with wisdom.

5. Creating personalized content.

Because most organizations use software and systems that are publicly available, truly personalized content is not always the key to producing effective threat intelligence. Many threats target vulnerabilities in widely distributed software rather than focusing on attacking a specific organization. In those cases, a threat intelligence solution that gathers data from sources that are relevant but not unique to your organization is often enough.

The Gartner Market Guide notes that some organizations will benefit from a more bespoke solution, including brand monitoring on social media and in the closed parts of the internet. Watching for mentions of specific companies, or even specific people within a company, can help organizations better predict whether they are being phished, or recognize false flag schemes, domain fraud, masquerading, social media amplification, or activist schemes. This kind of custom solution will provide more comprehensive threat intelligence, but it will cost more and may be unnecessary for smaller organizations.

6. Distributing content that you can understand.

According to Gartner’s Market Guide, a number of open standards have evolved for machine-readable threat intelligence, and threat intelligence solutions that adhere to these standards rather than proprietary ones will generally be more successful. Using systems that have the ability to both understand and export threat intelligence content will lead to larger and more accurate datasets, especially as more groups begin to share their data with each other.
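To make “machine-readable” concrete, here is a hedged sketch using the open-source stix2 Python package to build a STIX 2.1 indicator; the name and the all-zero hash are placeholders, and the package’s defaults fill in required fields such as timestamps.

    from stix2 import Indicator  # pip install stix2

    # A minimal STIX 2.1 indicator for a placeholder SHA-256 hash. Real
    # indicators would come from a vendor feed or your own analysis.
    indicator = Indicator(
        name="Example malicious file (hypothetical)",
        description="Illustrative indicator in an open, machine-readable format.",
        pattern="[file:hashes.'SHA-256' = "
                "'0000000000000000000000000000000000000000000000000000000000000000']",
        pattern_type="stix",
    )

    # Serialized objects like this can be exchanged over standards such as TAXII.
    print(indicator.serialize(pretty=True))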

In a less literal sense, some vendors may produce threat intelligence in a form that your organization simply does not have the capacity to effectively apply. As mentioned before, threat intelligence solutions that produce detailed analysis geared toward human consumption are not necessarily the right solution for every organization (and not just because of price) if you do not have the manpower or know-how to act upon the intelligence.

Choose the Right Solution for You

Customers searching for the right threat intelligence solution have a wide variety of goals they want their solution to meet. They may want to understand the identity, methods, and motives of attackers and better defend against future attacks; they may want to understand a previous incident in greater detail; they may want to develop case studies to use for training exercises; they may want to have advance warning of future attacks against a shared IT infrastructure. Each organization’s needs and capacities are unique. Determining your needs and capacities first, and then evaluating the threat intelligence solutions on the market according to these six qualities, will help you easily find the perfect solution.

 

The information contained in this website is for general information purposes only. The information is gathered from Recorded Future, and while we endeavour to keep the information up to date and correct, we make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such information is therefore strictly at your own risk.
Through this website, you are able to link to other websites which are not under the control of CSIRT-CY. We have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or an endorsement of the views expressed within them.
Every effort is made to keep the website up and running smoothly. However, CSIRT-CY takes no responsibility for, and will not be liable for, the website being temporarily unavailable due to technical issues beyond our control.

Orangeworm was first spotted in January 2015 and appears to be focused on the healthcare industry: 40% of its targets belong to this sector. The hackers also targeted the IT (15%), manufacturing (15%), logistics (8%), and agriculture (8%) industries, but in all cases the victims are part of the supply chain for healthcare entities.

Most of the victims are located in the United States (17%), followed by Saudi Arabia and India, but Orangeworm has hit organizations in many other countries, including the Philippines, Hungary, the United Kingdom, Turkey, Germany, Poland, Hong Kong, Sweden, Canada, and France.

Orangeworm targeted only a small number of victims in 2016 and 2017, but the infections mostly affected large international corporations operating in several countries.

The hackers use a custom backdoor, tracked as Trojan.Kwampirs, to remotely control infected machines on compromised networks.

Initially, the backdoor is used as a reconnaissance tool; if a compromised machine contains data of interest, the backdoor “aggressively” spreads to other systems with open network shares.

 

The experts observed the attackers running a wide range of commands within the compromised systems.

The Kwampirs backdoor was discovered by Symantec on machines hosting software used for high-tech imaging devices, such as MRI and X-ray machines. It was also discovered on devices used to assist patients in completing consent forms.

Experts highlighted that the methods Kwampirs uses to propagate across the target network are particularly “noisy,” which suggests Orangeworm is not overly concerned with being discovered.

At the time of the report, the experts still had not determined the attackers’ real motivation or origin; even if they are conducting cyber espionage, there is no evidence that the operation is backed by a nation-state actor.

Experts noted that the actors behind Orangeworm do not appear to be concerned about their activities being detected.

 

The information contained in this website is for general information purposes only. The information is gathered from Security Affairs, and while we endeavour to keep the information up to date and correct, we make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such information is therefore strictly at your own risk.
Through this website, you are able to link to other websites which are not under the control of CSIRT-CY. We have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or an endorsement of the views expressed within them.
Every effort is made to keep the website up and running smoothly. However, CSIRT-CY takes no responsibility for, and will not be liable for, the website being temporarily unavailable due to technical issues beyond our control.

Denial of Service attacks have been a hazard for web sites since the earliest days of the World Wide Web.

Although the average speeds and network capacities for the earliest users of Internet service were nowhere near as high as they are today, it was still possible to generate enormous volumes of traffic and direct them at servers that were totally unprepared for the onslaught.

Today, thanks to innovative and often fast-reacting defensive measures, it is possible to mitigate most of the damage from what is now more accurately referred to as a “distributed denial of service” attack, or DDoS. The purpose of a denial of service attack is to overload a web server or other service with so much unauthorized traffic that legitimate users can’t make use of it. The distributed nature of the attack means that traffic is not directed at the target server from a single source; rather, it is coordinated across many sources, so blocking one attacking web address is insufficient to stop the attack entirely.

As these attacks have grown in sophistication and power, the measures available to combat them have advanced as well. With adequate planning and a proper understanding of the threat, many of the largest sites on the web have reached a point where they are well defended against all but the most unusually intense events.

The chances of any one site being targeted are low, but if you run a mission-critical service online, whether it is web-based or runs on its own protocol, you should at least be aware of the potential for denial of service attacks and prepare yourself and your organization to combat them. Here are some things to consider.

Know Your Traffic Patterns

There are three primary “loads” on a web or network server. Your analytics software should be able to track one of them; your network security tools should, at a minimum, track the other two. The first is volume, a measure of how many and what kind of network connections are being made to your server. By and large, this number shouldn’t deviate more than a few percentage points on any given day. If it does, your monitoring software or IT staff should be alerted and prepared to determine the cause.

The second load is CPU utilization. For a standard web server, processor utilization should rarely climb above sixty percent. While high CPU load isn’t technically a denial of service attack, when combined with a strategically organized surge of network traffic, it can create cascading effects across all your network services and degrade other devices, like failover servers and anti-virus services running elsewhere on your network.

Third is storage. A full disk can not only degrade services but also cause operating systems and other software to malfunction. On some kinds of servers, a strategically timed series of large uploads, combined with one or more other attack vectors, can not only degrade services but cut them off entirely.

The longer your server is running, the more data you will have regarding the normal ranges for all these loads. You can then set up your monitoring and analytics to alert IT staff in the event any of them move out of normal ranges.

Here are some of the most popular and effective ways to defend against and prevent distributed denial of service attacks.

1. Know If It’s Happening

Use the data provided by your monitoring and analytics. Be particularly careful to notice any deviation from your rolling 30- and 90-day patterns for network load, CPU utilization, and storage. Occasionally a slow increase in one will precede a spike in one or more of the others. Set up alerts in your monitoring and security systems to notify key personnel in the event of any anomalies. For one-off testing, you can use a speed test tool like Dotcom-Tools to spot-check website performance issues that could be related to DDoS.
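As a minimal illustration of such an alert, the Python sketch below flags any reading more than three standard deviations from a rolling baseline; the numbers are hypothetical, and production monitoring stacks offer far more sophisticated detection.

    import random
    import statistics

    def is_anomalous(history, current, threshold=3.0):
        # Flag a reading far outside the rolling baseline, where `history`
        # holds recent readings of one load metric (e.g. 30 or 90 days).
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return stdev > 0 and abs(current - mean) > threshold * stdev

    # Hypothetical 30-day history of daily connection counts.
    history = [random.gauss(10_000, 300) for _ in range(30)]

    print(is_anomalous(history, 10_200))  # ordinary day -> False
    print(is_anomalous(history, 48_000))  # sudden surge worth an alert -> True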

2. Failover and Provisioning

If your services are commercial in nature, you should have enough network capacity available at any given time to absorb a temporary traffic increase of at least 200%. This is called “provisioning,” and it is a service that most network operations centers can provide at minimal cost. Under no circumstances should your server run without a cloned backup ready to take over operations in the event the front-line machine goes offline. This is known as failover protection, and it is particularly important during a denial of service attack, especially if your network operations staff needs to hotfix or spin up new security on the fly.

3. Reinforce at the Router

While not a permanent solution to a DDoS attack, your router can buy you some time in the early phases of the build-up to an attack. Truly massive targeted attacks often take some time to reach full capacity, and those minutes are crucial: they can be the difference between getting back online quickly and having your systems down for extended periods. There are several ways your router can help. For example, setting lower timeouts on certain kinds of connections, reducing thresholds for UDP and SYN packet floods, and blocking identified remote IP ranges can buy you anywhere from ten to thirty minutes of uptime in some cases. Even that much time can often make all the difference.
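As one concrete illustration, the sketch below applies a few such rules on a Linux-based edge device via iptables, wrapped in Python to match the other examples here; the rate limits and the blocked range are placeholders to tune to your own traffic, and the commands require root.

    import subprocess

    def iptables(rule):
        # Apply one iptables rule; requires root on a Linux-based edge box.
        subprocess.run(["iptables"] + rule.split(), check=True)

    # Rate-limit inbound TCP SYNs to blunt SYN floods; drop the excess.
    iptables("-A INPUT -p tcp --syn -m limit --limit 25/second --limit-burst 50 -j ACCEPT")
    iptables("-A INPUT -p tcp --syn -j DROP")

    # Drop inbound UDP outright if your services never use it (see section 4).
    iptables("-A INPUT -p udp -j DROP")

    # Block a remote IP range identified as an attack source (placeholder range).
    iptables("-A INPUT -s 203.0.113.0/24 -j DROP")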

4. UDP Phantom Zone

Unless your servers have a very good reason for receiving or sending UDP traffic, your best option is simply to ask your upstream providers to drop the packets at their routers. Some of the most popular DDoS strategies use NTP and UDP amplification, which can overwhelm many networks with relatively minimal hardware. If your network sends all UDP traffic to the phantom zone, your servers will never see it.

5. Geographically Distributed Servers

One of the best ways to avoid a distributed attack is to have a distributed server network, according to Web Hosting Buddy. The more your system is concentrated behind a few points of failure, the more vulnerable it is. If a DDoS attack only affects a localized geographic area, your network operations can automatically distribute legitimate traffic to other servers on your network and isolate the attack before the unauthorized traffic has a chance to cause any trouble.

There are commercial companies, naturally, that can provide all these services for high reliability web sites and web services. Although most sites likely don’t need industrial strength denial of service defense, it is something to consider as your traffic grows and your network’s importance increases.

 

The information contained in this website is for general information purposes only. The information is gathered from Hackers Online Club, and while we endeavour to keep the information up to date and correct, we make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such information is therefore strictly at your own risk.
Through this website, you are able to link to other websites which are not under the control of CSIRT-CY. We have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or an endorsement of the views expressed within them.
Every effort is made to keep the website up and running smoothly. However, CSIRT-CY takes no responsibility for, and will not be liable for, the website being temporarily unavailable due to technical issues beyond our control.

Technology is pulsing all around you, and in the short amount of time that you are a guest on a network, you must try to understand its inner workings. Fortunately or unfortunately, most network and system administrators are creatures of habit. All you have to do is listen for long enough, and more often than not the network will yield some juicy findings, information security experts say.

Regardless of any discussion beforehand, a penetration test has a competitive feel on both sides. Consulting pentesters want their flag, and administrators want a clean bill of health showing they are resilient to cyber-attack; something akin to a game of flag football. The difference is that in flag football, both teams are familiar with the tools used to play the game.

It goes without saying that a pentester’s job is to simulate a legitimate threat to effectively determine your organization’s risk, but how can remediation happen without at least some familiarity?

In order to truly secure your networks, any administrator with cybersecurity duties will need not only to understand what they themselves have, but also to step into the shoes of the opposite side.

This article’s intention is to focus on the ‘why’ and not entirely the ‘how’. There are countless videos and tutorials out there explaining how to use the tools, and far more information than can be laid out in one blog post. Additionally, I acknowledge that other testers out there may have an alternate opinion on these tools.

1. Responder

This tool, in this information security expert’s opinion, sits at the absolute top of the list. When an auditor comes in and talks about “least functionality”, this is what immediately comes to mind. If you are a pentester, Responder is likely the first tool you will start running as soon as you get your Linux distro of choice connected to the network and kick off the internal penetration test. The tool works by listening for and poisoning responses from the following protocols:

  • Link-Local Multicast Name Resolution (LLMNR)
  • NetBIOS Name Service (NBT-NS)
  • Web Proxy Auto-Discovery (WPAD)

There is more to Responder, but I will only focus on these three protocols for this article.

NBT-NS is a remnant of the past, a protocol Microsoft has left enabled for legacy and compatibility reasons so that applications which relied on NetBIOS can operate over TCP/IP networks. LLMNR is a protocol designed similarly to DNS that relies on multicast and peer-to-peer communications for name resolution. It came from the Vista era, and we all know nothing good came from that time frame. You probably don’t even use either of these protocols. Attackers know this, and use it to their advantage.

WPAD, on the other hand, serves a very real and noticeable purpose on the network. Most enterprise networks use a proxy auto-config (PAC) file to control how hosts get out to the Internet, and WPAD makes distributing that file relatively easy. Machines broadcast into the network looking for a WPAD host and accept whatever PAC file they are given. This is where the poisoning happens: Responder answers those broadcasts and serves up a malicious configuration of its own.

Information security professionals are well aware that most protocols which rely on any form of broadcasting or multicasting are ripe for exploitation.
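On the defensive side, the usual mitigation is to disable LLMNR and NBT-NS wherever they are not needed (WPAD is typically handled separately, via Group Policy or an explicit DNS entry for the WPAD host). The sketch below does this on a single Windows host through Python’s standard winreg module; the registry paths shown are the commonly documented ones, but verify them in your environment, and prefer Group Policy for fleet-wide changes.

    import winreg  # standard library; Windows only, run as Administrator

    # Disable LLMNR by setting the DNS client policy value EnableMulticast = 0.
    llmnr_path = r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, llmnr_path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "EnableMulticast", 0, winreg.REG_DWORD, 0)

    # Disable NetBIOS over TCP/IP (NetbiosOptions = 2) on every interface.
    nbt_path = r"SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, nbt_path) as parent:
        for i in range(winreg.QueryInfoKey(parent)[0]):
            name = winreg.EnumKey(parent, i)
            with winreg.OpenKey(parent, name, 0, winreg.KEY_SET_VALUE) as iface:
                winreg.SetValueEx(iface, "NetbiosOptions", 0, winreg.REG_DWORD, 2)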

2. PowerShell Empire

Previously, pentesters typically relied on Command and Control (C2) infrastructure whose agent first had to reside on disk, which would naturally get uploaded to VirusTotal upon public release and be included in the next morning’s antivirus definitions. The time spent evading detection was a seemingly never-ending cat-and-mouse game.

It was as if the collective unconscious of pentesters everywhere realised that the most powerful tool at their disposal was already present on most modern workstations around the world. A framework had to be built, and the Empire team made it so.

The focus on pen-testing frameworks and attack tools has undoubtedly shifted towards PowerShell for exploitation and post-exploitation.

It means that some of the security controls you have put in place may be easily bypassed. File-less agents (including malware) can be deployed through PowerShell and exist in memory without ever touching your hard disk or requiring a connected USB device. Existing only in memory makes antivirus, whose core function is scanning the disk, significantly less effective.

When it comes to mitigation, note that the execution policy restrictions in PowerShell are trivial to bypass.

3. Hashcat with Wordlists

This combo is an absolute staple. At a high level, cracking hashes and recovering passwords is a fairly straightforward topic.

Hashcat is a GPU-focused powerhouse of a hash cracker that supports a huge variety of formats, typically used in conjunction with hashes captured by Responder. In addition to Hashcat, a USB hard drive with several gigabytes of wordlists is a must. On every pentest the information security analysts have been on, cracking time had to be allocated carefully to maximise results and provide the most value to the client.
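A minimal sketch of that workflow, with placeholder file names: mode 5600 is Hashcat’s NetNTLMv2 format, which is what Responder typically captures, and -a 0 selects a straight wordlist attack.

    import subprocess

    hashes = "responder_ntlmv2.txt"                # hypothetical Responder capture
    wordlist = "/usr/share/wordlists/rockyou.txt"  # a classic starting wordlist

    # -m 5600: NetNTLMv2 hashes; -a 0: straight (wordlist) attack mode.
    # Note: hashcat exits non-zero when the wordlist is exhausted without
    # cracking everything, so we deliberately don't treat that as an error.
    subprocess.run(["hashcat", "-m", "5600", "-a", "0", hashes, wordlist])

    # Print any plaintexts recovered into the potfile.
    subprocess.run(["hashcat", "-m", "5600", "--show", hashes])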

Sysadmins, think about your baseline policies and configurations. Typically, it is best practice to align as closely as possible with an industry standard, such as the infamous DISA STIG. Baselines such as the DISA STIGs cover numerous operating systems and applications and contain key configurations that help you defend against offline password cracking and replay attacks, including NIST-recommended password policies, non-default authentication enhancements, and much more. DISA even does you the courtesy of providing pre-built Group Policy templates that can be imported and custom-tailored to your organisation’s needs, which cuts out much of the work of applying the settings.

4. Web Penetration Testing Tools

It is important to note that a web penetration testing tool is not the same as a vulnerability scanner.

Web-focused tools have scanning capabilities, but they focus on the application layer of a website rather than the service or protocol level. Granted, vulnerability scanners (Nessus, Nexpose, Retina, etc.) do have web application scanning capabilities, though I have found it is best to keep the two separate.

Many organisations nowadays build in-house web apps, intranet sites, and reporting systems in the form of web applications. Typically, the assumption is that since the site is internal, it does not need to be run through the security code review process, and it gets published for all personnel to see and use.

The surface area of most websites leaves a lot of room to find something especially compromising. Some of the major issues are:

  • Stored Cross-site Scripting (XSS).
  • SQL Injection.
  • Authentication bypass.
  • Directory traversal abuse.
  • Unrestricted file upload.

If you administer an organisation that builds or maintains any internal web applications, think about whether that code is being regularly reviewed. Code reuse becomes an issue when source code is imported from unknown origins, and any security flaws or potentially malicious functions come along with it. Furthermore, the “Always Be Shipping” methodology which has overtaken software development of late puts all the emphasis on shipping functional code, despite the fact that flaws may exist.

Acquaint yourself with OWASP, whose entire focus is on secure application development. Get familiar with the development team’s Software Development Lifecycle (SDLC) and see if security testing is a part of it. OWASP has some tips to help you make recommendations.

Understand the two methodologies for testing applications, including:

  • Static Application Security Testing (SAST). The application’s source code is available for analysis.
  • Dynamic Application Security Testing (DAST). Analyses the application while in an operational state.

Additionally, you will want to take the time to consider your web applications separately from typical vulnerability scans. Tools (open and closed source) exist for this, including Burp Suite Pro, OWASP Zed Attack Proxy (ZAP), Acunetix, and Trustwave offerings, with scanning functionality that will crawl and simulate attacks against your web applications. Scan your web apps at least quarterly.
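Such scans can also be driven programmatically. The hedged sketch below uses the python-owasp-zap-v2.4 client against a hypothetical internal app, assuming a local ZAP instance is already running on 127.0.0.1:8080 with the given API key.

    import time
    from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

    target = "http://intranet.example.local"  # hypothetical internal web app

    zap = ZAPv2(apikey="changeme",
                proxies={"http": "http://127.0.0.1:8080",
                         "https": "http://127.0.0.1:8080"})

    # Spider first so ZAP learns the application's attack surface.
    scan_id = zap.spider.scan(target)
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(2)

    # Then actively scan, simulating attacks such as XSS and SQL injection.
    scan_id = zap.ascan.scan(target)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    for alert in zap.core.alerts(baseurl=target):
        print(alert["risk"], "-", alert["alert"], "@", alert["url"])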

5. Arpspoof and Wireshark

Arpspoof is a tool that allows you to insert yourself between a target and its gateway, and Wireshark allows you to capture packets from an interface for analysis. You redirect the traffic from an arbitrary target, such as an employee’s workstation during a pentest, and snoop on it.

Likely the first theoretical attack presented to those entering cybersecurity, the infamous Man-in-the-Middle (MitM) attack is still effective on modern networks, information security researchers say. Considering that most of the world still leans on IPv4 for internal networking, and given the way the Address Resolution Protocol (ARP) was designed, a traditional MitM attack remains quite relevant.
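One way to spot such an attack at the sensor level is to watch for ARP replies that claim the gateway’s IP address from an unexpected MAC. Below is a minimal Scapy sketch; the gateway IP and MAC are placeholders for your own network’s values.

    from scapy.all import ARP, sniff  # pip install scapy; needs root to sniff

    GATEWAY_IP = "192.168.1.1"       # placeholder: your gateway's IP
    KNOWN_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder: its legitimate MAC

    def check_arp(pkt):
        # op == 2 is an ARP reply ("is-at"); a reply claiming the gateway's
        # IP from an unknown MAC is the classic signature of arpspoof.
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:
            if pkt[ARP].psrc == GATEWAY_IP and pkt[ARP].hwsrc.lower() != KNOWN_MAC:
                print("Possible ARP spoofing: %s claims %s"
                      % (pkt[ARP].hwsrc, GATEWAY_IP))

    sniff(filter="arp", prn=check_arp, store=0)  # runs until interrupted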

According to information security researchers, many falsely assume that because communications occur inside their own networks, they are safe from being snooped on by an adversary, and therefore do not have to take the performance hit of encrypting all communications in their own subnets. Granted, your network is an enclave of sorts against the wild west of the Internet, and an attacker would first have to get into your network to stand between communications.

Now, let’s assume that a workstation is compromised by an attacker in another country using a RAT equipped with tools that allow a MitM to take place. Alternately, consider the insider threat.

The information security experts’ best defensive tactic is simple: encrypt your communications. Never assume communications inside your network are safe just because there is a gateway device separating you from the Internet.

Keep your VLAN segments carefully tailored, and protect your network from unauthenticated devices. Implementing a Network Access Control (NAC) system or 802.1X is something you may want to add to your security roadmap in the near future. Shut down unused switch ports, and think about sticky MACs if you are on a budget.

 

The information contained in this website is for general information purposes only. The information is gathered from Security Newspaper, and while we endeavour to keep the information up to date and correct, we make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such information is therefore strictly at your own risk.
Through this website, you are able to link to other websites which are not under the control of CSIRT-CY. We have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or an endorsement of the views expressed within them.
Every effort is made to keep the website up and running smoothly. However, CSIRT-CY takes no responsibility for, and will not be liable for, the website being temporarily unavailable due to technical issues beyond our control.