Security News

A massive malware outbreak that last week infected nearly half a million computers with cryptocurrency mining malware in just a few hours was caused by a backdoored version of the popular BitTorrent client MediaGet.

Dubbed Dofoil (also known as Smoke Loader), the malware was found dropping a cryptocurrency miner as its payload on infected Windows computers, mining Electroneum digital coins for the attackers using victims’ CPU cycles. The Dofoil campaign, which hit PCs in Russia, Turkey, and Ukraine on 6 March, was discovered by Microsoft’s Windows Defender research team, which blocked the attack before it could do any severe damage.

When Windows Defender researchers first detected the attack, they did not explain how the malware had reached such a massive audience in just 12 hours. After further investigation, however, Microsoft revealed that the attackers had targeted the update mechanism of the MediaGet BitTorrent software to push a trojanized version (mediaget.exe) to users’ computers.

“A signed mediaget.exe downloads an update.exe program and runs it on the machine to install a new mediaget.exe. The new mediaget.exe program has the same functionality as the original but with additional backdoor capability,” the Microsoft researchers explain in their blog.

Researchers believe MediaGet, which signed update.exe, is most likely the victim of a supply-chain attack, similar to the CCleaner incident that infected over 2.3 million users with a backdoored version of that software in September 2017.

In this case, too, the attackers signed the poisoned update.exe with a different certificate, which still successfully passed the validation check required by the legitimate MediaGet.

“The dropped update.exe is a packaged InnoSetup SFX which has an embedded trojanized mediaget.exe. When run, it drops a trojanized unsigned version of mediaget.exe.”
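
This is the control that broke down: the poisoned update was accepted because it carried a valid signature, just not the expected one. As a minimal, illustrative sketch (not MediaGet’s actual code), an updater can additionally pin the exact artifact it expects, for example by comparing a SHA-256 digest published by the vendor out-of-band; the digest value below is a hypothetical placeholder.

    import hashlib

    # Hypothetical digest the vendor would publish out-of-band for this release.
    EXPECTED_SHA256 = "replace-with-the-vendor-published-digest"

    def is_update_trusted(path: str) -> bool:
        """Return True only if the downloaded file matches the pinned digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() == EXPECTED_SHA256

    # A real updater would refuse to execute update.exe unless this returns True.
    # print(is_update_trusted("update.exe"))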

Once updated, the malicious BitTorrent client, now carrying the additional backdoor functionality, randomly connects to one of its four command-and-control (C&C) servers, hosted on the decentralized Namecoin network infrastructure, and listens for new commands.

It then immediately downloads the CoinMiner component from its C&C server and starts using the victim’s computer to mine cryptocurrency for the attackers.

Using C&C servers, attackers can also command infected systems to download and install additional malware from a remote URL.

The researchers found that the trojanized BitTorrent client, detected by Windows Defender AV as Trojan:Win32/Modimer.A, has 98% similarity to the original MediaGet binary.

Microsoft says the behavior monitoring and AI-based machine learning techniques used by its Windows Defender Antivirus software played an important role in detecting and blocking this massive malware campaign.

Source: The Hacker News

Orangeworm was first spotted in January 2015 and appears to be focused on the healthcare industry, which accounts for 40% of its targets. The hackers have also targeted the IT (15%), manufacturing (15%), logistics (8%), and agriculture (8%) industries, but in all cases the victims are part of the supply chain for healthcare entities.

Most of the victims are located in the United States (17%), followed by Saudi Arabia and India, but Orangeworm has hit organizations in many other countries, including the Philippines, Hungary, the United Kingdom, Turkey, Germany, Poland, Hong Kong, Sweden, Canada, and France.

Orangeworm targeted only a small number of victims in 2016 and 2017; the infections mostly affected large international corporations operating in several countries.

The hackers use a custom backdoor, tracked as Trojan.Kwampirs, to remotely control infected machines on the compromised network.

Initially, the backdoor is used as a reconnaissance tool; if a compromised machine contains data of interest, the backdoor “aggressively” spreads to other systems with open network shares.

The experts observed the attackers running a wide range of reconnaissance and enumeration commands within the compromised systems.

The Kwampirs backdoor was discovered by Symantec on machines hosting software used for high-tech imaging devices, such as MRI and X-Ray machines. It was also discovered on devices used to assist patients in completing consent forms.

Experts highlighted that the methods Kwampirs uses to propagate over the target network are particularly “noisy,” which suggests Orangeworm is not overly concerned with being discovered.

At the time of the report, the experts had not yet determined the attackers’ real motivation or their origin; even if they are conducting cyber espionage, there is no evidence that the operation is backed by a nation-state actor.

Experts noted that the actors behind Orangeworm do not appear to be concerned about their activities being detected.

Source: Security Affairs

Denial of Service attacks have been a hazard for web sites since the earliest days of the World Wide Web.

Although the average speeds and network capacities for the earliest users of Internet service were nowhere near as high as they are today, it was still possible to generate enormous volumes of traffic and direct them at servers that were totally unprepared for the onslaught.

Today, thanks to innovative and often fast-reacting defensive measures, it is possible to mitigate most of the damage from what is now more accurately referred to as a “distributed denial of service” (DDoS) attack. The purpose of a denial of service attack is to overload a web server or other service with so much unauthorized traffic that legitimate users can’t make use of it. The distributed nature of the attack means the traffic is not directed at the target server from a single source. Rather, it is coordinated across many sources, so blocking one attacking address is insufficient to stop the attack entirely.

As these attacks have grown in sophistication and power, the measures available to combat them have advanced as well. With adequate planning and a proper understanding of the threat, many of the largest sites on the web have reached a point where they are well defended against all but the most unusually intense events.

The chances of any one site being targeted are low, but if you run a mission-critical service online, whether it is web-based or runs on its own protocol, you should at least be aware of the potential for denial of service attacks and prepare yourself and your organization to combat them. Here are some things to consider.

Know Your Traffic Patterns

There are three primary “loads” on a web or network server. Your analytics software should be able to track one. Your network security should be able to, at minimum, track the other two. The first is volume, which is a measure of how many and what kind of network connections are being made to your server. By and large, this number shouldn’t deviate more than a few percentage points in any given day. If it does, your monitoring software or IT staff should be alerted and prepared to determine causes.

The second load is CPU utilization. For a standard web server, processor utilization should rarely climb above sixty percent. While high CPU load isn’t technically a denial of service attack, when combined with a strategically organized surge of network traffic, CPU load can create cascade effects through all your network services and degrade other devices like failover servers and anti-virus services running elsewhere on your network.

Third is storage. A full disk can not only cause degradation of services but can also cause operating systems and other software to malfunction. On some kinds of servers, a strategically timed series of large uploads combined with one or more other attack vectors can not only degrade services, but cut them off entirely.

The longer your server is running, the more data you will have regarding the normal ranges for all these loads. You can then set up your monitoring and analytics to alert IT staff in the event any of them move out of normal ranges.
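
As a concrete illustration of tracking these three loads, here is a minimal monitoring sketch in Python, assuming the psutil package is installed; the 60% CPU and 90% disk thresholds are placeholders you would replace with values derived from your own baseline data.

    import time

    import psutil

    CPU_LIMIT = 60.0    # percent; see the CPU utilization discussion above
    DISK_LIMIT = 90.0   # percent of the root filesystem (placeholder value)

    def check_loads():
        cpu = psutil.cpu_percent(interval=1)   # sampled over one second
        disk = psutil.disk_usage("/").percent
        net = psutil.net_io_counters()         # cumulative bytes since boot
        if cpu > CPU_LIMIT:
            print(f"ALERT: CPU at {cpu:.0f}% (limit {CPU_LIMIT:.0f}%)")
        if disk > DISK_LIMIT:
            print(f"ALERT: disk at {disk:.0f}% (limit {DISK_LIMIT:.0f}%)")
        return cpu, disk, net.bytes_recv

    # A real deployment would feed these numbers into an alerting system
    # rather than polling in a loop and printing to the console.
    while True:
        check_loads()
        time.sleep(60)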

Here are some of the most popular and effective ways to defend against and prevent distributed denial of service attacks.

1. Know If It’s Happening

Use the data provided by your monitoring and analytics. Be particularly careful to notice any deviation from your rolling 30 and 90 day patterns for network load, CPU utilization and storage. Occasionally a slow increase in one will precede a spike in one or more of the others. Set up alerts in your monitoring and security systems to notify key personnel in the event of any anomalies. For one-off testing, you can use a speed test tool like Dotcom-Tools in order to spot check website performance issues that could be related to DDoS.
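
As a toy illustration of baselining (not a substitute for proper monitoring), the sketch below flags a daily request count that strays more than three standard deviations from a 30-day rolling window; the window size and cutoff are arbitrary choices, and the history is fabricated.

    import random
    import statistics

    def is_anomalous(history, today, window=30, sigmas=3.0):
        """history: daily request counts, oldest first; today: today's count."""
        recent = history[-window:]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        return abs(today - mean) > sigmas * stdev

    random.seed(1)
    history = [random.gauss(10_000, 400) for _ in range(90)]  # fake 90-day history
    print(is_anomalous(history, today=25_000))  # flagged: far outside the band
    print(is_anomalous(history, today=10_300))  # not flagged: within normal range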

2. Failover and Provisioning

If your services are commercial in nature, you should have enough network capacity available at any given time to endure at least a 200% temporary increase in traffic. This is called “provisioning,” and it is a service that most network operations centers can provide at minimal cost. Under no circumstances should your server be running without a cloned backup ready to take over operations in the event the front-line machine goes offline. This is known as failover protection, and it is particularly important during a denial of service attack, especially if your network operations staff needs to hotfix or spin up new security measures on the fly.

3. Reinforce at the Router

While not a permanent solution to a DDoS attack, your router can buy you some time in the early phases of the build-up to an attack. Truly massive targeted attacks often require some time to reach full capacity. These minutes are crucial, as they can be the difference between an ability to get back on-line quickly and having your systems down for extended periods. There are several ways your router can help. For example, setting lower timeouts on certain kinds of connections, reducing thresholds on UDP and SYN packet floods and identifying remote IP ranges to block can buy you anywhere from ten to thirty minutes of up-time in some cases. Even that much time can often make all the difference.
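
The exact knobs are vendor-specific, but the detection side of a SYN-flood threshold can be illustrated. The following sketch counts TCP SYN packets per source address for one minute, which is the kind of signal such a threshold acts on; it assumes the scapy package, requires root privileges, and the sixty-second window is arbitrary.

    from collections import Counter

    from scapy.all import IP, TCP, sniff

    syn_counts = Counter()

    def track_syn(pkt):
        # Count packets whose TCP flags are exactly SYN (connection attempts).
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
            syn_counts[pkt[IP].src] += 1

    sniff(filter="tcp", prn=track_syn, store=False, timeout=60)
    for src, count in syn_counts.most_common(10):
        print(f"{src}: {count} SYNs in 60 seconds")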

4. UDP Phantom Zone

Unless your servers have a very good reason for receiving or sending UDP traffic, your best option is to simply ask your upstream providers to drop the packets at their routers. Some of the most popular DDoS strategies use NTP and UDP amplification which can overwhelm many networks with relatively minimal hardware. However, if your network sends all UDP traffic to the phantom zone, your servers will never see it.

5. Geographically Distributed Servers

One of the best ways to avoid a distributed attack is to have a distributed server network, according to Web Hosting Buddy. The more concentrated your points of failure are, the more vulnerable your system is. If a DDoS attack only affects a localized geographic area, your network operations can automatically distribute legitimate traffic to other servers on your network and isolate the attacker before the unauthorized traffic has a chance to cause any trouble.

There are commercial companies, naturally, that can provide all these services for high reliability web sites and web services. Although most sites likely don’t need industrial strength denial of service defense, it is something to consider as your traffic grows and your network’s importance increases.

Source: Hackers Online Club

Technology is pulsing all around you, and in the short amount of time that you are hosted on a network, you must try to understand its inner workings. Fortunately or unfortunately, most network and system administrators are creatures of habit. All you have to do is listen for long enough, and more often than not it will yield some of those juicy findings, information security experts say.

Regardless of any discussion beforehand, a penetration test has a competitive feel from both sides. Consulting pentesters want their flag, and administrators want their clean bill of health to show that they are resilient to cyber-attack; something akin to a game of flag football. The difference here is that in flag football, both teams are familiar with the tools used to play the game.

It goes without saying that a pentester’s job is to simulate a legitimate threat to effectively determine your organization’s risk, but how can remediation happen without at least some familiarity?

In order to truly secure your networks, any administrator with cybersecurity duties will need to not only understand what they themselves have, but also step into the shoes of the opposite side.

This article’s intention is to focus on the ‘why’ and not entirely the ‘how’. There are countless videos and tutorials out there explaining how to use the tools, and far more information than can be laid out in one blog post. Additionally, I acknowledge that other testers out there may have an alternate opinion on these tools.

1. Responder

This tool, in the opinion of information security experts, sits at the absolute top of the list. When an auditor comes in and talks about “least functionality,” this is what comes immediately to mind. If you are a pentester, Responder is likely the first tool you will start running as soon as you get your Linux distro-of-choice connected to the network and kick off the internal penetration test. The tool functions by listening for name-resolution requests and poisoning the responses for the following protocols:

  • Link-Local Multicast Name Resolution (LLMNR)
  • NetBIOS Name Service (NBT-NS)
  • Web Proxy Auto-Discovery (WPAD)

There is more to Responder, but I will only focus on these three protocols for this article.

NBT-NS is a remnant of the past, a protocol left enabled by Microsoft for legacy and compatibility reasons so that applications which relied on NetBIOS could operate over TCP/IP networks. LLMNR is a protocol designed similarly to DNS that relies on multicast and peer-to-peer communications for name resolution. It came from the Vista era, and we all know nothing good came from that time-frame. You probably don’t even use either of these. Attackers know this and use it to their advantage.

WPAD, on the other hand, serves a very real and noticeable purpose on the network. Most enterprise networks use a proxy auto-config (PAC) file to control how hosts get out to the Internet, and WPAD makes that relatively easy. The machines broadcast out into the network looking for a WPAD file, and receive the PAC which is given. This is where the poisoning happens.

Information security professionals are aware that most protocols relying on any form of broadcasting or multicasting are ripe for exploitation.
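
One practical defensive counterpart is to hunt for poisoners: ask for a name that should not exist, and treat any answer as suspicious. Below is a minimal, hypothetical sketch of that idea using only the Python standard library; it sends one LLMNR query for a random bogus hostname to the LLMNR multicast address and reports whoever answers (tools such as Responder will).

    import random
    import socket
    import string
    import struct

    def llmnr_query(name: str) -> bytes:
        """Build a single LLMNR A-record query in DNS wire format."""
        header = struct.pack(">HHHHHH", random.randint(0, 0xFFFF), 0, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

    # A random name that no legitimate host should resolve.
    bogus = "".join(random.choices(string.ascii_lowercase, k=12))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(llmnr_query(bogus), ("224.0.0.252", 5355))  # LLMNR multicast
    try:
        _, addr = sock.recvfrom(1024)
        print(f"Answer for bogus name '{bogus}' from {addr[0]} -- likely a poisoner!")
    except socket.timeout:
        print("No answer; no obvious LLMNR poisoner on this segment.")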

2. PowerShell Empire

Previously, pentesters typically relied on Command and Control (C2) infrastructure where the agent first had to reside on disk, which naturally would get uploaded to VirusTotal upon public release and be included in the next morning’s antivirus definitions. The time spent evading detection was a seemingly never-ending cat-and-mouse game.

It was as if the collective unconscious of pentesters everywhere realised that the most powerful tool at their disposal was already present on most modern workstations around the world. A framework had to be built, and the Empire team made it so.

The focus on pen-testing frameworks and attack tools has undoubtedly shifted towards PowerShell for exploitation and post-exploitation.

It means that some of the security controls you have put in place may be easily bypassed. Fileless agents (including malware) can be deployed via PowerShell and exist purely in memory, without ever touching your hard disk or requiring a connected USB device. Existing only in memory makes antivirus products whose core function is scanning the disk significantly less effective.

As for mitigation: the execution policy restrictions in PowerShell are trivial to bypass.

3. Hashcat with Wordlists

This combo right here is an absolute staple. Cracking hashes and recovering passwords is a pretty straightforward topic at a high level.

Hashcat is a GPU-focused powerhouse of a hash cracker that supports a huge variety of formats, and it is typically used in conjunction with hashes captured by Responder. In addition to Hashcat, a USB hard drive with several gigs of wordlists is a must. On every pentest the information security analysts have been on, cracking time had to be allocated appropriately to maximize results and provide the most value to the client.
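
To make the concept concrete, here is a toy dictionary attack in Python. It is purely illustrative: it targets a single SHA-256 hash to stay dependency-free, whereas a real engagement typically involves NTLM or NetNTLMv2 hashes, with Hashcat running the same idea across GPUs at enormous speed.

    import hashlib

    def crack_sha256(target_hex: str, wordlist_path: str):
        """Try every word in the list until one hashes to the target."""
        with open(wordlist_path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                candidate = line.rstrip("\n")
                if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
                    return candidate
        return None

    # Demo: recover "password123" from a tiny, made-up wordlist.
    target = hashlib.sha256(b"password123").hexdigest()
    with open("tiny-wordlist.txt", "w") as f:
        f.write("letmein\npassword123\nhunter2\n")
    print(crack_sha256(target, "tiny-wordlist.txt"))  # -> password123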

Sysadmins, think about your baseline policies and configurations. Typically, it is best practice to align as closely as possible with an industry standard, such as the infamous DISA STIG. Baselines such as the DISA STIG support numerous operating systems and software packages and contain key configurations to help you protect against offline password cracking and replay attacks. This includes enforcing NIST-recommended password policies, non-default authentication enhancements, and much more. DISA even does you the courtesy of providing pre-built Group Policy templates that can be imported and custom-tailored to your organisation’s needs, which cuts out much of the work of importing the settings.

4. Web Penetration Testing Tools

It is important to note that a web penetration testing tool is not the same as a vulnerability scanner.

Web-focused tools have scanning capabilities of their own, but they focus on the application layer of a website rather than the service or protocol level. Granted, vulnerability scanners (Nessus, Nexpose, Retina, etc.) do have web application scanning capabilities, though I have found it best to keep the two separate.

Many organisations nowadays build in-house web apps, intranet sites, and reporting systems in the form of web applications. Typically, the assumption is that since the site is internal, it does not need to be run through the security code review process, and it gets published for all personnel to see and use.

The surface area of most websites leaves a lot of room for play to find something especially compromising. Some of the major issues are:

  • Stored Cross-site Scripting (XSS).
  • SQL Injection.
  • Authentication bypass.
  • Directory traversal abuse.
  • Unrestricted file upload.

If you administer an organisation that builds or maintains any internal web applications, think about whether or not that code is being regularly reviewed. Code reuse becomes an issue when source code is imported from unknown origins, because any security flaws or potentially malicious functions come along with it. Furthermore, the “Always Be Shipping” methodology which has overtaken software development as of late puts all of the emphasis on shipping functional code, even when security flaws may exist.

Acquaint yourself with OWASP, whose entire focus is on secure application development. Get familiar with the development team’s Software Development Lifecycle (SDLC) and see if security testing is a part of it. OWASP has some tips to help you make recommendations.

Understand the two methodologies for testing applications, including:

  • Static Application Security Testing (SAST). The application’s source code is available for analysis.
  • Dynamic Application Security Testing (DAST). Analyses the application while in an operational state.

Additionally, you will want to take the time to consider your web applications separately from typical vulnerability scans. Tools (open and closed source) exist out there, including Burp Suite Pro, OWASP Zed Attack Proxy (ZAP), Acunetix, and Trustwave, with scanning functionality that will crawl and simulate attacks against your web applications. Scan your web apps at least quarterly.
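
For a sense of what the simplest DAST-style check looks like under the hood, here is a naive reflected-XSS probe, assuming the Python requests library and a hypothetical test instance at http://localhost:8080/search with a q parameter; real scanners like ZAP do vastly more than this.

    import requests

    # A marker that should never appear verbatim in a safe, encoded response.
    MARKER = "<script>alert('xss-probe-1337')</script>"

    resp = requests.get("http://localhost:8080/search",
                        params={"q": MARKER}, timeout=10)
    if MARKER in resp.text:
        print("Input is reflected unencoded -- possible reflected XSS.")
    else:
        print("Marker not reflected verbatim; this naive probe found nothing.")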

5. Arpspoof and Wireshark

Arpspoof is a tool that allows you to insert yourself between a target and its gateway, and Wireshark allows you to capture packets from an interface for analysis. You redirect the traffic from an arbitrary target, such as an employee’s workstation during a pentest, and snoop on it.

Likely the first theoretical attack presented to those in cybersecurity, the infamous Man-in-the-Middle (MitM) attack is still effective on modern networks, information security researchers said. Considering most of the world still leans on IPv4 for internal networking, and the way that the Address Resolution Protocol (ARP) has been designed, a traditional MitM attack is still quite relevant.
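
A useful way to internalise the weakness is to watch for it. The sketch below is a minimal passive ARP-spoof detector, assuming the scapy package and root privileges: it remembers which MAC address answers for each IP and alerts when a mapping changes, which is exactly the side effect arpspoof produces.

    from scapy.all import ARP, sniff

    ip_to_mac = {}

    def inspect(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:   # op 2 = "is-at" reply
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in ip_to_mac and ip_to_mac[ip] != mac:
                print(f"ALERT: {ip} moved from {ip_to_mac[ip]} to {mac} "
                      f"-- possible ARP spoofing")
            ip_to_mac[ip] = mac

    # Runs until interrupted; a real deployment would log and alert instead.
    sniff(filter="arp", prn=inspect, store=False)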

According to information security researchers, many falsely assume that because communications occur inside their own networks, they are safe from being snooped on by an adversary, and therefore do not have to take the performance hit of encrypting all communications within their own subnets. Granted, your network is an enclave of sorts, separated from the wild west of the Internet, and an attacker would first have to get into your network to stand between communications.

Now, let’s assume that a workstation is compromised by an attacker in another country using a RAT equipped with tools that allow a MitM attack to take place. Alternatively, consider the insider threat.

The information security experts said the best tactic of defence is simple: encrypt your communications. Never assume communications inside your network are safe just because there is a gateway device separating you from the Internet.

Keep your VLAN segments carefully tailored, and protect your network from unauthenticated devices. Implementing a Network Access Control (NAC) system, or rolling out 802.1X on your network, is something you may want to add to your security roadmap in the near future. Shut down those unused ports, and think about sticky MACs if you are on a budget.

Source: Security Newspaper

According to new data from TrendMicro, attackers utilising the Emotet banking Trojan predominantly used internet providers located in the U.S.A. to host their Command & Control (C2) infrastructure.

In a recent blog post, TrendMicro states that the United States of America, with a 45% share, hosts the most Emotet C2 infrastructure, followed by Mexico and Canada. The top three ASNs used to host the C2 servers are 7922 (Comcast Cable), 8151 (Telmex), and 22773 (Cox Communications). This infrastructure was mapped by actively tracking Emotet across nearly 15 thousand artifacts collected between June and September 2018.

[Figure: Top countries hosting Emotet C&C servers]

Emotet uses RSA certificates to keep its communications confidential. By analysing Emotet malware samples, TrendMicro noted that, on average, a single sample contains 39 different C2 addresses. Each C2 uses one of six RSA certificates, and by tracking the samples and the certificates used by each C2, TrendMicro was able to split the six certificates into two groups of three.

These two groups show that two separate C2 infrastructures are operating in parallel. TrendMicro states that this makes it “more difficult to track Emotet and minimize the possibility of failure.” Correlating known campaigns against the two infrastructure groups displays a clear distinction between the two and indicates differing agendas; the groups may even be controlled by different operators.
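
The grouping technique itself is easy to picture: cluster observed C2 addresses by the RSA certificate they present, and check whether the clusters overlap. A hypothetical sketch, with made-up addresses and fingerprints:

    from collections import defaultdict

    # (c2_address, certificate_fingerprint) pairs -- entirely made up.
    observations = [
        ("203.0.113.10:8080", "cert_A"),
        ("198.51.100.7:443",  "cert_B"),
        ("203.0.113.99:7080", "cert_A"),
        ("192.0.2.55:8080",   "cert_D"),
    ]

    c2s_by_cert = defaultdict(set)
    for c2, cert in observations:
        c2s_by_cert[cert].add(c2)

    # If certificates {A, B, C} never share C2 addresses with {D, E, F}, you
    # are looking at two disjoint infrastructures -- essentially the
    # observation described above.
    for cert, c2s in sorted(c2s_by_cert.items()):
        print(cert, sorted(c2s))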

The research further reviews compilation timestamps to hypothesise that the author may operate in UTC+10, which would place them in eastern Russia or eastern Australia. However, TrendMicro admits this is mere speculation, as at least three separate machines with varying timezone settings are used to package the malware, and threat actors have also been known to change their locale and timezone settings to confuse reverse engineers.
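
Reading such a timestamp out of a sample is straightforward; here is a brief sketch, assuming the pefile package and a local file named sample.exe (a hypothetical path), and bearing in mind that compilation timestamps can be forged:

    from datetime import datetime, timedelta, timezone

    import pefile

    pe = pefile.PE("sample.exe")
    ts = pe.FILE_HEADER.TimeDateStamp      # seconds since the Unix epoch
    compiled_utc = datetime.fromtimestamp(ts, tz=timezone.utc)
    print("Compiled (UTC):    ", compiled_utc)
    print("Compiled (UTC+10): ",
          compiled_utc.astimezone(timezone(timedelta(hours=10))))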

While much of the world is impacted by Emotet, Europe and the United States have been hit the hardest. It is ironic that the infrastructure used by Emotet sits in the same regions as its victims, but this further indicates that these regions are well connected and offer cheap hosting as well as easily compromised nodes.

Source: Bleeping Computer