Endpoint Security

Endpoint security is the practice of securing endpoints or entry points of end-user devices such as desktops, laptops, and mobile devices from being exploited by malicious actors and campaigns. Endpoint security systems protect these endpoints on a network or in the cloud from cybersecurity threats. Endpoint security has evolved from traditional antivirus software to providing comprehensive protection from sophisticated malware and evolving zero-day threats.

Organizations of all sizes are at risk from nation-states, hacktivists, organized crime, and malicious and accidental insider threats. Endpoint security is often seen as cybersecurity’s frontline, and represents one of the first places organizations look to secure their enterprise networks.

As the volume and sophistication of cybersecurity threats have steadily grown, so has the need for more advanced endpoint security solutions. Today’s endpoint protection systems are designed to quickly detect, analyze, block, and contain attacks in progress. To do this, they need to collaborate with each other and with other security technologies to give administrators visibility into advanced threats to speed detection and remediation response times.

Endpoint protection platforms (EPP) and traditional antivirus solutions differ in some key ways.

  • Endpoint Security vs. Network Security:
    Antivirus programs are designed to safeguard a single endpoint, offering visibility into only that endpoint, in many cases only from that endpoint. Endpoint security software, however, looks at the enterprise network as a whole and can offer visibility of all connected endpoints from a single location.
  • Administration:
    Legacy antivirus solutions relied on the user to manually update the signature databases or to allow updates at a pre-set time. EPPs offer interconnected security that moves administration responsibility to the enterprise IT or cybersecurity team.
  • Protection:
    Traditional antivirus solutions used signature-based detection to find viruses. This meant that if your business was Patient Zero, or if your users hadn’t updated their antivirus program recently, you could still be at risk. By harnessing the cloud, today’s EPP solutions are kept up to date automatically. And with the use of technologies such as behavioral analysis, previously unidentified threats can be uncovered based on suspicious behavior.
Encryption and 2FA (Two-Factor Authentication)

There are two strategies that are regarded as fool-proof when it comes to data security: encryption and two-factor authentication (2FA). The idea is that when all data transmitted between servers is replaced with ciphertext, hackers would fail to interpret it even if they were able to intercept it. Likewise, when a user’s access to a system is tied to a physical asset such as a mobile phone, a hacker who does not have access to that device would have no way to break in.

While this continues to be the popular opinion among security analysts, some researchers have started to wonder whether the encryption and 2FA technologies deployed by many enterprises today amount to little more than security theater: a means to demonstrate improved security while not adding enough to actually make the system secure.

A recent report by PT Security showed that one-time passwords (OTPs) used to authenticate user accounts on WhatsApp and Telegram are not effective, since the codes are delivered over mobile communication networks that are themselves insecure. The researchers were able to intercept a message sent by Telegram and obtain the OTP. In short, the aura of additional security from encrypted data transmission and 2FA was rendered ineffective because the channel used to carry out the authentication itself was insecure.

In another report, prepared by the US National Institute of Standards and Technology (NIST), SMS-based two-factor authentication was declared insecure, since there are multiple scenarios in which an SMS sent to a user’s phone can be accessed by a third party. Most software vendors follow NIST guidelines in their products, and the report is widely seen as the beginning of the end for SMS 2FA.

Despite these loopholes, encryption and two-factor authentication remain two of our best bets against data theft. 2FA helps secure the endpoints of a pipeline while encryption seals the pipe itself. Together, when executed correctly, 2FA and strong encryption help keep data sealed off from hackers.
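
To make the second factor concrete, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238) is derived from a shared secret, using only the Python standard library. It is an illustration rather than a hardened authenticator, and the secret shown is a hypothetical example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # moving factor: current time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Hypothetical shared secret; in practice this is provisioned per user.
    print(totp("JBSWY3DPEHPK3PXP"))
```

The same derivation runs on the server and in the user's authenticator app; because both sides compute the code from the shared secret and the current time window, the code never has to travel over an insecure channel the way an SMS OTP does.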

Patch and Vulnerability Management

Unpatched vulnerabilities are one of the easiest ways for attackers to enter an environment. According to a ServiceNow study conducted by the Ponemon Institute, an alarming 57% of cyberattack victims report that their breaches could have been prevented by installing an available patch, and 34% of those respondents were already aware of the vulnerability before they were attacked. The lessons of the Equifax breach, which compromised the PII of roughly half of all Americans, should not be forgotten: it could easily have been prevented, as US-CERT had warned about the vulnerability a couple of days before it was exploited by the hackers. Hackers are out there; the question is how prepared organizations are, and whether they are low-hanging fruit waiting to be picked.

Any organization with a digital footprint is vulnerable in today’s world, which includes pretty much every organization. To start, an understanding of vulnerabilities is important. Vulnerabilities are flaws in applications or software that allow an attacker unwanted access or lead to the unintended disclosure of information to external entities. They vary in severity, from low-impact issues that may lead to limited information disclosure to critical vulnerabilities that allow remote code execution and complete system compromise. The Common Vulnerability Scoring System (CVSS) is an excellent place to start for understanding how vulnerabilities are scored. With this basic understanding in place, patch and vulnerability management can be looked into.

Various phases of vulnerability management include:

  • Scoping
  • Identification
  • Classification
  • Remediation
  • Confirmation of remediation

Scoping: Vulnerability management begins with understanding one’s environment: knowing what hardware and software are in use across the organization. Any hardware or software that exists in the environment but is not known about is a blind spot and a major risk from a vulnerability management perspective, since something unknown can’t be remediated and can easily be attacked. A proper inventory of the assets and software used in the organization helps a vulnerability management program run smoothly. The inventory must be kept up to date, and asset discovery should be integrated into the vulnerability management process.

Identification: This phase involves identifying vulnerabilities on the organization’s assets. Vulnerability scanners such as Qualys, Nessus, or Rapid7 can be used; with them, misconfigurations and missing patches on an asset can be identified. Scans should be authenticated where possible, as this yields better results and confirmed vulnerabilities with fewer false positives. The vulnerabilities may belong to the software or the OS installed on the asset. Web applications also need to be examined, for which web scanners from various vendors can be leveraged. Source code should be audited to integrate security from the start, and static and dynamic analysis can be used to check for vulnerabilities in applications developed in-house.
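
As a simplified illustration of this phase, the sketch below checks an asset's installed software versions against a small advisory feed and reports matches. The installed packages and the feed are assumptions for illustration; real scanners such as Qualys or Nessus perform far deeper, authenticated checks.

```python
# Minimal identification sketch: compare an asset's installed software versions
# against a small advisory feed of known-vulnerable versions.
installed = {"openssl": "1.0.1f", "nginx": "1.18.0", "openssh": "8.2p1"}

# Advisory feed: package -> (vulnerable version, CVE identifier)
advisories = {
    "openssl": ("1.0.1f", "CVE-2014-0160"),   # Heartbleed
    "log4j":   ("2.14.1", "CVE-2021-44228"),  # Log4Shell
}

findings = [
    (pkg, ver, advisories[pkg][1])
    for pkg, ver in installed.items()
    if pkg in advisories and ver == advisories[pkg][0]
]

for pkg, ver, cve in findings:
    print(f"{pkg} {ver} is affected by {cve}")
```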

Classification: Not all identified vulnerabilities have an equal impact. Vulnerabilities of higher severity need to be prioritized due to the greater impact they would have if exploited. CVSS scoring may be followed for classification, but it is important to note that each organization’s environment is unique, and sticking to CVSS alone may not be enough. Consider a medium-severity vulnerability discovered on a server holding customers’ PII (Personally Identifiable Information): because of the nature of the data on the server, the impact if exploited would be critical. Proper classification is therefore based not only on the severity of the vulnerability but also on the criticality of the asset. A proper risk framework must be in place that classifies the risk posed by a vulnerability based on both of these factors.
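
A minimal sketch of such a risk framework is shown below: it combines the CVSS base score with an assumed 1-5 asset criticality rating to rank findings. The weighting, the criticality scale, and the CVE identifiers are all placeholders; each organization would substitute its own scheme.

```python
# Minimal risk-classification sketch: combine the CVSS base score with an
# asset-criticality rating to rank findings. Identifiers are placeholders.
findings = [
    {"host": "pii-db-01",   "cve": "CVE-0000-0001", "cvss": 5.3, "criticality": 5},
    {"host": "intranet-01", "cve": "CVE-0000-0002", "cvss": 9.8, "criticality": 2},
    {"host": "web-dmz-01",  "cve": "CVE-0000-0003", "cvss": 7.5, "criticality": 4},
]

def risk_score(finding):
    # Scale CVSS (0-10) by criticality (1-5) into a 0-100 style score.
    return finding["cvss"] * finding["criticality"] * 2

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['host']:12} {f['cve']}  risk={risk_score(f):5.1f}")
```

Note that with any criticality-weighted scheme, a medium CVSS finding on a high-value PII server can outrank a critical finding on a low-value host, which is exactly the point made above.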

Remediation: This is the trickiest part of vulnerability management. It is not easy, rather close to impossible, to remediate every vulnerability and declare the organization free of them, so remediation must be prioritized by the risk a vulnerability creates. Critical assets, including anything external-facing, must be patched as soon as reasonably possible. There is also always a risk that patching a system will cause a crash or data loss, so it is better to do a dry run in a test environment first to check how a patch affects a system’s stability. Stability can’t be compromised for security, as that would cast the vulnerability management program as a roadblock (security should always be an enabler). Trying to remediate vulnerabilities IP by IP produces a long list that can’t be checked off completely with reasonable effort; remediation should instead start with the changes that reduce risk the most. A simple registry change or ticking a checkbox may close many vulnerabilities at once and should be considered and prioritized. Sometimes a vulnerability can’t be patched, either because no patch is available or because the patch affects the system’s stability. In such cases compensating controls should be introduced to mitigate the risk posed by the vulnerability.

Confirmation of remediation: Simply applying a patch without confirming remediation isn’t enough. Vulnerabilities often need configuration changes in addition to patching to close the gap, and these are easily missed on the first pass. Remediation should therefore be followed by a rescan to confirm the fix. Ticketing systems like ServiceNow and Jira can be used to create tickets and track vulnerabilities through to closure.

Patch management is a part of vulnerability management, but not the whole of it. It deals specifically with how and when patches are applied to vulnerable operating systems or software once they are identified. It is important to keep track of patch releases for the software and firmware used in the environment, and to have a defined patching plan based on vendors’ release cycles; this makes the patching process efficient. Patching tools such as Shavlik can be used to make patching more efficient.

A good vulnerability management program identifies and tracks all the possible vulnerabilities in an environment. Vulnerabilities are remediated based on the priority defined by the risk framework the organization has adopted. Not every vulnerability needs to be remediated, provided that decision is aligned with the organization’s risk appetite. Vulnerability management is a continuous process; with appropriate reporting mechanisms and efficient leadership driving it, it is bound to succeed, and done properly it greatly helps an organization manage risk and reduce its attack surface. With the integration of cyber threat intelligence, the process can mature over time and vulnerabilities can be prioritized more accurately; feeds from US-CERT and the NCSC can add value to the program. One point that is often missed is checking for vulnerabilities on recovery-site assets. Where possible, efforts should also be made to minimize the software footprint in the environment, as this reduces remediation and maintenance effort. Cloud technologies pose a challenge to vulnerability management, and ownership of vulnerabilities arising from subscribed services should be clearly defined.

Network Firewall / Network IPS (Intrusion Prevention System)

A firewall is a device installed between the internal network of an organization and the rest of the network, typically the Internet. It is designed to forward some packets and filter others.

For example, a firewall may filter all incoming packets destined for a specific host or a specific service such as HTTP, or it can be used to deny access to a specific host or service in the organization.

A firewall is a set of tools that monitors the flow of traffic between networks. Placed at the network level and working closely with a router, it filters all network packets to determine whether or not to forward them toward their destinations.

Working architecture

A firewall is often installed away from the rest of the network so that no incoming request reaches private network resources directly. If it is configured properly, systems on one side of the firewall are protected from systems on the other side. Firewalls generally filter traffic based on two methodologies:

  • A firewall can allow all traffic except what is explicitly restricted. What is allowed depends on the type of firewall used and on the source and destination addresses and ports
  • A firewall can deny any traffic that does not meet the specific criteria based on the network layer on which the firewall operates

The type of criteria used to determine whether traffic should be allowed through varies from one type to another. A firewall may be concerned with the type of traffic or with source or destination addresses and ports. A firewall may also use complex rules based on analyzing the application data to determine if the traffic should be allowed through.

Firewall pros and cons

Every security device has advantages and disadvantages, and firewalls are no different. If we apply overly strict defensive mechanisms to protect the network from breaches, even legitimate communication may malfunction; if we allow entire protocols through unrestricted, the network can easily be attacked by malicious users. We should maintain a balance between overly strict and overly permissive configurations.

Advantages

  • A firewall is an intrusion detection mechanism. Firewalls are specific to an organization’s security policy, and their settings can be altered to make pertinent modifications to the firewall functionality.
  • Firewalls can be configured to bar incoming traffic to POP and SNMP and to enable email access.
  • Firewalls can also block email services to secure against spam.
  • Firewalls can be used to restrict access to specific services. For example, the firewall can grant public access to the web server but prevent access to the Telnet and the other non-public daemons.
  • A firewall verifies incoming and outgoing traffic against its rules and acts as a router in moving data between networks.
  • Firewalls are excellent auditors. Given plenty of disk or remote logging capabilities, they can log any and all traffic that passes through.

Disadvantages

  • A firewall can’t prevent revealing sensitive information through social engineering.
  • A firewall can’t protect against what has been authorized. Firewalls permit normal communications of approved applications, but if those applications themselves have flaws, a firewall will not stop the attack: to the firewall, the communication is authorized.
  • Firewalls are only as effective as the rules they are configured to enforce.
  • Firewalls can’t stop attacks if the traffic does not pass through them.
  • Firewalls also can’t secure against tunneling attempts. Applications that are secure can be attacked with Trojan horses. Tunneling bad things over HTTP, SMTP and other protocols is quite simple and easily demonstrated.

Firewall classification

How much protection a firewall provides depends on the firewall itself and on the policies configured on it. The main firewall technologies available today are:

  • Hardware firewall
  • Software firewall
  • Packet-filter firewall
  • Proxy firewall
  • Application gateways
  • Circuit-level gateways
  • Stateful packet inspection (SPI)

Hardware firewall

A hardware firewall is preferred when a firewall is required on more than one machine. A hardware firewall provides an additional layer of security to the physical network. The disadvantage of this approach is that if one firewall is compromised, all the machines that it serves are vulnerable.

Software firewall

A software firewall is a second layer of security and secures the network from malware, worms, viruses and email attachments. It looks like any other program and can be customized based on network requirements. Software firewalls can be customized to include antivirus programs and to block sites and images.

Packet-filtering firewall

A packet-filtering firewall filters at the network or transport layer. It provides network security by filtering network communications based on the information contained in the TCP/IP header of each packet. The firewall examines these headers and uses the information to decide whether to accept and route packets along to their destinations or to deny them by dropping them. This type of firewall is a router that uses a filtering table to decide which packets must be discarded.

Packet filtering makes decisions based upon the following header information (a minimal rule-matching sketch follows the list):

  • The source IP address
  • The destination IP address
  • The network protocol in use (TCP, ICMP or UDP)
  • The TCP or UDP source port
  • The TCP or UDP destination port
  • If the protocol is ICMP, then its message type
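
The sketch below shows how a packet filter applies such header-based rules: the first rule matching a packet's source, destination, protocol, and destination port decides its fate, with an implicit default deny. The addresses, ports, and rules are hypothetical.

```python
from ipaddress import ip_address, ip_network

# Minimal packet-filter sketch: first matching rule wins; unmatched traffic is dropped.
RULES = [
    {"action": "deny",  "src": "0.0.0.0/0", "dst": "10.0.0.5/32", "proto": "tcp", "dport": 23},
    {"action": "allow", "src": "0.0.0.0/0", "dst": "10.0.0.5/32", "proto": "tcp", "dport": 80},
]

def filter_packet(src, dst, proto, dport):
    for rule in RULES:
        if (ip_address(src) in ip_network(rule["src"])
                and ip_address(dst) in ip_network(rule["dst"])
                and rule["proto"] in (None, proto)
                and rule["dport"] in (None, dport)):
            return rule["action"]
    return "deny"   # no rule matched: implicit default deny

print(filter_packet("198.51.100.7", "10.0.0.5", "tcp", 80))   # allow (web)
print(filter_packet("198.51.100.7", "10.0.0.5", "tcp", 23))   # deny (Telnet)
```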

Proxy firewall

The packet-filtering firewall is based on information available in the network and transport layer header. However, sometimes we need to filter a message based on the information available in the message itself (at the application layer).

For example, assume that an organization wants to allow only those users who have previously established business relations with the company and to block access for all other users. In this case, a packet-filtering firewall is not feasible because it can’t distinguish between different packets arriving at TCP port 80.

Here, the proxy firewall comes into play as a solution: install a proxy computer between the customer and the corporation’s computer. When the user’s client process sends a message, the proxy firewall runs a server process to receive the request. The server opens the packet at the application level and confirms whether the request is legitimate. If it is, the server acts as a client process and sends the message to the real server; otherwise, the message is dropped. In this way, the requests of external users are filtered based on the contents at the application layer.

Application gateways

These firewalls analyze the application level information to make decisions about whether or not to transmit the packets. Application gateways act as an intermediary for applications such as email, FTP, Telnet, HTTP and so on. An application gateway verifies the communication by asking for authentication to pass the packets. It can also perform conversion functions on data if necessary.

For example, an application gateway can be configured to restrict FTP commands to allow only get commands and deny put commands.
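
A minimal sketch of that get/put restriction, under the assumption of a gateway that inspects FTP control commands (RETR for get, STOR for put), might look like this; a real gateway would authenticate the user and proxy the full protocol rather than just the filtering decision shown here.

```python
# Minimal application-gateway sketch: allow downloads (RETR, i.e. "get") while
# rejecting uploads (STOR, i.e. "put").
BLOCKED_COMMANDS = {"STOR"}   # "put"

def handle_command(line: str) -> str:
    command = line.strip().split(" ", 1)[0].upper()
    if command in BLOCKED_COMMANDS:
        return "550 Command not allowed by gateway policy."
    return "forward to the real FTP server"   # RETR and other commands pass through

print(handle_command("RETR report.pdf"))
print(handle_command("STOR secrets.db"))
```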

Application gateways can be used to protect vulnerable services on protected systems. A direct communication between the end user and the destination service is not permitted. These are the common disadvantages of implementing an application gateway:

  • Slower performance
  • Lack of transparency
  • Need for proxies for each application
  • Limits to application awareness

Circuit-level gateways

Circuit-level gateways work at the session layer of the OSI model, or the TCP layer of TCP/IP. They forward data between networks without verifying it, block incoming packets destined for the host itself while allowing traffic to pass through, and make information passed to remote computers appear to have originated from the gateway.

Circuit-level gateways operate by relaying TCP connections from the trusted network to the untrusted network. This means that a direct connection between the client and server never occurs.

The main advantage of a circuit-level gateway is that it provides services for many different protocols and can be adapted to serve an even greater variety of communications. A SOCKS proxy is a typical implementation of a circuit-level gateway.

Stateful packet inspection

A stateful packet inspection (SPI) firewall permits and denies packets based on a set of rules very similar to that of a packet filter. However, when a firewall is state-aware, it makes access decisions not only on IP addresses and ports but also on the SYN, ACK, sequence numbers and other data contained in the TCP header. While packet filters can pass or deny individual packets and require permissive rules to permit two-way TCP communications, SPI firewalls track the state of each session and can dynamically open and close ports as specific sessions require.
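
The following sketch illustrates the state-tracking idea: an outbound SYN creates session state, reply traffic is allowed only if it matches a tracked session, and the state is removed when the session closes. Addresses, ports, and flags are hypothetical.

```python
# Minimal stateful-inspection sketch: track sessions opened from the inside and
# only admit inbound packets that belong to a tracked session.
sessions = set()   # (client_ip, client_port, server_ip, server_port)

def outbound(src, sport, dst, dport, flags):
    if "SYN" in flags:
        sessions.add((src, sport, dst, dport))   # open state for the new session
    return "allow"

def inbound(src, sport, dst, dport, flags):
    # Reverse tuple: reply traffic from the server back to the client.
    if (dst, dport, src, sport) in sessions:
        if "FIN" in flags or "RST" in flags:
            sessions.discard((dst, dport, src, sport))   # close state when the session ends
        return "allow"
    return "deny"   # unsolicited inbound packet

print(outbound("10.0.0.8", 51515, "203.0.113.10", 443, {"SYN"}))
print(inbound("203.0.113.10", 443, "10.0.0.8", 51515, {"SYN", "ACK"}))   # allow (tracked)
print(inbound("198.51.100.9", 443, "10.0.0.8", 51515, {"SYN"}))          # deny (no state)
```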

Firewall identification

Firewalls can be identified for offensive purposes. They are usually a first line of defense at the virtual perimeter, so to breach a network, a hacker first needs to identify which firewall technology is used and how it is configured. Some popular tactics are:

Port scanning

  • Hackers use it to investigate which ports are open on the victim’s systems.
  • Nmap is probably the most famous port-scanning tool available.

Firewalking

  • The process of using traceroute-like IP packet analysis to verify whether a data packet will be passed through the firewall from the attacker’s source host to the victim’s destination host.

Banner grabbing

  • This is a technique that enables a hacker to identify the type of operating system or application running on a target server. It works through a firewall by using what look like legitimate connections.
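
As a minimal illustration, the sketch below opens a TCP connection and reads whatever the service announces about itself (an SSH server, for example, sends its version banner on connect). The host address is a placeholder; such probes should only be run against systems you are authorized to test.

```python
import socket

# Minimal banner-grabbing sketch: connect and read the service's self-announcement.
def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""

if __name__ == "__main__":
    # An SSH server, for example, replies with something like "SSH-2.0-OpenSSH_8.2".
    print(grab_banner("192.0.2.10", 22))
```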

Intrusion detection system (IDS)

Intrusion Detection (ID) is the process of monitoring for and identifying attempted unauthorized system access or manipulation. An ID system gathers and analyzes information from diverse areas within a computer or a network to identify possible security breaches which include both intrusions (attack from outside the organization) and misuse (attack from within the organization).

An intrusion detection system (IDS) is yet another tool in the network administrator’s computer security arsenal. It inspects all the inbound and outbound network activity. The IDS identifies any suspicious pattern that may indicate an attack on the system and acts as a security check on all transactions that take place in and out of the system.

Types of IDS

In an IT context, there are four main types of IDS.

Network intrusion detection system (NIDS)

A NIDS is an independent platform that identifies intrusions by examining network traffic and monitors multiple hosts. Network intrusion detection systems gain access to network traffic by connecting to a network hub, a network switch configured for port mirroring or a network tap. In a NIDS, sensors are placed at choke points in the network to monitor, often in the demilitarized zone (DMZ) or at network borders. Sensors capture all network traffic and analyze the content of individual packets for malicious traffic. An example of a NIDS is Snort.

Host-based intrusion detection system (HIDS)

A HIDS consists of an agent on a host that identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, capability databases, access control lists and so on) and other host activities and state. In a HIDS, sensors usually consist of a software agent. Some application-based IDS are also part of this category. An example of a HIDS is OSSEC.

Intrusion detection systems can also be system-specific using custom tools and honeypots. In the case of physical building security, IDS is defined as an alarm system designed to detect unauthorized entry.

Perimeter intrusion detection system (PIDS)

Detects and pinpoints the location of intrusion attempts on perimeter fences of critical infrastructures. Using either electronics or more advanced fiber optic cable technology fitted to the perimeter fence, the PIDS detects disturbances on the fence. If an intrusion is detected and deemed by the system as an intrusion attempt, an alarm is triggered.

VM-based intrusion detection system (VMIDS)

A VMIDS detects intrusions by monitoring virtual machines. It is the most recent type and is still under development. Because monitoring happens at the virtual machine layer, overall activity can be observed without deploying a separate intrusion detection system on every host.

Comparison with firewall

Though they both relate to network security, an intrusion detection system (IDS) differs from a firewall in that a firewall looks outwardly for intrusions in order to stop them from happening. Firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network. An IDS evaluates a suspected intrusion once it has taken place and signals an alarm.

An IDS also watches for attacks that originate from within a system. This is traditionally achieved by examining network communications, identifying heuristics and patterns (often known as signatures) of common computer attacks and taking action to alert operators. A system that terminates connections is called an intrusion prevention system and is another form of an application layer firewall.

Anomaly detection model

All intrusion detection systems use one of two detection techniques:

Statistical anomaly-based IDS

A statistical anomaly-based IDS establishes a performance baseline from evaluations of normal network traffic. It then samples current network traffic activity and compares it to this baseline to detect whether or not it is within baseline parameters. If the sampled traffic is outside baseline parameters, an alarm is triggered.
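
A minimal sketch of the idea, assuming requests-per-minute as the measured metric and a simple three-standard-deviation threshold, is shown below; production systems use far richer baselines, but the principle is the same.

```python
import statistics

# Minimal anomaly-detection sketch: learn a baseline from "normal" traffic,
# then flag samples that deviate by more than three standard deviations.
baseline = [118, 124, 131, 120, 127, 125, 119, 122, 130, 126]   # normal requests/min
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def check(sample_rpm: float) -> str:
    z = (sample_rpm - mean) / stdev
    return "ALERT: outside baseline" if abs(z) > 3 else "within baseline"

print(check(128))   # within baseline
print(check(460))   # ALERT: outside baseline
```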

Signature-based IDS

Network traffic is examined for preconfigured and predetermined attack patterns known as signatures. Many attacks today have distinct signatures. In good security practice, a collection of these signatures must be constantly updated to mitigate emerging threats.
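
The sketch below shows the core of signature matching: payloads are checked against a set of preconfigured patterns. The two signatures are simplified illustrations, not real IDS rules.

```python
import re

# Minimal signature-based detection sketch: scan payloads for known patterns.
SIGNATURES = {
    "sqli-union-select": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal":    re.compile(r"\.\./\.\./"),
}

def inspect(payload: str):
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

print(inspect("GET /index.php?id=1 UNION SELECT password FROM users"))
print(inspect("GET /images/logo.png"))
```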

Indication of intrusions

System intrusions

  • System failures in identifying valid users
  • Active access to unused logins
  • Logins during non-working hours
  • New user accounts created automatically
  • Modifications to system software or configuration files
  • System logs being deleted
  • Drastically decreased system performance
  • Unusual display of graphics or pop-ups
  • Sudden system crashes and reboots without user intervention

File intrusions

  • Identification of unknown files and programs on the system
  • File permission modifications
  • Unexplained changes in file size
  • Presence of strange files in system directories
  • Missing files

Network intrusions

  • Repeated attempts to log in from remote locations
  • Sudden increases in bandwidth consumption
  • Repeated probes of existing services
  • Arbitrary log data in log files

PIM (Privileged Identity Management) / Secure File Transfer

The adoption of cloud technology has forever changed modern identity and access management, with increased data access points, numbers, types and locations of users and privileged accounts.

As a result, data breaches are on the increase in terms of volume and severity. Whilst some attacks are the result of carelessness and a lack of training, the accuracy and volume of phishing attacks mean that we should assume our environment has been, or will be, compromised.

There’s a lot of confusion surrounding PAM and its relation to PIM (privileged identity management), particularly over what they do and where they live within the Microsoft identity space.

This blog will explore the basics of PAM and familiarise you with its variations, giving you a better idea of what they do, where they do it, and why they’re a good idea.

What is privileged access management (PAM)?

We (hopefully) all learned years ago that performing non-administrative duties via an account with admin privileges is NOT a good idea.

For years, we provisioned users with multiple accounts – one for normal use and another (or more) for administrative tasks.

There are multiple reasons why organisations need to monitor and protect the use of these privileged (admin) accounts:

  • A user may log into an insecure computer using a privileged account.
  • A user may, intentionally or unintentionally, browse to a hostile site whilst logged in with a privileged account.
  • A user may set the same password for their privileged and non-privileged accounts making compromise twice as dangerous.
  • In a large organisation, privileged group memberships may become bloated.
  • With no-one monitoring the use of privileged accounts or membership of privileged groups, accounts can be compromised and privileges can be escalated unnoticed.

Privileged accounts come in multiple forms, such as global administrator, domain administrator, local administrator (on servers and workstations), SSH keys (for remote access), break glass (emergency access or firefighter) accounts, and non-IT accounts – these may have privileged access due to the nature of the applications and the type of data being consumed (such as a CFO).

Other privileged accounts which are often overlooked, but are just as vulnerable as the ones mentioned above, include service accounts, system accounts, application accounts, and SSH keys used by automated processes.

The modern approach to protecting these accounts is known as privileged access management or privileged access security (PAS). But you may also hear it called privileged identity management (PIM) or Cloud PAM, depending on where and how it’s applied.

The basic principles of privileged access

Broadly speaking, all PAM approaches follow the same basic principles:

  • Isolation/scoping of privileges: User accounts used for day-to-day work are not assigned privileges. Privileges must be requested and approved or denied based upon policy.
  • Just-in-time administration: Administrators should possess their privileged permissions for the minimum time possible.
  • Just-enough administration: Administrators should only have the permissions that they need to achieve the task at hand.
  • Elimination of permanent membership of administrative groups.
  • Implementation of secure administrative hosts.
  • Provide time-bound access to resources.
  • Require approval and justification to activate privileged access.
  • Enforce multi-factor authentication.
  • Configure notifications for when privileged access is activated.
  • Configure access reviews.
  • Configure audit logging.

So what’s the difference between PIM and PAM? Let’s clear up the confusion around what each provides and what they can (and should) be used for.

In order to protect all of those different accounts mentioned earlier, what we really need is some sort of control, with an audit log, for the IT systems.

If this was a secure physical location that people needed access to, we would put the keys in a box and make people sign them out only when they needed them.

In effect, this is what PIM and PAM do. When a user needs to elevate their privileges, they go to the PIM or PAM site and ask for permission to take the keys. Once this is approved, they are granted the relevant privileges and can do the work. After a set period, the keys are taken back from them and they become a normal user again.

Because the request is audited, it is easy to see who had the keys and when. Mistakes become less likely because the user does not always have higher-level access.
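
A minimal sketch of that request-approve-expire workflow is shown below. The users, role name, and one-hour window are hypothetical, and a real PIM/PAM product would add MFA, policy checks, and tamper-resistant audit storage.

```python
import time
import uuid

# Minimal just-in-time elevation sketch: request, approve for a fixed window,
# and let the grant expire automatically.
audit_log = []
active_grants = {}   # grant_id -> (user, role, expires_at)

def request_elevation(user, role, justification):
    grant_id = str(uuid.uuid4())
    audit_log.append(("requested", user, role, justification, time.time()))
    return grant_id

def approve(grant_id, user, role, approver, duration_s=3600):
    active_grants[grant_id] = (user, role, time.time() + duration_s)
    audit_log.append(("approved", user, role, approver, time.time()))

def has_privilege(grant_id):
    grant = active_grants.get(grant_id)
    if grant and time.time() < grant[2]:
        return True
    active_grants.pop(grant_id, None)   # window elapsed: take the keys back
    return False

gid = request_elevation("alice", "Domain Admin", "patch DC during change window")
approve(gid, "alice", "Domain Admin", approver="bob")
print(has_privilege(gid))   # True only while the time-bound grant is active
```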

So, why do we have both PIM and PAM? Simply put, we have two different directory environments – Active Directory (AD) and Azure Active Directory (AAD). One being on-premises (AD) and one in the Cloud (AAD). PAM deals with elevated privileges on-premises with any system that uses Active Directory to control the access. PIM does the same sort of thing for access to roles in Azure AD.

Easy to remember if you think that ‘pAm’ is Active Directory and ‘pIm’ is Internet.

PIM and PAM can be used to help address the following problems:

  • Pass the hash attacks.
  • Pass the ticket attacks.
  • Spear phishing.
  • Lateral movement attacks.
  • Privilege escalation.

So, PIM and PAM are related but live in two different realms. One provides access to AD resources and one to the Internet. Cousins separated by an internet pipe. Providing access to elevated privileges for the right users, when they need them. Both have their place, but they work independently to control privileged access to services.

FIM (File Integrity Monitoring) / Security Configuration Management

File integrity monitoring was invented in part by Gene Kim and went on to become a security control that many organizations build their cybersecurity programs around. The term “file integrity monitoring” was widely popularized by the PCI DSS standard.

FIM is a technology that monitors and detects changes in files that may indicate a cyberattack. Unfortunately, for many organizations, FIM mostly means noise: too many changes, no context around these changes, and very little insight into whether a change actually poses a risk. FIM is a critical security control, but it must provide sufficient insight and actionable intelligence.

Otherwise known as change monitoring, file integrity monitoring involves examining files to see if and when they change, how they change, who changed them, and what can be done to restore those files if those modifications are unauthorized.

Companies can leverage the control to supervise static files for suspicious modifications such as adjustments to their IP stack and email client configuration. As such, FIM is useful for detecting malware as well as achieving compliance with regulations like the Payment Card Industry Data Security Standard (PCI DSS).

3 Advantages of Running a Successful File Integrity Monitoring Program

  1. Protect IT Infrastructure: FIM solutions monitor file changes on servers, databases, network devices, directory servers, applications, cloud environments, and virtual images, and alert you to unauthorized changes.
  2. Reduce Noise: A strong FIM solution uses change intelligence to only notify you when needed—along with business context and remediation steps. Look for detailed security metrics and dashboarding in your FIM solution.
  3. Stay Compliant: FIM helps you meet many regulatory compliance standards like PCI-DSS, NERC CIP, FISMA, SOX, NIST and HIPAA, as well as best practice frameworks like the CIS security benchmarks.

How File Integrity Monitoring Works (in 5 Steps)

There are five steps to file integrity monitoring:

  1. Setting a policy: FIM begins when an organization defines a relevant policy. This step involves identifying which files on which computers the company needs to monitor.
  2. Establishing a baseline for files: Before they can actively monitor files for changes, organizations need a reference point against which they can detect alterations. Companies should, therefore, document a baseline, or a known good state for files that will fall under their FIM policy. This standard should take into account the version, creation date, modification date, and other data that can help IT professionals provide assurance that the file is legitimate.
  3. Monitoring changes: With a detailed baseline, enterprises can proceed to monitor all designated files for changes. They can augment their monitoring processes by auto-promoting expected changes, thereby minimizing false positives. (A minimal baseline-and-compare sketch follows this list.)
  4. Sending an alert: If their file integrity monitoring solution detects an unauthorized change, those responsible for the process should send out an alert to the relevant personnel who can fix the issue.
  5. Reporting results: Sometimes companies use FIM tools to ensure PCI DSS compliance. In that event, organizations might need to generate reports for audits in order to substantiate the deployment of file integrity monitoring to their assessor.
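
The baseline-and-compare sketch referenced in step 3 is shown below: it hashes the files under a monitored path to establish a known-good state, then rescans and reports anything added, removed, or modified. The monitored path is a placeholder.

```python
import hashlib
import json
import os

MONITORED_DIR = "/etc/example-app"   # hypothetical path covered by the FIM policy

def snapshot(root: str) -> dict:
    """Hash every file under root to capture its current state."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                state[path] = hashlib.sha256(fh.read()).hexdigest()
    return state

def compare(baseline: dict, current: dict) -> dict:
    return {
        "added":    [p for p in current if p not in baseline],
        "removed":  [p for p in baseline if p not in current],
        "modified": [p for p in current if p in baseline and current[p] != baseline[p]],
    }

if __name__ == "__main__":
    baseline = snapshot(MONITORED_DIR)                       # step 2: establish the baseline
    # ... later, after changes may have occurred ...
    changes = compare(baseline, snapshot(MONITORED_DIR))     # step 3: monitor changes
    print(json.dumps(changes, indent=2))                     # steps 4-5: alert and report
```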

4 Things to Look for When Assessing File Integrity Monitoring Tools

To complement the phases described above, organizations should look for additional features in their file integrity monitoring solution. That functionality should include, for example, a lightweight agent that can toggle “on” and “off” and can accommodate additional functions when necessary. The solution should also come with total control over a FIM policy. Such visibility should incorporate:

  • Management: Out-of-the-box policy customizations should come with the solution.
  • Granularity: The product should be capable of supporting different policies according to the device type.
  • Editing: Organizations should have the ability to revise a policy according to their individual requirements.
  • Updates: All systems should quickly update via content downloads.

SIEM (Security Information and Event Management)

A SIEM system ingests log and event data from a wide variety of sources such as security software and appliances, network infrastructure devices, applications, and endpoints such as servers and PCs, to give IT security teams a centralized tool for spotting and responding to security incidents.

How SIEM works

A SIEM has two closely related purposes: to collect, store, analyze, investigate and report on log and other data for incident response, forensics and regulatory compliance purposes; and to analyze the event data it ingests in real time to facilitate the early detection of targeted attacks, advanced threats, and data breaches.

A SIEM works by ingesting and interpreting all that data and incorporating threat intelligence and advanced analytics to correlate events that could signal a cyberattack is underway. The system will then alert security teams of the threat, and potentially suggest responses to mitigate the attack, such as shutting down access to data or machines and applying a missing patch or update.

An example of correlation might be to connect a port scan with access to sensitive data, perhaps in multiple locations, thus adding context to what might otherwise seem to be unrelated events.
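
A toy version of that correlation rule is sketched below: an alert is raised when a port scan and sensitive-data access are seen from the same source within a short window. The events, sources, and threshold are invented for illustration; a real SIEM correlates many more sources and enriches events with threat intelligence.

```python
from datetime import datetime, timedelta

# Minimal correlation sketch: port scan followed by sensitive-data access from
# the same source within 30 minutes raises an alert.
events = [
    {"time": datetime(2024, 1, 10, 9, 0),  "src": "10.1.2.3", "type": "port_scan"},
    {"time": datetime(2024, 1, 10, 9, 12), "src": "10.1.2.3", "type": "sensitive_data_access"},
    {"time": datetime(2024, 1, 10, 9, 30), "src": "10.9.9.9", "type": "sensitive_data_access"},
]

WINDOW = timedelta(minutes=30)

def correlate(events):
    alerts = []
    scans = [e for e in events if e["type"] == "port_scan"]
    for access in (e for e in events if e["type"] == "sensitive_data_access"):
        for scan in scans:
            if scan["src"] == access["src"] and timedelta(0) <= access["time"] - scan["time"] <= WINDOW:
                alerts.append(f"ALERT: {access['src']} scanned ports then accessed sensitive data")
    return alerts

print(correlate(events))
```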

Why is SIEM important?

To get an idea of how important a SIEM is, consider the scale of the security incidents and data involved. A large enterprise may generate more than 25,000 events per second (EPS) and require 50 TB or more of data storage.

A SIEM’s ability to filter through all the data and prioritize the most critical security issues makes security more manageable. An effective SIEM will pay for itself in staff time saved, even as the system itself requires management and tuning.

What to look for in the best SIEM solutions

SIEM products are differentiated by cost, features, and ease of use. Generally, the more you pay, the greater the capabilities and range of coverage, so buyers must weigh their needs, budget and expertise as they decide on a SIEM system. A small business might focus on automation, ease of use and cost, while an enterprise with a sophisticated security operations center (SOC) might focus on the breadth of threats and assets covered and machine learning capabilities for discovering new and emerging threats. Regardless of an organization’s size, deployment and integration of such a complex technology can take time, so the help of consultants and services firms is often needed.

Despite its relative maturity, the SIEM market is still growing at double-digit rates. A major trend is the growing use of behavioral analytics and automation to filter out less urgent alerts so security teams can focus on the biggest threats, with advanced UEBA and SOAR capabilities becoming increasingly common. Analysts see the cloud as a growing means of delivery for SIEM services, both for SMBs and for hybrid organizations seeking easier ways to keep track of their complex environments.

There are many good SIEM products out there, and the leading offerings score closely on overall capability, so your buying decision should be driven by how well a product meets your specific needs. Useful evaluation criteria, roughly in order of importance, include detection, response, management, ease of use, support, value, and deployment.

NAC (Network Access Control)

Network Access Control (NAC) helps enterprises implement policies for controlling devices and user access to their networks. NAC can set policies for resource, role, device and location-based access and enforce security compliance with security and patch management policies, among other controls.
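
A minimal sketch of such a policy decision is shown below: an endpoint's posture is checked against a few baseline requirements and the result is full access, quarantine for remediation, or guest access. The attributes and thresholds are hypothetical.

```python
# Minimal NAC posture-check sketch: decide access based on device identity and
# compliance with a security baseline.
MIN_PATCH_LEVEL = 202401   # e.g. a year-month patch baseline

def access_decision(endpoint: dict) -> str:
    if not endpoint.get("known_device"):
        return "guest VLAN"                     # unknown devices get guest access only
    failures = []
    if not endpoint.get("av_running"):
        failures.append("antivirus not running")
    if not endpoint.get("disk_encrypted"):
        failures.append("disk not encrypted")
    if endpoint.get("patch_level", 0) < MIN_PATCH_LEVEL:
        failures.append("patch level below baseline")
    if failures:
        return "quarantine VLAN (remediate: " + ", ".join(failures) + ")"
    return "full access"

print(access_decision({"known_device": True, "av_running": True,
                       "disk_encrypted": True, "patch_level": 202406}))
print(access_decision({"known_device": True, "av_running": False,
                       "disk_encrypted": True, "patch_level": 202311}))
```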

NAC is an effort to create order out of the chaos of connections from within and outside the organization. Personnel, customers, consultants, contractors and guests all need some level of access. In some cases, it is from within the campus and at other times access is remote. Adding to the complexity are bring your own device (BYOD) policies, the prevalence of smartphones and tablets, and the rise of the Internet of Things (IoT).

NAC was the highest IT security spending priority in eSecurity Planet’s 2019 State of IT Security survey – and is also one of the technologies users have the most confidence in.

Minimum capabilities

According to Gartner, the minimum capabilities of NAC are:

  • Dedicated policy management to define and administer security configuration requirements, and specify the access control actions for compliant and noncompliant endpoints
  • Ability to conduct a security state baseline for any endpoint attempting to connect and determine the suitable level of access
  • Access control so you can block, quarantine or grant varying degrees of access.
  • The ability to manage guest access
  • A profiling engine to discover, identify and monitor endpoints
  • Some method of easy integration with other security applications and components

One trend to watch is the rise of zero trust security products. These new access control tools restrict access to just the data and applications users need rather than granting them access to the entire network, reducing the risk of lateral movement within the network. The market is still new, but Gartner expects sales of these products to begin to gain traction in 2021.

MDM (Mobile Device Management)

Mobile Device Management (MDM) is the trend of managing mobile devices that are used in organizations to access sensitive business data. It includes storing essential information about mobile devices, deciding which apps can be present on the devices, locating devices, and securing devices if lost or stolen.

With the increased adoption of mobile devices, mobile device management (MDM) has now evolved into Enterprise Mobility Management (EMM).

Mobile devices now have more capabilities than ever before, which has ultimately led to many enterprises adopting a mobile-only or mobile-first workforce. In these types of environments, both personal and corporate-owned mobile devices are the primary devices used for accessing or interacting with corporate data. To simplify the management of mobile devices, many businesses use a third-party mobile device management software such as Mobile Device Manager Plus to manage mobile devices.

With a number of enterprises moving to cloud-based infrastructure, the ease of use that mobile devices offer has contributed to mobile devices replacing conventional desktops.

Mobile Device Management (MDM) Software/Solution

MDM software, or an MDM solution, is a type of security and management software used by IT admins to monitor, manage, and secure corporate-owned or personally owned mobile devices.

Mobile devices are portable in nature and ensure work can be done from anywhere. While the portability of mobile devices can offer many advantages, mobile devices also come with their own set of problems, such as unauthorized data access and data leakage. If you want to leverage portability to improve productivity without compromising security, you need a proper mobile device management system set up to simplify the challenge of managing mobile devices.

The right MDM application can make a world of difference for system administrators trying to manage mobile devices. An MDM solution or an MDM server provides a unified console to manage the different device types used in an organization. They let you manage the apps being installed or removed on mobile devices, monitor the devices in the MDM server, configure basic settings on devices, and set up devices that will be used for specific purposes, like point of sale (POS). These solutions are also available with multiple MDM deployment options to meet the requirements of every organization.

Why is Mobile Device Management (MDM) important?

The main purpose of MDM, or mobile device management, is to allow enterprises to improve employee productivity by letting employees access corporate data on the go from corporate-owned or personally owned mobile devices. Key benefits include:

  • Ease of deployment

MDM solutions can be deployed on-premises or in private or public cloud environments, providing enterprises with the convenience of choosing a deployment method that caters to their business’ specific needs.

  • Efficient Integrations

Many MDM solutions seamlessly integrate with help desk ticketing software, app development tools, and other business solutions.

  • Manage multiple device types

Simplified mobile device management requires support for multiple OSs, such as iOS, Android, Windows, macOS, tvOS, and Chrome OS, as well as multiple device types, such as tablets, laptops, and smartphones.

How does Mobile Device Management (MDM) software work?

Mobile device management (MDM) uses a client-server architecture: the devices act as clients while the MDM server remotely pushes configurations, apps, and policies, managing the devices over-the-air (OTA). IT admins can remotely manage mobile endpoints such as laptops, tablets, and mobile phones via the MDM server, which leverages the notification services available on each platform to contact the managed devices.
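
The sketch below illustrates that client-server idea in miniature: a policy payload defined on the server side is serialized, delivered to a hypothetical device-side agent, and applied setting by setting. The policy keys are invented for illustration; real MDM platforms use each vendor's notification service, such as APNs for iOS, to tell devices a payload is waiting.

```python
import json

# Minimal MDM push sketch: server-side policy payload and a device-side agent
# that applies it on receipt.
policy = {
    "require_passcode": True,
    "min_passcode_length": 6,
    "blocked_apps": ["com.example.untrusted"],
    "wifi": {"ssid": "corp-wifi", "security": "WPA2"},
}

payload = json.dumps(policy)          # what the MDM server pushes over the air

def apply_policy(raw: str) -> None:
    settings = json.loads(raw)        # what the device-side agent does on receipt
    for key, value in settings.items():
        print(f"applying {key} -> {value}")

apply_policy(payload)
```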

DLP (Data Loss Prevention) / Data Classification / Rights Management

DLP solutions can be used to ensure access policies meet regulatory compliance requirements, including HIPAA, GDPR, and PCI-DSS. They can also go beyond simple detection, providing alerts, enforcing encryption, and isolating data.

Other features common in DLP solutions include:

  • Monitoring—tools provide visibility into data and system access.
  • Filtering—tools can filter data streams to restrict suspicious or unidentified activity.
  • Reporting—tools provide logging and reports helpful for incident response and auditing.
  • Analysis—tools can identify vulnerabilities and suspicious behavior and provide forensic context to security teams.

DLP solutions can be helpful in a variety of use cases, including:

  • Security policy enforcement—DLP tools can help you identify deviations from policy, making it easier to correct misconfigurations.
  • Meeting compliance standards—DLP tools can compare current configurations to compliance standards and provide proof of measures taken.
  • Increasing data visibility—DLP tools can provide visibility across systems, helping you ensure that data is secure no matter where it’s stored.

Trends Driving DLP Policy Adoption

  • Growth of the CISO role—as organizations appoint Chief Information Security Officers (CISO), they become responsible for leaks, and use a DLP policy as a tool to gain visibility and report on organizational data.
  • Evolving compliance requirements—new regulations are introduced all the time, for example GDPR in Europe, and the NYDFS Cybersecurity Regulations in New York State. DLP policies can help comply with these new regulations.
  • There are more places to protect your data—businesses today use tools that are difficult to monitor, such as supply chain networks and cloud storage. This makes data protection more difficult. Knowing exactly which data crosses organizational boundaries is critical to preventing misuse.
  • Data exfiltration is a growing risk—sensitive data is an attractive target for attackers. The number of attempted and successful breaches at organizations of all sizes is rapidly growing.
  • Insider threats—data loss is increasingly caused by malicious insiders, compromised privileged accounts or accidental data sharing.
  • Stolen data is worth more—the Dark Web allows adversaries to buy and sell stolen information. Data theft is a profitable business.
  • More data to steal—the scope and definition of sensitive data has grown over time. Sensitive data now covers intangible assets, for example business methodologies and pricing models.
  • Security talent shortage—many businesses are finding it difficult to fill security-related roles. In recent surveys by ESG and ISSA, 43% of organizations surveyed were affected by the talent shortage. This makes automated tools like DLP more attractive.

Building Your Data Loss Prevention Policy—How to Develop a DLP Strategy

Individuals in organizations are privy to company information and can share this information, which can lead to accidental or intentional data loss. The distributed nature of today’s computer systems magnifies the problem.

Modern storage can be accessed from remote locations and through cloud services; laptops and mobile phones contain sensitive information and these endpoints are often vulnerable. It is becoming increasingly difficult to ensure that data is secure, making a data loss prevention strategy so important.

3 REASONS FOR IMPLEMENTING A DATA LOSS PREVENTION POLICY

  1. Compliance
    Businesses are subject to mandatory compliance standards imposed by governments (such as HIPAA, SOX, and PCI DSS). These standards often stipulate how businesses should secure Personally Identifiable Information (PII) and other sensitive data. A DLP policy is a basic first step toward compliance, and most DLP tools are built to address the requirements of common standards.
  2. Intellectual property and intangible assets
    An organization may have trade secrets, other strategic proprietary information, or intangible assets such as customer lists, business strategies, and so on. Loss of this type of information can be extremely damaging, and accordingly, it is directly targeted by attackers and malicious insiders. A DLP policy can help identify and safeguard critical information assets.
  3. Data visibility
    Implementing a DLP policy can provide insight into how stakeholders use data. In order to protect sensitive information, organizations must first know it exists, where it exists, who uses it and for what purposes.

TIPS FOR CREATING A SUCCESSFUL DLP POLICY

  • Classifying and interpreting data—Identify which information needs to be protected, by evaluating risk factors and how vulnerable it is. Invest in classifying and interpreting data, because this is the basis for implementing a suitable data protection policy.
  • Allocate roles—clearly define the role of each individual involved in the data loss prevention strategy.
  • Begin by securing the most sensitive data—start by selecting a specific kind of information to protect, which represents the biggest risk to the business.
  • Automate as much as possible—the more DLP processes are automated, the more broadly you’ll be able to deploy them in the organization. Manual DLP processes are inherently limited in their scope and the amount of data they can cover.
  • Use anomaly detection—some modern DLP tools use machine learning and behavioral analytics, instead of simple statistical analysis and correlation rules, to identify abnormal user behavior. Each user and group of users is modeled with a behavioral baseline, allowing accurate detection of data actions that might represent malicious intent.
  • Involve leaders in the organization—management is key to making DLP work, because policies are worthless if they cannot be enforced at the organizational level.
  • Educate stakeholders—putting a DLP policy in place is not enough. Invest in making stakeholders and users of data aware of the policy, its significance and what they need to do to safeguard organizational data.
  • Documenting DLP strategy—documenting the DLP policy is required by many compliance standards. It also provides clarity, both at the individual and organizational level, as to what is required and how the policy is enforced.
  • Establish metrics—measure DLP effectiveness using metrics like percentage of false positives, number of incidents and Mean Time to Response.
  • Don’t save unnecessary data—a business should only use, save and store information that is essential. If information is not needed, remove it; data that was never stored cannot go missing.

4 Best Practices for Implementing a DLP

  1. DATA CLASSIFICATION MUST BE CENTRAL TO DLP EXECUTION

Before you implement a DLP solution, pay special attention to the nature of sensitive information, and determine how it flows from one system to another. Identify how information is transferred to its consumers—this will reveal transmission paths and data repositories. Use labels or categories such as “employee data”, “intellectual property”, and “financial data” to classify sensitive data.

Make sure to investigate and record all data exit points. Organizational processes may not be documented, and not all data movement is the outcome of a routine practice.
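
A minimal sketch of pattern-based classification is shown below: text is scanned for patterns that suggest sensitive data and tagged with labels like those mentioned above. The patterns are deliberately simplified; a real DLP engine also validates matches, for example with a Luhn check on card-number candidates.

```python
import re

# Minimal data-classification sketch: label text based on simple sensitive-data patterns.
PATTERNS = {
    "financial data": re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-number-like digits
    "employee data":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like pattern
    "contact data":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def classify(text: str):
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

print(classify("Reimburse card 4111 1111 1111 1111 for jane.doe@example.com"))
```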

  2. ESTABLISH POLICIES UPFRONT

Engage IT and business staff in the early stages of policy development. This stage of the process should include identifying:

  • Data categories that have been singled out
  • Steps that need to be implemented to combat malpractice
  • Future growth of the DLP strategy
  • Steps that need to be taken if there is an abnormal occurrence.

Before the DLP strategy is put into practice, it is essential to establish incident management processes and ensure they are practical for every data category.

  3. HOW TO START

Start DLP implementation by monitoring organizational data. This lets you fine-tune and anticipate the effect that the DLP may have on organizational culture and operations. By jumping the gun, and blocking sensitive information too soon, you may harm central business activities.
