- Details
- Written by: Meena
- Category: Cybersecurity PRISM

Cybersecurity Analytics is a marriage of two disparate fields: Data Analytics and Cybersecurity.
You may know that 'Data Analytics' is the process of examining DATASETS to draw conclusions about the information they contain. Valuable insights can be derived from uncovering and examining data patterns. Analysts categorize analytics as descriptive, diagnostic, predictive, or prescriptive to help them utilize data in many innovative ways. In general, Data Analytics can help companies better understand the purchasing habits of their customers, measure the efficacy of their advertising campaigns, discover new markets, develop new products, and much more.
Cybersecurity, on the other hand, is the practice of defending your organization's digital assets against malicious attacks. It employs various techniques, strategies, processes, and tools to diagnose, predict, and prevent unauthorized access to networks, systems, and devices.
Thus, you can safely say that Cybersecurity Analytics is concerned with the use of data analytics to achieve a cybersecurity objective. It is a powerful tool born of a deep understanding of data that can describe cybersecurity risks, diagnose vulnerabilities, predict future malicious behavior, and prescribe protective remedies.
Cybersecurity Analytics has evolved over the last few decades to become the basis for essential cybersecurity solutions and practices. It has provided a crucial understanding of bad actors, their techniques, and behaviors.
It is the application of BIG DATA ANALYTICS, rather than computer science or programming, that sets cyber analytics apart from traditional cybersecurity methodologies. To be sure, both disciplines examine the same exploits, vulnerabilities, threats, and attack methods. Still, for a cyber data scientist, these challenges are viewed through the lens of big data security analytics.
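To make this concrete, here is a minimal, hypothetical Python sketch of descriptive security analytics: counting failed logins per source IP in an exported log and flagging outliers. The file name auth.csv and its columns are invented for illustration, not taken from any particular product.

```python
# A toy illustration of descriptive security analytics: flag source IPs whose
# failed-login counts deviate sharply from the norm. The file 'auth.csv' and
# its columns (src_ip, event) are hypothetical.
import pandas as pd

events = pd.read_csv("auth.csv")  # hypothetical log export

# Count failed logins per source IP (descriptive analytics).
failures = (
    events[events["event"] == "login_failed"]
    .groupby("src_ip")
    .size()
)

# Flag IPs more than three standard deviations above the mean --
# a crude diagnostic/predictive signal of brute-forcing.
threshold = failures.mean() + 3 * failures.std()
suspects = failures[failures > threshold]
print(suspects.sort_values(ascending=False))
```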
-
What is cybersecurity analytics?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

When the U.S. military was engaged in the Vietnam War, some of its efforts were led by a team named Purple Dragon. This team noticed a phenomenon: their adversaries were seemingly able to anticipate their battle strategies and tactics successfully. The question arose, HOW?
They were able to establish that the Vietnamese fighters could neither decrypt US military communications, nor did they have any intelligence assets inside the US military to collect intelligence from within. Then how were they able to anticipate the US military's moves? In the end, the Purple Dragon team arrived at one conclusion: US forces themselves were revealing vital information to the enemy 'inadvertently.'
I hope that you haven't missed the word 'inadvertently' here.
-
What is Operational Security?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

What is Identity & Access Management (IAM)?
It is about your USERS (read: employees)... their identities and their respective access.
As you already know, when any user's credentials (user name, password) are compromised, a new gate opens for hackers to enter your company's network and attack your most valuable data and resources. Identity and Access Management (IAM) is one such tool, used by most companies to ward off attackers and to protect their data and people.
In a very simplistic way, you can say that IAM is a framework of security policies, processes, and technologies that enables your organization to manage the 'digital identities' of your users and to control their access to critical corporate information. It works by assigning your users 'specific roles' and ensuring they have the right level of access to corporate resources and networks so that they can carry out their roles effectively. Thus, IAM improves the user experience and security at the same time.
The core idea of IAM is to assign one 'digital identity' to each individual or device. On the basis of this digital identity, it grants, modifies, and monitors access levels and privileges through each user's access life cycle.
An IAM platform is capable of verifying and authenticating individuals on the basis of their 'roles' and 'contextual information' such as geography, time of day, or (trusted) networks. It can capture and record the login events of all users, allows you to assign access privileges to your users or to remove them, and monitors any changes to those privileges.
As I just said above, IAM is a great tool for building 'Role-Based Access Control' (RBAC), where access is defined by job title, level of authority, and responsibility within your business. However, you may not be aware that these platforms are also capable of automatically de-provisioning the access rights of any user who departs from your organization or whose role changes within it. Thus they prevent many security risks.
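To illustrate the RBAC idea just described, here is a minimal, self-contained Python sketch. The role names, permissions, and helper functions are invented for illustration; a real IAM platform does far more (authentication, auditing, contextual policies), but the provision/de-provision flow looks conceptually like this:

```python
# A minimal sketch of RBAC: access derives from a user's role, not the person,
# and de-provisioning revokes everything at once. All names are hypothetical.
ROLE_PERMISSIONS = {
    "hr_manager": {"read_payroll", "edit_payroll"},
    "engineer":   {"read_code", "push_code"},
    "contractor": {"read_code"},
}

users = {}  # user -> assigned role (one digital identity per user)

def provision(user: str, role: str) -> None:
    """Assign a role to a user's digital identity."""
    users[user] = role

def deprovision(user: str) -> None:
    """Remove all access when a user leaves or changes roles."""
    users.pop(user, None)

def is_allowed(user: str, permission: str) -> bool:
    """Check a permission via the user's role."""
    role = users.get(user)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())

provision("alice", "engineer")
print(is_allowed("alice", "push_code"))   # True
deprovision("alice")                      # Alice departs the organization
print(is_allowed("alice", "push_code"))   # False: access automatically revoked
```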
-
What Are The Key Components Of An IAM Platform?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

The classical approach to network security was to isolate your internal network from the rest of the world, i.e., the internet, using a firewall. Your firewall succeeded in preventing outsiders from coming into your network while still allowing your internal users to connect to external networks. For many long years, this approach worked for most organizations.
But this approach is no longer effective, because now most users bring their own devices (BYOD), which are managed and used by the users themselves, not by your network administrators. When network admins have no control over how these devices are used or how to deploy security measures on them, neither the device nor your network is safe. There are countless instances of successful 'phishing' attacks in which corporate users were lured or tricked into entering their 'credentials,' which let hackers penetrate organizations' systems. Such phishing attacks are an everyday phenomenon worldwide.
Since the classical network security approach depended heavily upon your hardware, primarily your firewall, a new and far more robust approach to security has emerged in the form of Software-Defined Perimeters (SDPs).
-
What is SDP?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

If you reflect upon modern IT networks, you will immediately realize that they are quite complex. They are built from various combinations of components, e.g., routers, switches, firewalls, and servers, and also include cloud-related resources such as virtual machines (VMs), hypervisors, containers, etc. Most of these elements are present in the network simultaneously and are interconnected by various means.
The moment you think of securing such complex networks, you immediately realize that it is critical to monitor all these components carefully around the clock. From your perspective, each component of a modern network increases your attack surface. And for every hacker and threat actor, this complexity of networks, and the resulting challenge of 'visibility,' creates numerous opportunities to attack and exploit your network.
Not only this: if any of these devices fails, the performance of your network is immediately hindered, so staying on top of the performance of each element of the network is critical to the smooth, uninterrupted operations of your organization.
You have no choice but to know and monitor the traffic of your own network.
Your network traffic is the amount of data moving across your computer network at any given point in time. You all know that this traffic consists of data packets that are sent over your network and then re-assembled by the receiving computer or device.
But you need to look at your network traffic through various lenses. Traffic affects the quality of your network, because an unusually high amount of traffic can result in slow download speeds or spotty Voice over Internet Protocol (VoIP) connections. Traffic is also related to security, because an unusually high amount of traffic could be the sign of an attack.
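As a small illustration of watching traffic volume, here is a hedged Python sketch using the cross-platform psutil library (pip install psutil). It samples the OS interface counters every second and prints throughput; the alert threshold is an arbitrary example value, not a recommendation:

```python
# Sample network interface counters once per second and report throughput.
import time
import psutil

THRESHOLD_BYTES_PER_SEC = 10_000_000  # example value; tune for your network

last = psutil.net_io_counters()
while True:
    time.sleep(1)
    now = psutil.net_io_counters()
    rx = now.bytes_recv - last.bytes_recv  # bytes received this second
    tx = now.bytes_sent - last.bytes_sent  # bytes sent this second
    print(f"in: {rx} B/s  out: {tx} B/s")
    if rx + tx > THRESHOLD_BYTES_PER_SEC:
        print("unusually high traffic -- worth investigating")
    last = now
```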
Before you get into details, you need to get acquainted with some conceptual framework here. There is your data-center...
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

An endpoint is any device that connects to your corporate network from outside your firewall, e.g., computers, laptops, tablets, mobile devices, servers, printers, IoT devices, POS systems, switches, ATMs, industrial machines, medical devices, and other devices that communicate with your corporate network.
They encompass any machine or connected device that could conceivably connect to your corporate network. And for hackers, these endpoints are particularly lucrative entry points to your business networks and systems. It is therefore vital for your organization to consider every device that is or could be connected to your network and ensure it is protected.
Every endpoint that connects to your corporate network is a vulnerability, providing a potential entry point for cyber criminals. Therefore, every device an employee uses to connect to any business system or resource carries the risk of becoming the chosen route for hacking into your organization. These devices can be exploited by malware that could leak or steal sensitive data from your business.
-
What is Endpoint Security?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

SCADA (Supervisory Control And Data Acquisition) is a category of software applications for industrial process control: the gathering of data in 'real time' from remote locations in order to control equipment and conditions.
SCADA is a system of software and hardware elements that allows industrial organizations to:
- Control industrial processes locally or at remote locations
- Monitor, gather, and process real-time data
- Directly interact with devices such as sensors, valves, pumps, motors, and more through human-machine interface (HMI) software
- Record events into a log file
SCADA systems are used by industrial organizations and companies in the public and private sectors to control and maintain efficiency, distribute data for smarter decisions, and communicate system issues to help mitigate downtime. SCADA is used in power plants as well as in oil and gas refining, food and beverage, telecommunications, transportation, water and waste control, manufacturing, recycling, pharmaceutical/biotech, HVAC and commercial building management, energy pipelines and utilities, energy management and refrigeration, and more.
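The following toy Python sketch mirrors the pattern in the list above: poll a sensor in near real time, take a control action, and record events into a log file. The read_sensor() function and the pressure limit are hypothetical stand-ins for a real device driver and setpoint:

```python
# Toy SCADA-style loop: poll, act, log. All device details are simulated.
import logging
import random
import time

logging.basicConfig(filename="scada_events.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def read_sensor() -> float:
    """Hypothetical stand-in for reading a pressure sensor."""
    return random.uniform(0.0, 10.0)

PRESSURE_LIMIT = 8.0  # example setpoint

for _ in range(60):                # poll once per second for a minute
    pressure = read_sensor()
    logging.info("pressure=%.2f", pressure)               # record to log file
    if pressure > PRESSURE_LIMIT:
        logging.warning("limit exceeded, closing valve")  # control action
        # close_valve() would command the actuator in a real system
    time.sleep(1)
```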
-
Evolution of SCADA systems
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

First there was DAP...
Directory Access Protocol was a protocol for accessing information in a directory service based on the X.500 recommendations. It specified how an X.500 Directory User Agent (DUA) communicates with a Directory System Agent (DSA) to issue a query. Using DAP, network users were able to view, modify, delete, and search for information stored in the X.500 directory if they had suitable access permissions.
But DAP was a complex protocol with a lot of overhead. That is why it was generally considered 'unsuitable' for implementation in a Microsoft Windows environment. So a group of developers set out to create a less complex replacement for DAP, and in 1993 they created LDAP. The new protocol used far less code and was more accessible to people using desktop computers. Since then, LDAP has remained very popular, to the extent that LDAPv3 became a directory services 'standard.' It also inspired the creation of OpenLDAP, the leading open-source directory services platform, and laid the foundation on which Microsoft built 'Active Directory' in the late 1990s. LDAP has also been crucial to the development of cloud-based directories.
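For a feel of what talking LDAP looks like in practice, here is a minimal sketch using the ldap3 Python library (pip install ldap3). The server address, bind DN, password, and base DN are placeholders you would replace with your own directory's values:

```python
# Bind to an LDAP directory and search for a user's entry.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)   # placeholder host
conn = Connection(server,
                  user="cn=admin,dc=example,dc=com",  # placeholder bind DN
                  password="secret",                  # placeholder credential
                  auto_bind=True)

# Search the directory subtree for a user and read two attributes.
conn.search("dc=example,dc=com",
            "(uid=jdoe)",
            attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```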
-
What is LDAP?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

Remote Desktop is about being able to connect to and use a desktop computer that is far away from you. It allows you to access your desktop, open and edit files, and use the applications installed on it, as if you were actually sitting at that desktop, without being there.
It is quite common to use remote desktop nowadays. People frequently use it to access their office computers when they are working from home or travelling.
However, remote desktop access is not the same as cloud computing. You should remember that cloud computing is a much better option vis-à-vis remote desktops. Still, many companies and their employees use remote desktop access to carry out many of their routine tasks.
Remote desktop software can use several different protocols, including the Remote Desktop Protocol (RDP), Independent Computing Architecture (ICA), and Virtual Network Computing (VNC), but RDP is the most commonly used. RDP was initially released by Microsoft and is available for most Windows operating systems, but it can be used with Mac operating systems too.
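As a tiny practical aside: RDP listens on the well-known TCP port 3389, so a quick reachability check can be written with nothing but the Python standard library. The host name below is a placeholder:

```python
# Check whether a host accepts TCP connections on 3389, the RDP port.
import socket

def rdp_port_open(host: str, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to port 3389 succeeds."""
    try:
        with socket.create_connection((host, 3389), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_port_open("desktop.example.com"))  # placeholder host
```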
-
What is Remote Desktop Protocol?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

All modern businesses need higher levels of speed and reliability from their networks. To achieve these, they depend on aggregation.
Within the context of networking, aggregation is about using two connections in parallel. Aggregation provides you two benefits:
1. You can fall back on the other connection if one of the connections fails for any reason (see the sketch below).
2. You can increase performance by boosting the overall 'throughput' of the connections.
In real life, you would implement aggregation at two levels: LINK Aggregation and WAN Aggregation.
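The failover benefit in point 1 can be illustrated with a toy Python sketch. Note that real link aggregation happens at the network layer (e.g., via LACP), not in application code; this only mirrors the idea, and the URLs are placeholders:

```python
# Try the primary link first; fall back to the secondary if it fails.
import urllib.request

LINKS = [
    "http://primary-gateway.example.com/health",   # placeholder URLs
    "http://backup-gateway.example.com/health",
]

def fetch_with_failover(links: list[str]) -> bytes:
    last_error: Exception | None = None
    for url in links:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()      # first healthy link wins
        except OSError as exc:
            last_error = exc            # link down: try the next one
    raise RuntimeError("all links failed") from last_error

print(fetch_with_failover(LINKS))
```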
What is LINK Aggregation?
Read more: What is LINK Aggregation? What is WAN Aggregation? How Does WAN Aggregation Work?
- Details
- Written by: Meena
- Category: Cybersecurity PRISM

Before 1980, mainframe computers were used throughout the MIT campus for research work, but they were not available for undergraduates to use, except in Course VI (computer science) classes. MIT wanted to make computers available to most undergraduates too, but no department of the university was taking any interest, except three departments, all related to computer science, engineering, and electrical engineering. Then two things happened.
First, in 1982 the School of Engineering approached DEC to donate equipment for 'itself.' DEC agreed to contribute more than 300 terminals, 1,600 microcomputers, 63 minicomputers, and five employees.
Second, the MIT Corporation wanted the project to benefit the rest of the university too, and approached IBM to donate equipment for the rest of MIT. IBM agreed to contribute 500 microcomputers, 500 workstations, software, five employees, and grant funding.
With all this, Project Athena began in May 1983. The project was intended to extend computing power into fields of study outside computer science and engineering, such as foreign languages, economics, and political science.
Initial goals of Project Athena were to:
1. Develop computer-based learning tools that are usable in multiple educational environments
2. Establish a base of knowledge for future decisions about educational computing
3. Create a computational environment supporting multiple hardware types
4. Encourage the sharing of ideas, code, data, and experience across MIT
MIT built computer labs for their users, although the goal was to put networked computers into each student dormitory.
When Project Athena ended in June 1991, MIT's IT department took it over and extended it into the university's research and administrative divisions too. The system they built, ATHENA, is still used by many in the MIT community through the computer labs scattered around the campus. It is also now available for installation on personal computers, including laptops.
You might be wondering why I am sharing all this. There is a reason: Project Athena made highly significant contributions to modern computing as we know and use it today. If we use modern terminology, you will better understand what they created for us. Here they are:
- 'Client–server model' of distributed computing using three-tier architecture (read: multi-tier architecture)
- 'Thin client' (stateless) desktops
- System-wide security system (e.g., Kerberos encrypted authentication and authorization)
- Naming service (e.g., Hesiod)
- X Window System, widely used within the Unix community
- X toolkit for easy construction of human interfaces
- Instant messaging (e.g., Zephyr real-time notification service)
- System-wide use of a directory system
- Integrated system-wide maintenance system (e.g., Moira Service Management System)
- On-Line Help system (OLH)
- Public bulletin board system (e.g., Discuss)
This post is focused only on the 'Kerberos' system...
-
What is Kerberos?
- What is ACL? How Do ACLs Work? What are the important components of ACLs?
- What is 802.1x authentication? What are the key components of 802.1x Authentication? What is the security of 802.1x?
- What is FTP? What Are Various Types of FTP? How to secure your FTP connections?
- What about Security of SD-WAN? 4 Major Security Concerns of SD-WAN
- What is Ethernet Switching? How Do Ethernet Switches Work?
- What is MPLS? How does MPLS work? Is MPLS Layer 2 or Layer 3?
- How Does RADIUS Work? What is RADIUS protocol?
- What is a Firewall as a service (FWaaS)? How does a FWaaS work?
- What is Data Integrity? Why is Data Integrity valued so highly in InfoSec?
- What is an SSL Certificate? How do SSL certificates work?