Recently, Normalyze, a data-first cloud security platform, came out of stealth with $22.2M in Series A funding. This was the perfect time to catch up with co-founder and CEO Amer Deeba.
In this interview with Help Net Security, he talks about the path to data security, as well as visibility challenges.
Modern organizations have their data scattered throughout various cloud environments. What advice would you give to a CISO who wants to achieve visibility?
Visibility is the first step to establishing data security and protection across multi-cloud environments. Therefore, start by using a tool that can easily connect to your cloud environments to continuously discover data stores and keep that inventory up to date.
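To make "continuous discovery" concrete, here is a minimal sketch of one way it could work; the connector callables and store names below are invented for illustration, not any specific product's API. Each pass merges current listings from every connected cloud into a known inventory and reports stores seen for the first time.

```python
# Sketch of continuous data-store discovery across cloud connectors.
# The connectors are stand-in callables returning store identifiers;
# in practice they would call each cloud provider's inventory APIs.

def discover(connectors, known):
    """Run one discovery pass; return newly found store IDs, sorted."""
    current = set()
    for list_stores in connectors:
        current |= set(list_stores())
    new = current - known
    known |= current          # inventory persists across passes
    return sorted(new)

# Hypothetical connectors for two cloud accounts
aws = lambda: ["s3://billing-logs", "rds://customers"]
gcp = lambda: ["gcs://ml-training-data"]

known = set()
print(discover([aws, gcp], known))   # first pass: everything is new
print(discover([aws, gcp], known))   # second pass: nothing new
```

Running this on a schedule is what turns one-off discovery into the ongoing visibility described above: only the delta between passes needs attention.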
Some data discovery solutions give you only metadata, which is a good place to start, but comprehensive data classification based on sensitive content provides the additional context so you can focus on protecting what is important first.
When it comes to data security, keep in mind the 3 Vs of data – volume, variety and velocity. So, make sure the solution you are using can handle these 3 characteristics effectively:
Volume: Data volumes in cloud environments keep growing, so the solution you pick must be able to handle large volumes of data and scale up or down as needed to perform both discovery and classification.
Variety: The solution you pick must discover all types of data stores – structured and unstructured – across all 3 public clouds, and must be able to connect natively to scan and classify the data where it resides without any impact on performance or data privacy.
Velocity: Data velocity refers to the speed at which data is generated and collected. Hence, your solution needs to be able to understand various data structures and provide sampling capabilities to classify the data accurately and efficiently.
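As a hedged illustration of the sampling idea mentioned under velocity (the detectors and threshold below are assumptions for the sketch, not a specific product's logic): rather than scanning every record, a classifier can label a store from a sample of its rows using content detectors.

```python
import re

# Illustrative content detectors; real classifiers cover many more types.
DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_sample(rows, sample_size=100, threshold=0.1):
    """Label a data store from a row sample: a label applies when the
    fraction of sampled rows matching its detector exceeds the threshold."""
    sample = rows[:sample_size]          # in practice, a random sample
    labels = set()
    for name, pattern in DETECTORS.items():
        hits = sum(1 for row in sample if pattern.search(row))
        if sample and hits / len(sample) > threshold:
            labels.add(name)
    return labels

rows = ["alice@example.com, premium", "bob@example.com, trial", "no pii here"]
print(classify_sample(rows))   # {'email'}
```

The sampling bound is what keeps classification efficient at high data velocity: cost grows with the sample size, not with the size of the store.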
Lastly, ask your solution provider how much it will cost to scan the petabytes of data you have in your cloud environment, to evaluate its cost efficiency. Since data scanning and classification run on your own compute to maintain data privacy, it is important to understand the cost parameters upfront before choosing a data security solution.
What are the main challenges when it comes to identifying all the access paths to sensitive data? What should organizations secure first?
Data access happens in multiple layers, and risky access can arise for many reasons, such as:
- Wrong authorizations at the application layer
- Wrong permissions at the infrastructure IAM layer
- Misconfigurations at the network layer
- Vulnerabilities in the app layer giving access to the infrastructure layer
- Misconfigurations in the IAM layer giving access to the network layer
The sheer number of nodes and access paths in enterprise cloud environments creates complexity that, combined with the less-than-ideal information available to administrators, typically leads to one or more of the above causes of data breaches. Therefore, organizations should focus on identifying the most valuable data and securing all assets around it. Then achieve least privilege on the data itself, followed by least privilege, recursively, for each asset with access to that data.
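A minimal sketch of the "start at the data and work outward" idea, using a toy acyclic access graph (the node names and edges are invented for illustration): walk the graph backwards from the sensitive store to enumerate every path an identity has to it.

```python
from collections import deque

# Toy access graph: edge A -> B means A can access B. Real graphs come
# from IAM policies, network rules, and app-layer authorizations.
ACCESS = {
    "dev-role":   ["app-server"],
    "admin-role": ["customer-db"],
    "app-server": ["customer-db"],
    "ci-runner":  ["app-server"],
}

def paths_to(target):
    """Return every access path ending at the target data store (BFS).
    Assumes an acyclic graph for brevity."""
    # Invert the edges so we can walk from the data outward.
    inbound = {}
    for src, dsts in ACCESS.items():
        for dst in dsts:
            inbound.setdefault(dst, []).append(src)
    paths, queue = [], deque([[target]])
    while queue:
        path = queue.popleft()
        sources = inbound.get(path[0], [])
        if not sources:
            paths.append(path)   # reached a root identity
        for src in sources:
            queue.append([src] + path)
    return paths

for p in paths_to("customer-db"):
    print(" -> ".join(p))
```

Enumerating paths this way makes the recursion above concrete: each hop on each path is a place where least privilege can be tightened, starting from the nodes closest to the data.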
The larger the organization, the more apps and services it probably uses. The result? It seldom knows where all of its data resides. What are the possible repercussions of this situation?
This is increasingly becoming an issue in large organizations, as data sprawl is on the rise due to a variety of trends, including the proliferation of data, an explosion of microservices, rapid cloud adoption, hybrid work environments, compliance, and more. As a result, shadow and abandoned data stores are popping up everywhere in cloud environments with excessive privileges that could lead to expensive data breaches. Data visibility becomes paramount in order to detect, classify and secure cloud data.
How do you expect data visibility to evolve in the next few years?
In the next few years, cloud security tools will evolve to put data at their center. Such tools will enable CISOs to visualize the data across their cloud environments and understand how the data is used and for what purpose. This will also help avoid data compliance fines, such as the recent Twitter incident where the company was fined $150M for wrongly using consumers' phone numbers for ad targeting.
In addition, data provenance and usage will become an important part of privacy regulations and new tools will emerge that will certify correct usage of data based on provenance.
Lastly, compliance programs will start using these tools on a continuous basis to provide ongoing monitoring and assurance in order to avoid violating privacy regulations and to secure data at rest and in motion.