NeverHack
Data Engineering · Cybersecurity · Real-Time · Consultancy
My first job out of university. I joined Open3s as a data engineer building security pipelines for enterprise clients. The work was mostly ELK-based detection systems, Airflow orchestration, Splunk dashboards, and automated incident response. I spent most of my time with the data team but jumped into ML and infrastructure projects when other teams needed help. Open3s was later acquired by NeverHack.
The Company
Open3s (now NeverHack) is a European cybersecurity company that provides security consulting and managed SOC services, and builds internal tooling for threat detection. Most clients are large Spanish enterprises, including several IBEX 35 companies.
My Role
I started as a junior and was promoted to lead projects within my first year. I ended up owning two products: a real-time threat detection platform and an application monitoring system for banks and public entities. I also helped other teams with ML models, consulting work, and infrastructure when they got stuck.
Projects
I was the technical lead on two main products, both serving enterprise clients with different security needs:
- vSOC Platform: Real-time threat detection processing 150M+ events daily. Built on ELK, Airflow, and Siemplify with sub-minute alerting.
- App Monitoring: Monitoring platforms for banks and public entities. Crash detection, behavioral analytics, and usage forecasting with Splunk and Python.
Technical Approach
Security data has a particular property: failures are dangerous, not just inconvenient. An hour of missed logs might contain the early signs of a breach. So everything we built prioritized reliability first, speed second.
We designed for 99.99% uptime with redundant pipelines and graceful degradation. When something broke, the system kept ingesting data while we fixed it. No blind spots.
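The "keep ingesting while we fix it" idea can be sketched in a few lines. This is a minimal, hypothetical illustration (class and file names are mine, not the actual implementation): when the primary sink rejects a batch, events spill to a durable local spool instead of being dropped, and get replayed once the sink recovers.

```python
import json

class SpoolingSink:
    """Wraps a primary sink; spills failed batches to a replayable spool.

    Illustrative sketch of graceful degradation: ingestion never stops,
    and no events are lost while the primary sink is down.
    """

    def __init__(self, primary, spool_path="events.spool"):
        self.primary = primary          # callable: primary(batch) -> None
        self.spool_path = spool_path

    def write(self, batch):
        try:
            self.primary(batch)
        except Exception:
            # Degrade gracefully: persist the batch for later replay.
            with open(self.spool_path, "a") as f:
                for event in batch:
                    f.write(json.dumps(event) + "\n")

    def replay(self):
        """Re-send spooled events once the primary sink recovers."""
        try:
            with open(self.spool_path) as f:
                events = [json.loads(line) for line in f]
        except FileNotFoundError:
            return 0
        self.primary(events)
        open(self.spool_path, "w").close()  # truncate only after success
        return len(events)
```

The real pipelines did this with redundant paths rather than a single spool file, but the invariant is the same: a failure downstream must never create a blind spot upstream.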
Latency mattered almost as much. A threat detected in real-time can be contained. The same threat an hour later might already be exfiltrating data. We pushed for sub-minute detection on critical events.
The other big focus was automation. SOC analysts face thousands of events per shift, mostly noise. We built playbooks that handled routine cases automatically so analysts could focus on actual threats.
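The triage logic behind those playbooks can be sketched roughly like this. The rule names, thresholds, and IPs below are hypothetical, not the production rules: each alert runs through ordered predicates, routine cases are closed or ticketed automatically, and anything unmatched escalates to a human.

```python
# Example allowlist of known benign scanners (illustrative IPs only).
KNOWN_SCANNERS = {"198.51.100.7", "203.0.113.9"}

# Ordered (predicate, action) pairs: first match wins.
ROUTINE_RULES = [
    (lambda a: a["type"] == "failed_login" and a["count"] < 5, "close:below_threshold"),
    (lambda a: a["source_ip"] in KNOWN_SCANNERS, "close:known_scanner"),
    (lambda a: a["type"] == "av_quarantine", "ticket:already_contained"),
]

def triage(alert):
    """Return the automated playbook action for an alert, or escalate it."""
    for predicate, action in ROUTINE_RULES:
        if predicate(alert):
            return action
    return "escalate:analyst"
```

The production playbooks ran in Siemplify rather than raw Python, but the shape was the same: encode the routine decisions analysts make dozens of times per shift, and reserve human attention for the unmatched tail.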
Results
After a year of work, these were the headline results:
- 150M+ daily events processed in real time
- 99.99% yearly uptime across every environment
- 50% reduction in manual analyst workload
- Scaled to 10 enterprise customers
- Crash detection reduced from hours to seconds
- Sub-minute threat detection
- 50+ Airflow DAGs coordinating the security workflow
- 50+ playbooks automating incident response
- 10+ ML/DL models serving different use cases
What I Learned
The key learning is that volume without context is noise. Processing 150M events isn't the hard part. Finding the 10 that matter is. This tension between catching everything and not drowning in false positives shaped how I think about data systems.
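One way to make "context turns volume into signal" concrete: the same raw event should score very differently depending on asset criticality and prior history. A toy sketch, with made-up hosts, weights, and IPs standing in for the real enrichment data:

```python
# Hypothetical context tables (the real ones came from CMDB data and
# prior alert history, not hard-coded dicts).
ASSET_CRITICALITY = {"payments-db": 10, "dev-sandbox": 1}
RECENT_OFFENDERS = {"203.0.113.50"}  # IPs seen in earlier alerts

def score(event):
    """Weight a raw event by type, asset criticality, and correlation."""
    base = {"auth_failure": 2, "priv_escalation": 8}.get(event["type"], 1)
    base *= ASSET_CRITICALITY.get(event["host"], 3)
    if event["source_ip"] in RECENT_OFFENDERS:
        base *= 2  # correlated with earlier activity on the same source
    return base

def top_alerts(events, k=10):
    """Surface only the k highest-scoring events to analysts."""
    return sorted(events, key=score, reverse=True)[:k]
```

An auth failure on a sandbox scores near zero; a privilege escalation on a payments database from a previously seen source dominates the queue. That ranking step, not the raw throughput, is what turned 150M events into a reviewable shortlist.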
Relatedly, automation in security isn't about replacing analysts; it's about respecting their time. Every playbook we automated and every false positive we filtered gave someone hours back for actual security work. In some cases an analyst still checked the affected service manually, just without the same urgency.
On a personal level, I grew up fast here. I went from writing my first production DAG to leading projects for IBEX 35 companies in under a year. The team rewarded initiative over perfection, so I took on projects beyond my technical depth at the time and figured them out along the way, with support when I needed it. I got comfortable asking questions, shipping imperfect solutions, and owning mistakes publicly.
I also developed a real appreciation for operational discipline. When downtime means undetected breaches, you learn to build systems that fail gracefully. You learn to monitor everything. You learn to design for the 3am scenario.