Pass the SALT 2019 live report (part 2/3)
2nd edition of the security conference in Lille
July 02, 2019
Time-efficient assessment of open-source projects for Red Teamers
by Thomas Chauchefoin and Julien Szlamowicz (slides)
In their pentest team, Julien often does Red Team assessments with large scopes, frequently facing a blue team. The talk is a case study on the work they did assessing GLPI, and the methodology they used.
GLPI is a GPLv2 inventory tool often used by sysadmins, widely deployed in France and Brazil, which made it an interesting target.
In Red Team assessments, discretion is key, as opposed to traditional pentests, where noise does not matter as much. Thomas says the forensic footprint should be as low as possible. Therefore, a good Red Team vulnerability should be silent.
One aspect of assessing open-source software is that you don't work with a black box, and it's easier to replicate an accurate environment in a lab. In this case, the analyzed attack surface comprised only unauthenticated code paths. PHP apps often have scripts that are directly accessible. They used an internal tool to help find public-facing paths, and also looked at previous GLPI vulnerabilities.
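Their internal tool wasn't shown, but a rough sketch of this kind of surface mapping could walk the webroot and flag PHP scripts that lack an inclusion guard. The guard pattern below is an assumption for illustration, not necessarily what GLPI uses:

```python
import os
import re

# Hypothetical heuristic: a PHP file without the "included only" guard
# may be a directly invocable, public-facing entry point.
GUARD = re.compile(r"defined\(\s*['\"]GLPI_ROOT['\"]")

def exposed_scripts(webroot):
    hits = []
    for dirpath, _, files in os.walk(webroot):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                if not GUARD.search(fh.read()):
                    hits.append(path)  # candidate unauthenticated surface
    return sorted(hits)
```

In practice such a list would be cross-checked against the web server configuration, since rewrite rules and access restrictions also shape what is reachable.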
They didn't have semantic tooling, but used various hooks for DB queries and low-level PHP functions, as well as profilers, to do the analysis. They also wrapped $_POST objects in order to automatically detect bad usage.
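The idea of instrumenting request parameters can be sketched in Python (class and method names are mine for illustration, not the speakers' tooling):

```python
class TaintedParams:
    """Wrap request parameters and record every access, mimicking the
    idea of instrumenting $_POST to spot unsanitized usage."""
    def __init__(self, params):
        self._params = dict(params)
        self.accesses = []  # (key, sanitized?) audit trail

    def raw(self, key):
        self.accesses.append((key, False))  # flagged: used unsanitized
        return self._params[key]

    def sanitized(self, key, clean):
        self.accesses.append((key, True))
        return clean(self._params[key])

params = TaintedParams({"id": "1 OR 1=1"})
value = params.raw("id")  # would show up in the audit trail as unsafe
safe = params.sanitized("id", lambda v: int(v.split()[0]))
```

Reviewing the recorded accesses then points directly at the code paths worth auditing for injection.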
The first issue they found was an information leak that exposed the versions of GLPI, PHP and the OS. Then they found an SQL injection, which wasn't immediately usable because the query parameters were sanitized. This sanitization could still be bypassed in a few cases, which they achieved with another injection.
They then looked at how the "Remember me" feature was implemented, with JSON stored in a cookie. This allowed controlling the password verification algorithm, thereby enabling a denial of service on the server. It also relied on PHP loose comparisons, which can be abused when comparing strings that start with '0e' (see my writeup of 'La simplicité' in SIGSEGv1 CTF).
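The '0e' trick comes from PHP's loose `==` coercing numeric-looking strings to numbers: any string shaped like "0e&lt;digits&gt;" is zero in scientific notation. Python's float() does the same parsing, so the effect can be shown without PHP (the two strings are commonly cited "magic hash" examples):

```python
# Two different hex digests that both look like 0 x 10^N to a numeric parser.
a = "0e462097431906509019562988736854"
b = "0e830400451993494058024219903391"

assert a != b                          # as strings, they differ
assert float(a) == float(b) == 0.0     # as numbers, both are zero
# PHP's loose comparison ("0e..." == "0e...") behaves like the float case,
# which is why such hashes compare equal and can bypass naive checks.
```

The fix in PHP is to use strict comparison (`===`) or `hash_equals()` when comparing hashes.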
They also found a Local File Inclusion (LFI) issue in a query, which then allowed calling arbitrary functions.
The communication with the GLPI maintainers was very smooth: they were quick to react and apply patches, even if the fixes took some time to arrive in released versions.
by Orange Tsai (slides)
Jenkins is the most used CI/CD software in the world. It's a very interesting target because it has access to source code and may have access to credentials or compute nodes. It has been exploited in the wild.
The most common attack vector is a dictionary attack on the login page. Next come previously known vulnerabilities, like the deserialization bugs, of which there were many instances because the initial fixes were blacklist-based. Serialization has since been reworked, replacing Java serialization with an HTTP API.
Orange then decided to review the Jenkins core code, starting with the router. He found an issue in how crafted URLs were mapped to class names and methods. But since access paths were whitelisted for unauthenticated users, he had to find a path through whitelisted objects in order to reach a dangerous invocation.
The next step was to find another vulnerability to chain with, in order to reach code execution on the server. For this, he looked at Pipeline, a DSL built on top of Groovy that allows writing reproducible Jenkins scripts that can be tracked in a VCS.
Pipeline scripts must have a valid syntax in order to be interpreted: this is simple to achieve, but the only path he found did parsing, not execution. To bypass that, Orange used Groovy meta-programming: the @ASTTest annotation allowed executing code at parse time. Finally, he found an issue in the @Grab implementation that allowed injecting a jar file fetched from a URL.
After the vulnerabilities were reported and fixed, other researchers found new vulnerabilities that provided more generic entry points and eased exploitation. Unfortunately, public exploitation of these issues was common, including the infamous hack of the Matrix infrastructure, because many people were slow to update their Jenkins instances.
by Jean-Baptiste Kempf (slides)
VLC is the most popular video player, and its popularity comes from the fact that it can read most video formats, even incomplete files. Jean-Baptiste estimates that it has more than 450 million users, even though there is no telemetry to get an exact count, because that would be "spying on users".
VLC itself is about 1 million lines of code, but its many (100+) dependencies bring the total to more than 15 million lines of code, including C, C++ and handcrafted ASM, of varying quality.
VLC development happens on a mailing list, with relatively long review processes. Most developers run static and dynamic analysis, and fuzzing has been added recently. Hardening started with 3.0, from PIE code to fixing most warnings of modern compilers, and enabling ASLR, DEP, etc.
The release process is very strict, with offline signing and very well defined steps.
Despite Jean-Baptiste's hatred of bug bounties, VLC participated in the EU-FOSSAv2 program, with a twist: they decided to add bonuses for researchers who provided fixes. The bug bounty yielded 31 security issues, one of which was classified as high severity. The program was successful in the end.
The best researchers were very good, but the worst were very bad, going as far as insults or death threats. In general, half of the security reports are "total crap", Jean-Baptiste says. There's also a tendency to overblow security issues, from inflated CVSS scores to click-bait articles. Impact evaluation is also very poor: even very hard-to-exploit issues are given CVSS scores of up to 9.8 without PoCs. Jean-Baptiste followed with more examples of bad behaviour from parts of the infosec community.
A research project inside VLC is to add a sandbox, segmenting the different parts of the player so they run with different permissions; hopefully this should improve the overall security, but it is a complex endeavour.
OSS in the quest for GDPR compliance
by Aaron Macsween (slides)
Aaron started by saying he was filling in for Cristina Delisle, the original author of the talk, who couldn't make it.
Privacy and security are often "added at the end" of a project, which doesn't work and has terrible consequences. There's no single fix for this, since these domains are complicated and often interdependent. For both, one must evaluate the threat model: what you're protecting, for how long, from whom, etc. In some cases, you need to choose between privacy and security.
One example, Aaron says, is that you might optimize for security by reducing privacy through surveillance: think of what your bank does with financial transactions.
At the other end of the spectrum, you can optimize for privacy with less security, by having web services with no authentication, like privacy-friendly pastebins or MEGA's encrypted uploads.
CryptPad, of which Aaron is the lead developer, is a real-time collaboration tool like Etherpad, but with encryption. The browser-based "thick" client does most of the work, with an append-only log data structure on the server. It has many extensions, from read/write/delete features to file-hosting capability. It's used by hackerspaces and activist groups, and was funded by a grant from BPI France, the NLnet Foundation, and donations.
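The server-side append-only log can be sketched minimally as follows; this is an illustration of the data structure, not CryptPad's actual implementation:

```python
class AppendOnlyLog:
    """The server only stores opaque (client-encrypted) messages and can
    replay them; it never interprets or rewrites history."""
    def __init__(self):
        self._entries = []

    def append(self, ciphertext):
        self._entries.append(ciphertext)
        return len(self._entries) - 1  # offset a client can resume from

    def replay(self, since=0):
        return self._entries[since:]

log = AppendOnlyLog()
log.append(b"enc(op1)")
offset = log.append(b"enc(op2)")
log.append(b"enc(op3)")
```

Because entries are encrypted client-side, the server can order and relay edits without ever seeing document contents, which is what keeps most of the work in the "thick" client.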
The European privacy regulation has been in effect since May 2018, and has made Aaron's job much easier by raising awareness of privacy issues, he says.
The strategy in CryptPad is data minimization: reducing what's needed at a given moment, for example by doing peer-to-peer conflict resolution instead of a server-based one.
The Data Protection Officer (DPO) role (Cristina's) can be adversarial, but is always useful: for example, it forces auditable traces around the handled data. The data controllers are the DPO's employers, the ones handling the data, and the data processors are any third parties handling the data, like hosting or payment-processing companies.
Aaron says there are still a few areas of uncertainty, like when a self-hoster becomes a data controller, or how to challenge "legitimate use", which has a fuzzy definition in the law.
TLS 1.3: Solving new challenges for next generation firewalls
by Nicolas Pamart
Nicolas is presenting joint work with Damien Deville and Thomas Malherbe on how they adapted their firewall and IPS product to work with TLS 1.3.
The Intrusion Prevention System inside the proprietary Stormshield product does TLS analysis: it looks at the data in the ClientHello and ServerHello handshake packets to get the client and server certificates. With TLS 1.3, it's no longer possible to get the server certificate just by looking at network traffic.
Since they didn't want to decrypt the traffic, in order to stay passive, they elected to buffer the ClientHello and replay it once the connection was approved. To get the server certificate, the IPS contacts the destination server with the same SNI and cipher list, but with its own KeyShare. It can then make a decision and replay the original ClientHello so that the client's connection can be established. A cache was added on top of that, so only one request is issued per domain per time period.
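The per-domain cache could be sketched like this; the TTL value and interfaces are assumptions for illustration, not Stormshield's implementation:

```python
import time

class DomainCertCache:
    """At most one active probe result per domain per time window."""
    def __init__(self, ttl=300.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._store = {}  # domain -> (expiry, certificate)

    def get(self, domain, fetch):
        now = self.clock()
        entry = self._store.get(domain)
        if entry and entry[0] > now:
            return entry[1]      # fresh: no extra probe to the server
        cert = fetch(domain)     # probe with the IPS's own KeyShare
        self._store[domain] = (now + self.ttl, cert)
        return cert
```

This keeps the extra load on destination servers bounded, at the cost of a window during which a certificate change would go unnoticed.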
To handle TLS 1.3 session resumption, where the SNI may not be provided, there is also an SNI coherence layer, which is a cache of SNI presence.
In a response to a question, Nicolas said that encrypted SNI with DNSSEC might completely break this feature of the IPS.
Lookyloo: A complete solution to investigate complex websites - with a decent UI
by Quinn Norton and Raphaël Vinot (slides)
Lookyloo is a UI to visualize the requests made by complex websites. It shows exactly which URLs are loaded when contacting a website. It's built on top of Splash and the ETE Toolkit.
When looking at a tree, you can see when a request switches to insecure mode, or how many ad toolkits are loaded.
It can be used to detect when websites use a technique to bypass TLS mixed-content warnings, or when there are transparent HTTP meta redirects.
It can help popular sites analyze what resources are pulled by the single ad-network snippet they put on their front page. Every time you load a page it might change, and Lookyloo allows capturing the requests, saving them, and analyzing them offline.
Quinn and Raphaël showed an example where a very popular website showed a GDPR warning, but still loaded dozens of resources before user consent was given.
The rumps are five-minute talks on various subjects.
by Eloïse Brocas and Eric Leblond
bpfctrl is a new tool to analyze and manipulate eBPF maps loaded in the Linux kernel. Eloïse built it as a wrapper on top of bpftool. It's higher-level and written in a mix of C and Rust. Such a tool was missing in Suricata to debug which traffic was being filtered.
$0.02 DNS Firewall with MISP
by Xavier Mertens (slides)
Xavier recommends running your own resolver and logging all queries, because everything goes through the DNS. With RPZ (Response Policy Zones), it's possible to filter malicious domains by returning fake addresses.
There are plenty of sources of malicious domains, but Xavier chose to use MISP, an incident response and sharing platform, to manage this list. A script extracts the malicious domains, which are then used by the BIND configuration. Xavier posted about his configuration here.
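A minimal BIND RPZ setup of this kind might look as follows; the zone name, file paths and sinkhole addresses are illustrative, not Xavier's actual configuration. First, named.conf enables the policy zone:

```
// named.conf: enable the response policy zone
options {
    response-policy { zone "rpz.local"; };
};

zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};
```

The zone file then rewrites answers for the listed domains (this is the part a script can regenerate from the MISP export):

```
$TTL 300
@   IN SOA localhost. root.localhost. (1 3600 600 86400 300)
    IN NS  localhost.
; redirect a malicious domain (and subdomains) to a sinkhole address
evil.example.rpz.local.    IN A     127.0.0.1
*.evil.example.rpz.local.  IN A     127.0.0.1
; or force NXDOMAIN with a CNAME to the root
bad.example.rpz.local.     IN CNAME .
```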
Gamebuino as a keyboard
by Antoine Cervoise (slides)
Antoine presented how he added keyboard functionality to his Gamebuino, an open console based on Arduino, and used it to run automated commands, with support for different keyboard layouts.
by Alexandre Brianceau (slides)
Rudder is an open-source continuous compliance auditing and configuration platform.
Why fuzz Rust code?
by Pierre Chifflier (slides)
Are Rust's memory safety and a test suite enough to ensure correctness? Pierre says no: every error should be handled properly without crashing. That's why Rust code should be fuzzed; the cargo-fuzz crate can help with that, especially when combined with code-coverage analysis.
Pierre says fuzzing is necessary, but neither sufficient, nor a starting point. You should also share the fuzzing corpus, because it has a lot of value.
by Alexandre Dulaunoy (slides)
CIRCL crawls a lot of websites, including on Tor, where they take many screenshots of webpages. They have a lot of data, but need to analyze it. Total Recall is a tool for large-scale image comparison and classification, used to find phishing sites that look like popular websites.
by Simon Heilles (slides)
Simon went over a few ways to manipulate humans. Understanding the different techniques is very useful to protect oneself.
That's it for part 2. Part 3 is continued here.