SecureIQLab Cloud Web Application Firewall (WAF) CyberRisk Validation – Summer 2022
AMTSO Test ID
Statement from Test Lab
|Vendor||Product||Status|
|Akamai||Web Application Protector||included|
|Barracuda||CloudGen Level 5 m4.large||included|
|F5||BIG-IP Virtual Edition||participant|
|Microsoft||Azure Application Gateway WAF v2||participant|
|Sucuri||Website Firewall Professional||included|
|Wallarm||Web Application and API Protection (WAAP)||included|
These vendors did not choose to adopt Participant status under the AMTSO Standard, but may have engaged with the test lab in other ways.
|Commentary||Start date||End date|
|Phase 1 Commentary||2022-06-20||2022-06-28|
|Phase 2 Commentary||2022-11-07||2022-11-15|
I have no specific comment on the conduct of this evaluation, which was particularly well done. I do, however, have a comment on how the false-positive metric is reported: the false-positive score feeds only into Operational Efficiency (OE) for the ROSI calculation. This makes sense, since false positives have a significant impact on operational effectiveness, but it would also be interesting to have them reflected in the Global Security (GS) score. Without them, the GS score essentially measures only whether the solutions correctly detect attacks (a recall or catch-rate metric), rather than being an accuracy metric that accounts for all dimensions of the evaluation (TP, TN, FP, FN), such as the Matthews Correlation Coefficient (MCC). Of course, combining the OE and GS metrics does provide good information, but reporting WAF accuracy via a single metric, such as an MCC score, alongside the OE would be very interesting.
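To illustrate the suggestion above, here is a minimal sketch (not part of the test methodology; the confusion-matrix counts used in the usage example are invented for illustration) of how an MCC score could be computed from the four confusion-matrix counts of a WAF evaluation:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from a 2x2 confusion matrix.

    Ranges from -1 (total disagreement) to +1 (perfect classification);
    unlike a pure catch rate (recall), it accounts for TP, TN, FP and FN.
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # By convention, MCC is defined as 0 when any marginal sum is zero.
    return numerator / denominator if denominator else 0.0

# Hypothetical example: a WAF that blocks 90 of 100 attacks (10 FN)
# and wrongly blocks 5 of 100 legitimate requests (5 FP).
score = mcc(tp=90, tn=95, fp=5, fn=10)
```

A perfect detector (no FP, no FN) yields an MCC of 1.0, while the hypothetical WAF above scores roughly 0.85, with both its misses and its false alarms pulling the single number down.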