SecureIQLab Cloud Web Application Firewall (WAF) CyberRisk Validation – Summer 2022

Test Lab


Test Title

SecureIQLab Cloud Web Application Firewall (WAF) CyberRisk Validation – Summer 2022






Akamai, AWS, Barracuda, Cloudflare, F5, Fortinet, Google, Imperva, Microsoft, Oracle, Prophaze, Stackpath, Wallarm

Publication date


Statement from Test Lab

More than 9,000 attacks were tested against each of the 14 validated products. Individual reports simplify and summarize our findings and include group averages for context. Individual reports for the 14 tested solutions are projected to be published over the next few weeks, culminating in a comparative report. The comparative report will provide a high-level comparison of security efficacy, operational efficiency, and return on security investment (ROSI).

Tested products

Vendor      Product                                      Vendor status
Akamai      Web Application Protector                    included
AWS         AWS WAF                                      included
Barracuda   CloudGen Level 5 m4.large                    included
F5          BIG-IP Virtual Edition                       participant
Google      Cloud Armor                                  included
Imperva     Cloud WAF                                    participant
Microsoft   Azure Application Gateway WAF v2             participant
Prophaze    Business WAF                                 participant
Sucuri      Website Firewall Professional                included
Wallarm     Web Application and API Protection (WAAP)    included

AMTSO Standard compliance info

Notification issued


Notification method

Publicly posted test plan, Contact list notification

Test plan

Commencement date



“Participant” Vendors

These Vendors chose to adopt Participant status under the AMTSO Standard, gaining certain guaranteed rights in return for attestations.

“Included” Vendors


These Vendors did not choose to adopt Participant status under the AMTSO Standard, but may have engaged with the test lab in other ways.

Commentary dates
Commentary           Start date   End date
Phase 1 Commentary   2022-06-20   2022-06-28
Phase 2 Commentary   2022-11-07   2022-11-15

Commentary received

Vendor    Commentary phase
Imperva   Phase 2

I have no specific comment on the conduct of this evaluation, which was particularly well done. I do, however, have a comment on the way the false-positive metric is reported. The false-positive score is reported only with Operational Efficiency (OE) for the ROSI calculation. This makes sense because false positives have a significant impact on operational effectiveness, but it would also be interesting to have it reported with the Global Security (GS) score. Without it, the GS score is essentially a metric measuring only whether the solutions correctly detect attacks (a recall metric, or catch rate) and not an accuracy metric taking into account all dimensions of the evaluation (FP, FN, TP, TN), like a Matthews Correlation Coefficient (MCC), for instance. Of course, combining the OE and GS metrics does give good information, but having WAF accuracy via a single metric, such as an MCC score, plus the OE, would be very interesting.
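The distinction the commentary draws can be illustrated with a short sketch. The counts below are hypothetical, not results from this test: recall (catch rate) considers only TP and FN, so a WAF that blocks most attacks scores well on it even if it also flags many legitimate requests, while MCC folds all four outcomes (TP, TN, FP, FN) into one score.

```python
import math

def recall(tp, fn):
    # "Catch rate": fraction of attacks correctly detected.
    return tp / (tp + fn)

def mcc(tp, tn, fp, fn):
    # Matthews Correlation Coefficient: a balanced accuracy measure over
    # all four outcomes (TP, TN, FP, FN); ranges from -1 to +1.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# Hypothetical counts: a WAF that blocks most attacks but also
# blocks many legitimate requests (high false-positive count).
tp, tn, fp, fn = 900, 500, 500, 100
print(round(recall(tp, fn), 3))       # 0.9  -- looks strong on catch rate
print(round(mcc(tp, tn, fp, fn), 3))  # 0.436 -- MCC penalizes the FPs
```

In this sketch the same product scores 0.9 on recall but only about 0.44 on MCC, which is the gap the commentary argues a single accuracy metric would make visible alongside the OE score.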

AMTSO Standard compliance status

Confirmed Compliant with AMTSO Standard v1.3 (Compliance Report)