Virus Bulletin publishes first web filter test report

Posted by   Martijn Grooten on   Feb 19, 2016

[Original Post: https://www.virusbulletin.com/blog/2016/02/virus-bulletin-published-first-corporate-web-filter-test-report/]

After a lot of preparation, Virus Bulletin is proud to have published the first “VBWeb” comparative web filter test report, in which products’ ability to block web-based malware and drive-by downloads was tested. Fortinet’s FortiGuard appliance was the first product to achieve a VBWeb certification.

Virus Bulletin has been testing security products for more than 18 years, and in recent years, we have had many requests from product developers asking us to test their web security products. After all, whether malicious software is downloaded directly from websites or through sneaky drive-by downloads, the web remains an important infection vector.

In response to those requests we have built a new test suite to add to our existing VB100 and VBSpam tests.

The new test, called VBWeb, measures products’ ability to block malware spreading through HTTP. The test’s current focus is (corporate) gateway solutions that run on the network as an implicit or explicit proxy. We are looking to extend the test in the future to on-desktop and in-browser solutions.

[Image: VBWeb verified logo]

Given how quickly web-based threats change, and given how many of them actively attempt to frustrate researchers (something we have also frequently run into), building a web security product is not a trivial task, and submitting such a product to a public test isn’t something vendors do without serious consideration.

In this test, while there were several participants, the developers of Fortinet's FortiGuard appliance were alone in agreeing for their product to be tested publicly. Their confidence in the product proved to be well founded: it blocked all but a few of hundreds of malicious downloads, as well as a significant number of live exploit kits.

Indeed, with an 83% catch rate – well over the 70% threshold required for VBWeb certification – FortiGuard is a clear and deserved winner of the very first VBWeb award.

[Image: FortiGuard block page shown for a malicious website]

You can read the full report here in HTML format, or download it here as a PDF. The report describes the testing methodology in full detail.

From now on, VBWeb will be run every second month. Product developers who are interested in submitting a product for the test (publicly or privately) can contact Virus Bulletin's Editor Martijn Grooten at martijn.grooten@virusbtn.com.

VBSpam report has good news for users of email security solutions

Of course, HTTP isn’t the only infection vector systems administrators have to be worried about. But in the case of email, there is good news: all 16 participating full solutions achieved certification in the latest VBSpam test, which saw record catch rates. Ten products even achieved a VBSpam+ award, after blocking more than 99.5% of spam while also avoiding false positives.
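The full VBSpam methodology is described in the report itself; as a simplified sketch of the two headline metrics only (the function names and the illustrative counts below are assumptions for this example, not VB's actual scoring code), the catch rate and false positive rate can be computed from raw message counts:

```python
def spam_catch_rate(spam_blocked: int, spam_total: int) -> float:
    # Percentage of spam messages the product stopped
    return 100.0 * spam_blocked / spam_total

def false_positive_rate(ham_blocked: int, ham_total: int) -> float:
    # Percentage of legitimate ("ham") messages wrongly blocked
    return 100.0 * ham_blocked / ham_total

# Hypothetical counts, for illustration only
catch = spam_catch_rate(99_620, 100_000)   # 99.62%
fp = false_positive_rate(0, 5_000)         # 0.0%

# The VBSpam+ bar described above: more than 99.5% of spam blocked
# while also producing no false positives.
print(catch > 99.5 and fp == 0.0)          # True
```

In practice the test weighs false positives far more heavily than missed spam, since a single blocked legitimate email can be costlier than many delivered spam messages.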

You can read the full report here in HTML format, or download it here as a PDF.

[Image: VBSpam quadrant, January 2016]

IBM Invents ‘Resistive’ Chip That Can Speed Up AI Training By 30,000x

IBM researchers Tayfun Gokmen and Yurii Vlasov have published a paper proposing the concept of a new chip, called a Resistive Processing Unit (RPU), that could accelerate the training of deep neural networks by up to 30,000x compared with conventional CPUs.

A deep neural network (DNN) is an artificial neural network with multiple hidden layers. It can be trained in a supervised or unsupervised way, resulting in a machine learning (or artificial intelligence) system that can "learn" on its own.
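To make the "multiple hidden layers" idea concrete, here is a toy sketch of a DNN's layered forward pass (this is a generic illustration in NumPy, not IBM's implementation; the layer sizes and ReLU activation are arbitrary choices for the example):

```python
import numpy as np

def relu(x):
    # Element-wise rectified linear activation
    return np.maximum(0, x)

def forward(x, weights):
    # Pass the input through each hidden layer in turn;
    # every hidden layer is a linear transform followed by a nonlinearity.
    for W in weights[:-1]:
        x = relu(x @ W)
    # Final layer: plain linear output (e.g. class scores)
    return x @ weights[-1]

rng = np.random.default_rng(0)
# Three hidden layers of 8 units between a 4-dim input and a 2-dim output
sizes = [4, 8, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes, sizes[1:])]

batch = rng.standard_normal((5, 4))   # 5 example inputs
out = forward(batch, weights)
print(out.shape)                      # (5, 2)
```

Training means repeatedly nudging every entry of every weight matrix via backpropagation; it is exactly this weight-update step, performed across millions of parameters, that the RPU concept aims to accelerate by storing and updating weights in place.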

This is similar to the approach Google's AlphaGo AI used to learn to play Go. AlphaGo combined a tree-search algorithm with two deep neural networks, each with multiple layers of millions of neuron-like connections. One, the "policy network," calculated which move had the highest chance of helping the AI win the game, while the other, the "value network," estimated the probability of winning from a given board position, reducing how far ahead the search needed to look.

Many machine learning researchers have begun focusing on deep neural networks because of their promising potential. However, even Google’s AlphaGo still needed thousands of chips to achieve its level of intelligence. IBM researchers are now working to power that level of intelligence with a single chip, which means thousands of them put together could lead to even more breakthroughs in AI capabilities in the future.

“A system consisted of a cluster of RPU accelerators will be able to tackle Big Data problems with trillions of parameters that is impossible to address today like, for example, natural speech recognition and translation between all world languages, real-time analytics on large streams of business and scientific data, integration and analysis of multimodal sensory data flows from massive number of IoT (Internet of Things) sensors,” noted the researchers in their paper.

The authors talked about how in the past couple of decades, machine learning has benefited from the adoption of GPUs, FPGAs, and even ASICs that aim to accelerate it. However, they believe further acceleration is possible by utilizing the locality and parallelism of the algorithms. To do this, the team has borrowed concepts from next-generation non-volatile memory (NVM) technologies such as phase change memory (PCM) and resistive random access memory (RRAM).

The acceleration for Deep Neural Networks that is achieved from this type of memory alone reportedly ranges from 27x to 2,140x. However, the researchers believe the acceleration could be further increased if some of the constraints in how NVM cells are designed were removed. If they could design a new chip based on non-volatile memory, but with their own specifications, the researchers believe the acceleration could be improved by 30,000x.

“We propose and analyze a concept of Resistive Processing Unit (RPU) devices that can simultaneously store and process weights and are potentially scalable to billions of nodes with foundry CMOS technologies. Our estimates indicate that acceleration factors close to 30,000 are achievable on a single chip with realistic power and area constraints,” said the researchers.

As this sort of chip is still in the research phase, and next-generation non-volatile memory has not yet reached the mainstream market, it will probably be a few years before we see something like it for sale. However, the research looks promising, and it may attract the attention of companies such as Google, which wants to accelerate its AI research as much as possible. IBM itself is interested in solving Big Data challenges in healthcare and other domains, so the company's own businesses should also benefit from this research in the future.

Source: Tom's Hardware

http://www.tomshardware.com/news/ibm-chip-30000x-ai-speedup,31484.html
