The cloud is a bargain, but it’s not cheap

According to findings from Tariff Consultancy, the average cloud computing price for enterprises has dropped by two-thirds since 2014. The firm found that an average entry-level cloud computing instance currently costs 12 cents per hour for Windows users, with cloud services now employed by enterprises across a range of crucial applications.

The cost of the public cloud appears to have stabilized. Amazon Web Services, Google, and Microsoft offer comparable entry-level compute-instance pricing, as do other providers.

Despite such low prices, I still hear complaints from IT about the cost of cloud.

The actual cost of the cloud is not the services themselves; that’s a small part. The real money goes to people, tools, time, and risk mitigation. IT shops that look only at the cost of AWS versus Microsoft are missing a huge part of the equation. They’re the ones that get sticker shock when they see the entire cost of a cloud migration, including new development and operations, which has little to do with the price of compute instances.

I advise that you consider the cost holistically. That means working up a well-thought-out TCO (total cost of ownership) model that considers all aspects of the cost of moving to the cloud, such as people, migration, security, operations, and testing. You need to then balance that cost with the value of agility and time to market, which are typically huge for most enterprises.
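
To make that concrete, here is a minimal sketch of such a TCO comparison in Python. Every figure is a hypothetical placeholder (instance counts, salaries, migration costs), not data from the article; the point is simply that the compute line item is a small slice of the total.

    # Minimal TCO sketch. All figures are hypothetical placeholders,
    # chosen only to illustrate the shape of the comparison.

    def cloud_tco(years: int = 3) -> float:
        compute = 0.12 * 24 * 365 * 50 * years   # 50 entry-level instances at $0.12/hr
        people = 2 * 120_000 * years             # cloud ops engineers
        tooling = 40_000 * years                 # monitoring, security, and testing tools
        migration = 250_000                      # one-time migration and retraining
        return compute + people + tooling + migration

    def on_prem_tco(years: int = 3) -> float:
        hardware = 400_000                       # server, storage, and network refresh
        people = 3 * 110_000 * years             # data-center ops staff
        facilities = 60_000 * years              # power, cooling, floor space
        return hardware + people + facilities

    print(f"Cloud, 3-year:   ${cloud_tco():,.0f}")   # compute is roughly 13% of the total
    print(f"On-prem, 3-year: ${on_prem_tco():,.0f}")

Even with generous placeholder numbers, the instance bill is dwarfed by the people and process costs, which is exactly where the sticker shock comes from.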

That analysis usually shows that moving to the cloud is more cost-effective than leaving the applications and data where they are, in your data center. But using the cloud is not as cheap as it may seem from those compute prices.

Source: InfoWorld

http://www.infoworld.com/article/3023039/cloud-computing/the-cloud-is-a-bargain-but-its-not-cheap.html

Intel Unveils Skylake vPro CPUs With Triple-Hardened ‘Intel Authenticate’ Security

Intel announced that its 6th Gen (Skylake) Core vPro processors are now available for purchase. With Skylake, Intel opted to introduce a new security technology called Intel Authenticate to increase the amount of protection provided by vPro processors.

The Intel Authenticate software is still in development, but users can now preview and test it themselves. The software uses multiple authentication factors to determine who is accessing the system and ensure it is the correct person. It supports up to three hardened factors: an item you have, such as a smartphone or ring; a physical feature, such as a retinal scan or fingerprint; or something you need to remember, such as a password.

If the software is configured to use three identification factors, then just entering your password or scanning your fingerprint won’t be enough to grant you access to the system. This software uses the vPro hardware in order to enhance the accuracy of these authentication factors, and it is not supported on non-vPro processors. Intel Authenticate is compatible with Windows 7, 8, 8.1 and 10.
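
As a rough illustration of that policy, a three-factor gate can be sketched in a few lines. This is a hypothetical sketch of the concept, not Intel’s API; the factor names are invented.

    # Hypothetical three-factor policy: access requires every configured
    # factor, drawn from something you have, are, and know. Not Intel's API.

    REQUIRED_FACTORS = {"phone_proximity", "fingerprint", "pin"}

    def verify(presented: dict) -> bool:
        """Grant access only if every required factor verified successfully."""
        return all(presented.get(factor, False) for factor in REQUIRED_FACTORS)

    print(verify({"fingerprint": True}))  # False: one factor alone is not enough
    print(verify({"phone_proximity": True, "fingerprint": True, "pin": True}))  # True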

Systems using the Skylake vPro processors are now available from several OEMs including Acer, Asus, Dell, Fujitsu, HP, Lenovo, Panasonic and Toshiba.


Source: Tom's Hardware

http://www.tomshardware.com/news/intel-skylake-vpro-intel-authenticate,31032.html

When it comes to online banking, sub-optimal encryption isn’t our biggest concern

Posted on 06 January 2016 by Martijn Grooten

https://www.virusbtn.com/blog/2016/01_06.xml

Malware authors and scammers won’t attack the crypto.

Under the headline “no zero-day necessary”, Xiphos has published a rather scary blog post on the state of SSL security within the UK’s finance industry. It concludes that more than 50% of UK-owned retail banks have weak SSL implementations on their online banking sites, with 14% of them getting the lowest grade on Qualys’s SSL Labs service.

This isn’t good. Banking is largely based on trust, and getting IT security right should play an important role in being trusted. But we should be careful not to confuse sub-optimal security with a likelihood of this leading to actual attacks.

Of the vulnerabilities Xiphos mentions, CRIME and POODLE are the most serious. They make it easy for an attacker with a man-in-the-middle position to steal secure session cookies, thus allowing them to hijack a browsing session. This simply should not be possible on a site where people manage their finances.

However, cybercriminals rarely use man-in-the-middle attacks. The fact that such attacks often don’t scale well and can’t be performed remotely makes them rather uninteresting to criminals. Moreover, most banks mitigate session-hijacking attacks by requiring the user to authenticate transactions through a second channel. Hence it isn’t surprising that there have been no known instances of CRIME or POODLE being used in the wild.
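
To see why that second channel blunts a stolen session cookie, consider this minimal sketch (all names here are hypothetical, not any bank’s actual system): the bank generates a one-time code, delivers it outside the browser session, and executes the transfer only if the user echoes it back.

    import hmac
    import secrets

    def send_sms(code: str) -> None:
        # Stand-in for a real SMS gateway; the code travels outside the browser session.
        print(f"[SMS] Your confirmation code is {code}")

    def start_transfer() -> str:
        """Bank side: generate a one-time code and deliver it over a second channel."""
        code = f"{secrets.randbelow(10**6):06d}"
        send_sms(code)
        return code

    def confirm_transfer(expected: str, supplied: str) -> bool:
        """Execute the transfer only if the out-of-band code matches."""
        return hmac.compare_digest(expected, supplied)

    code = start_transfer()
    print(confirm_transfer(code, code))      # True: the real user has the phone
    print(confirm_transfer(code, "000000"))  # a cookie thief without the phone fails

An attacker who hijacks the browsing session never sees the code, so the cookie alone can’t move money.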

The other weaknesses mentioned, such as the support for RC4, the lack of support for TLS 1.2 and the use of SHA-1 certificates, can only be abused in a purely theoretical setting (in the case of RC4), or not at all.

Interestingly, the blog post doesn’t mention the fact that many banks — including the four main UK retail banks — don’t use HTTPS by default on their main site. Given that this is how many users browse to their online banking service, an attacker with a man-in-the-middle position, or malware running on the user’s system, could trivially modify the link to a site they control. After all, no encryption is infinitely worse than sub-optimal encryption.
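
A rough way to test this yourself is to request a bank’s homepage over plain HTTP and check whether you land on HTTPS with an HSTS header set. The sketch below (using the third-party requests library and a placeholder domain) is only a heuristic, not a full audit:

    import requests

    def https_by_default(domain: str) -> bool:
        """Heuristic: does the plain-HTTP homepage redirect to HTTPS and set HSTS?"""
        resp = requests.get(f"http://{domain}/", timeout=10, allow_redirects=True)
        lands_on_https = resp.url.startswith("https://")
        has_hsts = "Strict-Transport-Security" in resp.headers
        return lands_on_https and has_hsts

    print(https_by_default("example.com"))  # placeholder domain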

Still, this isn’t the thing users should be most concerned about. It would be far better if they concerned themselves with becoming more aware of the various ways in which malware and scams try to steal their money — none of which attack the encryption protocols the bank uses.

It is good to hold banks accountable when it comes to security on their websites. But we have to be realistic about where the actual risks are. They are not in the crypto.

In March, I will give a talk, “How Broken Is Our Crypto Really?”, on this subject at the RSA Conference in San Francisco.

YouTube Kids Ads Called Out for Tricking Children

Ask almost any exhausted parent, and he or she will agree that kid-centric video apps are a pretty useful invention. YouTube Kids is an app for iOS and Android that targets the preschool set with a wide selection of free clips of cartoons, educational programs and other generally anodyne content. But it may not be as innocent as it seems, thanks to advertisements that blur the line between content and hawking products.

The Georgetown Law Institute for Public Representation, along with eight children’s advocacy groups, penned a document to the FTC entitled “Request for Investigation into Google’s Unfair and Deceptive Practices in Connection with its YouTube Kids App,” asking the commission to investigate. The 60-page document alleges that the app mixes actual content and commercials in a way that children cannot meaningfully distinguish, and that such behavior would never fly on broadcast or cable TV.

Google advertises YouTube Kids as an app “designed for curious little minds to dive into a world of discovery, learning and entertainment.” While its content matches its mission statement, Georgetown Law argues that some of it blurs the line between entertainment and commercials.

For example, many of the videos available on the service are user-created toy and candy “unboxing” videos, which highlight excited consumers getting their hands on a new product for the first time. These reviewers often receive products directly from the companies they review, and do not disclose this information in a way that very young children can understand.

While these videos are not advertisements in the strictest sense, they would probably violate FCC television standards that disallow children’s show hosts from hawking particular products. (Young children, generally speaking, have more trouble differentiating ads from entertainment than their older siblings and parents.)

Branded channels also present something of a challenge. The Lego channel, for example, provides cartoons and webisodes about Lego characters, but also hosts full TV commercials for Lego products. Other companies, like McDonald’s, host a mix of narrative content and straight-up ads as well.

Despite speaking out against commercial content, Georgetown Law takes little issue with the actual ads that YouTube Kids shows in-between videos. They tend to be public service announcements for organizations like the U.S. Forestry Service or Adopt U.S. Kids, which, as advertisements go, are fairly inoffensive — arguably even wholesome.

Source: Tom's Guide

http://www.tomsguide.com/us/youtube-kids-confuses-audiences,news-20746.html

Let’s Encrypt certificate used in malvertising

We’d better get used to a world where malicious traffic is encrypted too.

According to some people, myself included, Let’s Encrypt was one of the best things that happened to the Internet in 2015. Now that, as of December, the service is in public beta, anyone can register certificates for domains they own, in a process that is both easy and free.

Cybercriminals have noticed this too, and rather unsurprisingly, Trend Micro reports that a certificate issued by Let’s Encrypt was used in an Angler exploit kit-powered malvertising campaign, to make the malicious advertisements harder to detect.

[Image: The malvertising taking place. Source: Trend Micro.]

What makes this case particularly interesting is the fact that the domain for which the certificate was issued was the subdomain of a legitimate site, whose DNS was compromised. As domain-based reputation isn’t usually granular enough to distinguish between subdomains, this could have helped them avoid detection even further.

Let’s Encrypt only issues Domain Validation certificates, which do no more than validate the domain; hence it doesn’t believe it needs to police the content of the domains for which it issues certificates, although as a possibly temporary compromise, the domains are checked against Google’s Safe Browsing API before a certificate is issued.
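
For reference, that kind of pre-issuance check corresponds to a lookup against the Safe Browsing v4 Lookup API. Here is a hedged sketch; the endpoint and request shape follow Google’s public v4 documentation, while the API key and client ID are placeholders:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder
    ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

    def flagged_by_safe_browsing(url: str) -> bool:
        """Return True if Safe Browsing reports a match for the given URL."""
        body = {
            "client": {"clientId": "example-client", "clientVersion": "1.0"},
            "threatInfo": {
                "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
                "platformTypes": ["ANY_PLATFORM"],
                "threatEntryTypes": ["URL"],
                "threatEntries": [{"url": url}],
            },
        }
        resp = requests.post(ENDPOINT, json=body, timeout=10)
        resp.raise_for_status()
        return bool(resp.json().get("matches"))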

I agree with Let’s Encrypt here. I think our goal should be to encrypt all Internet traffic, and if bad traffic gets encrypted too, then that is a feature of the system, not a bug. Given how easy it is to register certificates, more policing would simply lead to a cat-and-mouse game. And there is also the danger of a slippery slope, where governments and interest groups start to pressure Let’s Encrypt to revoke the certificates of sites they perceive as bad.

We’ll just have to accept that more and more traffic is encrypted and find ways to block malicious activity in an environment where all traffic is encrypted.

Of course, this particular case is a little different: the exploit kit users in this case didn’t “own” the subdomain; they were merely able to point it to their own server. It might be worth Let’s Encrypt considering an automatic way in which domain owners can revoke certificates issued to subdomains. But that may well complicate the whole process and make little impact in practice. After all, for a successful malware campaign, domains only need to be active for a very short period of time.

If you’ve been telling people that the mere presence of a ‘lock icon’ in the address bar is a sign that a site is harmless, now is really the time to stop doing that.

Posted on 08 January 2016 by Martijn Grooten

https://www.virusbtn.com/blog/2016/01_08.xml

Microsoft is the company to watch in 2016

It isn’t often that Microsoft is the company to watch for the new year. But it will be in 2016.

CEO Satya Nadella and his team have shaken things up, surprising customers with better products, a continuing move to the cloud, an embrace of open source, and a willingness to stand up for user privacy in the face of government pressure.

We even have to acknowledge a success that was born in the bad old days of Steve Ballmer: Microsoft’s largely successful do-over on Windows 10. Being rooted in the PC era is problematic, to say the least, but the Surface Book is (surprise) an exciting product that shows that the company is taking an old-school product as far as it can go. It’s even matched — or maybe outdone — Apple with the newest Surface Pro tablet.

There are still major challenges, and if any one area is liable to trip up the Redmonders it’s mobile. The ill-conceived purchase of Nokia cost billions, and even worse is the failure to develop a coherent mobile strategy years after it became a necessity.

But the balance sheet is now definitely tilted in Microsoft’s favor, which couldn’t be said a few years ago. Wall Street makes a lot of bad calls when it comes to technology companies, but it’s revealing that Microsoft’s price-to-earnings ratio, a measure of future expectations, is higher than Apple’s.

Beating the PC makers at their own game

Flawless execution is something few companies achieve, and Microsoft is no exception. Both Windows 10 and the Surface Book have problems that can’t be ignored. But unlike Windows Vista or Windows 8, Windows 10’s problems are fixable, and so are the issues plaguing the Surface Book.

Microsoft entered the hardware space a few years ago when it became clear that none of the PC makers were likely to produce a decent Windows tablet. The original Surface, particularly the weird and nearly useless Surface RT version, wasn’t successful — in fact, it cost the company a $900 million writedown.

Contrast that kludge with the new Surface Pro 4. It’s expensive, but it’s powered by Intel’s new Skylake processor, and Microsoft has reworked the heat distribution system to allow those chips to run at full speed so that they can tear through demanding applications.

Similarly, Microsoft entered the PC space because the PC makers were boring the buying public to death with unimaginative hardware larded with annoying, and sometimes contaminated, bloatware. You can read the reviews yourself, but suffice it to say that the Surface Book is, to quote my colleague Woody Leonhard, “one sexy piece of hardware.” When was the last time you heard someone who is often critical of Microsoft say something like that?

Microsoft won’t bank tons of money selling such an expensive machine, but clearly the company aims to push the PC makers into making better products, an essential step in keeping the Windows franchise afloat. It also wants to push the PC makers into dialing way back on bloatware, which is why the Microsoft Store sells bloatware-free Signature Editions of PCs made by other companies.

From open source to augmented reality

You don’t have to go back many years to find evidence of Microsoft’s arrogant rejection of the open source community. That’s been changing for some time, and as the company struggles to keep developers on its side, open source has become even more important.

There was a key development on that front last month when Microsoft announced plans to open-source its Chakra JavaScript engine. It shows, as my colleague Serdar Yegulalp wrote, “that Microsoft wants to become a player in the JavaScript ecosystem that has ambitions to be a near-universal runtime for every kind of software.”

There isn’t a huge amount of money here, but the Chakra strategy is indicative of a new openness and willingness to work in environments where Microsoft is not in a position to dominate the playing field.

Then there’s HoloLens. Sure, it’s been delayed a few times, but I’m excited to see Microsoft garner buzz — it practically eclipsed Windows at Microsoft’s January 2015 public preview. More important, it shows a willingness to go beyond the corporate comfort zone.

Writing at Ars Technica, Peter Bright put it this way: “With HoloLens I saw virtual objects — Minecraft castles, Skype windows, even the surface of Mars — presented over, and spatially integrated with, the real world.”

Augmented reality has the potential to be more than a cool toy. Companies like Epson have already developed and sold units that help field technicians fix complex devices and warehouse workers pick products from shelves. This field is crowded, and it will take some doing for Microsoft to succeed, but its willingness to risk it speaks volumes.

I don’t mean to minimize Microsoft’s weakness or defend boorish behavior like its annoying campaign to push users to download Windows 10. But having watched Microsoft decline as a relevant tech power over the years, I see a lot of reasons to expect a continued resurgence. Watch it carefully in 2016.

Source: InfoWorld

http://www.infoworld.com/article/3019721/microsoft-windows/microsoft-is-the-company-to-watch-in-2016.html

Zotac Prepares Premium NVMe SSD Based On Phison’s E7

We know of several new SSD controllers slated for release in 2016, but the Phison PS5007-E7 is one of the most interesting. We first spotted the E7 at Computex, and it was already working well enough to run some performance tests. Phison has been fine-tuning the controller for 2D (1x, 1y, 1z) and next generation 3D NAND flash over the last several months.

This isn’t the first time we’ve spotted a working sample, but as you can see, the Zotac R&D model looks production-ready. This version pairs the Phison PS5007-E7 controller with Toshiba’s 15nm multi-level cell flash. The claimed performance is up to 2,500 MB/s sequential read and 1,200 MB/s sequential write.

Zotac told us that this is the first and only model of a product that has yet to receive an official name. The drive just arrived from Hong Kong this morning, so it is a prototype. Upon arrival in Las Vegas, Zotac installed Windows and a few game demos on the drive, and the demos are running from it in the suite. This is the first public demo of an E7 running a real-world workload.

This is also the first time we’ve seen the E7 in a non-M.2 design outside of Phison’s office. All of the other proposed E7 products have taken the M.2 2280 and 22110 form factors. The Zotac drive is a true AIC (add-in card), like the Intel SSD 750. The AIC form factor gives the drive more PCB surface area for greater NAND flash parallelization and lets it consume more power, which leads to higher performance for end users.

Zotac’s primary business is gaming products. The company started out selling video cards exclusively, but it has expanded to other products like small form factor PCs and now solid-state drives. The future PCIe-based SSD takes on a gamer feel with large branding.

Zotac stated that the new drive could be ready as early as next month, but that contradicts what Phison has stated over the last few days. We don’t expect to see a retail E7-based product until Computex in June or Flash Memory Summit in August.

 

Source: Tom's Hardware

http://www.tomshardware.com/news/zotac-phison-e7-ssd,30903.html

A Short History of Computer Viruses

September 4, 2014 | By Natasha Devotta
https://antivirus.comodo.com/blog/computer-safety/short-history-computer-viruses/

Computers and computer users are under assault by hackers like never before, but computer viruses are almost as old as electronic computers themselves. Most people use the term “computer virus” to refer to all malicious software, which is properly called malware. A computer virus is actually just one type of malware: a self-replicating program designed to spread itself from computer to computer. The virus is, in fact, the earliest known type of malware.

The following is a history of some of the most famous viruses and malware ever:

1949 – 1966 – Self-Reproducing Automata: The theory of self-replicating programs was established in 1949 by John von Neumann, known as the “Father of Cybernetics”, whose article on the “Theory of Self-Reproducing Automata” was published in 1966.

1959 – Core Wars: A computer game was programmed at Bell Laboratories by Victor Vysottsky, H. Douglas McIlroy, and Robert P. Morris. They named it Core Wars. In this game, infectious programs called organisms competed for the computer’s processing time.

1971 – The Creeper: Bob Thomas developed an experimental self-replicating program. It spread through ARPANET (the Advanced Research Projects Agency Network) and copied itself to remote host systems running the TENEX operating system, displaying the message “I’m the creeper, catch me if you can!”. Another program, named Reaper, was created to delete the Creeper.

1974 – Wabbit (Rabbit): This infectious program was developed to make multiple copies of itself on a computer, clogging the system and reducing its performance.

1974 – 1975 – ANIMAL: John Walker developed a program called ANIMAL for the UNIVAC 1108. It was said to be a non-malicious Trojan that spread through shared tapes.

1981 – Elk Cloner: A program called “Elk Cloner” was developed by Richard Skrenta for Apple II systems and created to infect Apple DOS 3.3. It spread to other computers via files and folders transferred on floppy disk.

1983 – This was the year the term “virus” was coined by Frederick Cohen for computer programs that are infectious because of their tendency to replicate.

1986 – Brain: Also known as the “Brain boot sector” virus, this IBM PC-compatible virus was programmed and developed by two Pakistani programmers, Basit Farooq Alvi and his brother, Amjad Farooq Alvi.

1987 – Lehigh: This virus, which originated at Lehigh University, was programmed to infect command.com files.

Cascade: This self-encrypting file virus prompted IBM to develop its own antivirus product.

Jerusalem Virus: First detected in the city of Jerusalem, this virus was developed to destroy all files on an infected computer on any Friday the 13th.

1988 – The Morris Worm: Created by Robert Tappan Morris, this worm infected DEC VAX and Sun machines running BSD UNIX via the Internet. It is best known for exploiting computers prone to buffer overflow vulnerabilities.

1990 – Symantec launched one of the first antivirus programs, Norton Antivirus, to fight infectious viruses. The same year, the first family of polymorphic viruses, called Chameleon, was developed by Ralf Burger.

1995 – Concept: This virus, named Concept, was created to spread through and attack Microsoft Word documents.

1996 – A macro virus known as Laroux was developed to infect Microsoft Excel documents, a virus named Baza was developed to infect Windows 95, and a virus named Staog was created to infect Linux.

1998 – CIH Virus: The first version of the CIH virus, developed by Chen Ing Hau of Taiwan, was released.

1999 – Happy99: This worm attached itself to emails with the message “Happy New Year”. Outlook Express and Internet Explorer on Windows 95 and 98 were affected.

2000 – ILOVEYOU: This virus was capable of deleting files in JPEG, MP2, and MP3 formats.

2001 – Anna Kournikova: This virus was spread by emails to the contacts in the compromised address book of Microsoft Outlook. The emails purported to contain pictures of the very attractive female tennis player, but in fact hid a malicious virus.

2002 – LFM-926: This virus was developed to infect Shockwave Flash files.

Beast or RAT: This is a backdoor Trojan horse capable of infecting all versions of Windows OS.

2004 – MyDoom: This infectious worm, also called Novarg, was developed to share files and give hackers access to infected computers. It is known as the fastest-spreading mass-mailer worm.

2005 – Samy XXA: This virus was developed to spread quickly and is known to infect the Windows family.

2006 – OSX/Leap-A: This was the first known malware discovered against Mac OS X.

Nyxem: This worm was created to spread by mass-mailing, destroying Microsoft Office files.

2007 – Storm Worm: This was a fast-spreading email spamming threat against Microsoft systems that compromised millions of machines.

Zeus: This is a Trojan used to capture login credentials from banking websites and commit financial fraud.

2008 – Koobface: This virus was developed to target Facebook and MySpace users.

2010 – Kenzero: This is a virus that spreads online between sites via browsing history.

2013 – Cryptolocker: This Trojan horse encrypts the files on an infected machine and demands a ransom to unlock them.

2014 – Backoff: Malware designed to compromise Point-of-Sale (POS) systems to steal credit card data.

Sadly, this history will continue. That makes keeping up with the latest antivirus and firewall technology all the more important.

Razer Green Switches: Don’t Call Them Kailh

There aren’t that many mechanical keyboard switches on the market. Cherry and Kailh switches are by far the most ubiquitous, but there are several others that you’ll bump into, as well. Greetech and TTC are two switch makers that we’ve seen on shipping products recently, and some keyboard OEMs have also begun crafting their own, including Logitech G’s Romer-G switches, EpicGear’s EG switches and Razer’s Green (and Orange) switches.

Tackling Misconceptions

Of the above, few are as misunderstood as Razer’s Green switches. It’s a common misconception that they’re just rebranded Kailh switches, but that is not the case. It is true that Kailh (Kaihua Electronics) is one of the manufacturers of the Razer Green switch, but Razer has them made to its own specifications.

That is to say, they are not identical to any Kailh switch.

The Razer Green switches even run on different production lines than Kailh switches. Further, Kailh is not the only manufacturer to produce the Razer Green switch. Razer will not divulge who any other manufacturing partners are, but Kaihua Electronics is not the sole supplier.

So, The Switch

Anyone who has mistaken a Razer Green switch for a Cherry MX or Kailh Blue could be forgiven, as they all feel rather similar. In all three, you get a nice, fat tactile bump along with a definitive, loud “click.” There are subtle differences, and they’re worth exploring, but the average user is unlikely to be able to determine which of these three switches is under their fingers without directly comparing them to one another. It is, however, possible that more particular or discerning users would notice.

Partially, this was by design. Razer had used Cherry switches on its keyboards in the past, and so it opted for a backwards-compatible stem and buckle design when creating its own switch.

Personally, I feel like the Razer Green is a little smoother than Cherry or Kailh Blues (although it could partially be an illusion because of the soft-touch Razer key caps), and the bottom portion of the travel (past the tactile bump) feels and sounds a bit like a linear Red switch. That is to say, to me the Razer Green switch comes off as something of a hybrid of two switch types.

It was certainly Razer’s design to create a fast-action switch that also had some tactility — a switch, it says, that was designed from the beginning for gaming. (Razer reps said that when they developed the switch, they enlisted pro esports gamers to try them out in real-life scenarios and were able to tweak the end design from there. This design is the result of that feedback.) Note in the specification comparison table below that, indeed, the delta between the actuation and reset points (that is, the physical distance between the point where the switch engages and the point where it resets so that it can be pressed again) is significantly smaller than that of the competing Blue switches.

                         Razer Green                     Cherry MX Blue                  Kailh Blue
Actuation Point          1.9 mm (+/-0.4 mm)              2.2 mm (+/-0.6 mm)              2.0 mm (+/-0.4 mm)
Actuation/Reset Delta    0.4 mm                          0.7 mm                          0.9 mm
Lifespan                 60 million strokes              50 million strokes              50 million strokes
Actuation Force          50 g (55 g over tactile bump)   50 g (60 g over tactile bump)   50 g (60 g over tactile bump)
Total Travel             4 mm                            4 mm                            4 mm

However, there is a reality check we must take here. First, note that when it comes to pretravel and actuation/reset points, we’re talking about differences of tenths of a millimeter. That is such a minute distance that arguably, the differences may be imperceptible. (If you can reliably discern between 1.9 mm and 2.0 mm just by feel, I tip my cap to you.)

Concerning actuation and reset, there is certainly a wider, and therefore more easily perceptible, gap when you compare the three switch types above. For example, Razer Greens reset at just 0.4 mm, whereas Kailh Blues reset at 0.9 mm. That’s half a millimeter difference, but, again, that’s a tiny distance.

Another wrinkle here, though, is that even with a single manufacturer, you have to consider tolerances switch-to-switch. Note that in the above chart, the actuation points are listed with a +/- rating, which means that, technically, a Razer Green switch designed to actuate at 1.9 mm could actually actuate at anywhere between 1.5 and 2.3 mm (1.9 mm [+/-0.4 mm]).
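
The ranges charted below fall straight out of that plus/minus arithmetic; a trivial sketch:

    # Nominal actuation point +/- tolerance gives the possible range (mm).
    specs = {
        "Razer Green":    (1.9, 0.4),
        "Cherry MX Blue": (2.2, 0.6),
        "Kailh Blue":     (2.0, 0.4),
    }

    for switch, (nominal, tolerance) in specs.items():
        print(f"{switch}: {nominal - tolerance:.1f}-{nominal + tolerance:.1f} mm")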

Therefore, perhaps it would be better to chart actuation points this way:

                   Razer Green   Cherry MX Blue   Kailh Blue
Actuation Point    1.5-2.3 mm    1.6-2.8 mm       1.6-2.4 mm

What is one to make of the above? To be honest, not too much. Again, these switch distances are measured in tenths of a millimeter to begin with, and when you take into consideration the acceptable pretravel tolerances of any switch (and the fact that total key travel is still just 4 mm, and that pretravel can easily comprise half of that), it’s exceedingly difficult to detect any meaningful differences between similar types of Razer, Cherry and Kailh switches.

Granted, there is no tolerance for variability between actuation and reset points. The Razer Green switch has a shorter delta between those points than either Cherry or Kailh (0.4 mm versus 0.7 mm and 0.9 mm, respectively), so you can reliably assume that the fast-fingered can technically type faster. Whether or not that speed boost is perceptible in real life scenarios will vary person to person.

Intense Quality Control

In my conversations with various Razer employees, I was struck by how intensely they manage the switch-making and quality assurance process. A Razer staff member is embedded in each factory, and that person’s job is to oversee all production of the switches. (A Razer representative told me the production facility is nearly as clean as a semiconductor fab; he has to go through a special chamber to get cleaned off before entering, and has to wear a clean suit, too.)

To ensure quality, the switches are inspected by hand as they come off the production line, and then Razer staff further sort the batches of switches as an additional check. According to the company, this is the daily QA grind in the factory for Razer staff.

Beyond that, there’s a need to check durability. Razer tests the switches and gets a force curve, and then after running through 60 million strokes of fatigue testing (which takes months), as well as thermal shock, salt mist (corrosion), humidity, and vibration and drop tests, the company rechecks the force curve to ensure nothing has changed. (It performs abrasion testing on whole keyboards, after the switches are mounted.)

This attention to detail extends to the RGB LEDs adjacent to the switches, as well. Razer personnel conduct a process of binning and sorting the LEDs themselves to make sure they produce the most accurate light, and they use a spectrography test to check for color shade accuracy. The goal is R, G and B at maximum brightness to ensure the purest white. (To take advantage of these capabilities, Razer had to employ a new microcontroller on its keyboard PCBs.)

Measuring For Ourselves

In our quest to measure some of these things for ourselves, Razer offered to provide us with a height gauge and some switches to measure.

This is a somewhat custom setup. Although the height gauge itself is an off-the-shelf tool, some of the overseas Razer guys hacked together a custom box with two switches mounted onto it. One of the switches is a Razer Green, and the other is a Cherry MX Blue. They rigged it so an LED lights up upon actuation, and they machined a metal baseplate that fits both the height gauge and the box.

Although we hoped to use the height gauge to measure multiple switches on actual keyboards, Razer advised us against it, as that use case is outside of the scope of what the machine is designed to accurately measure. (We ran our own tests anyway, but we discovered that there were indeed some inaccuracies with testing keyboard-mounted switches.) Thus, in the end we were limited to measuring just the two switches mounted in the box Razer provided.

Before each test, we lowered the arm of the height gauge until it touched, but did not depress, the switch. Then we zeroed out the gauge so we were starting the measurement at 0.0 mm. When we reached the actuation point (when the LED engaged), we noted the height, and then continued to depress the switch until the travel bottomed out.

Then, we reset the gauge again to 0.0 mm and measured from the bottom of the travel to the reset point (when the LED disengaged), and that is the distance in the cells in the table below.

With actuation and reset measured in this way, we can compute the delta between them for each test run. Also note that actuation and total key travel were measured together on the downstroke, and the reset point was measured on the upstroke.
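
The variance rows in the tables below are simply the max minus the min of each measured column. A quick sketch of that bookkeeping, using the five Razer Green runs:

    # Each run: (actuation, key travel, reset, actuation/reset delta), all in mm.
    runs = [
        (1.93, 4.02, 2.44, 0.51),
        (1.88, 4.03, 2.51, 0.63),
        (1.86, 3.98, 2.46, 0.60),
        (1.94, 4.06, 2.47, 0.59),
        (1.88, 4.02, 2.47, 0.59),
    ]

    for name, column in zip(("actuation", "key travel", "reset", "delta"), zip(*runs)):
        print(f"{name}: variance {max(column) - min(column):.2f} mm")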

Razer Green Switches

Actuation (mm)    Key Travel (mm)    Reset (mm)    Actuation/Reset Delta (mm)
1.93              4.02               2.44          0.51
1.88              4.03               2.51          0.63
1.86              3.98               2.46          0.60
1.94              4.06               2.47          0.59
1.88              4.02               2.47          0.59
(0.08 variance)   (0.08 variance)    (0.07 variance)   (0.12 variance)

Cherry MX Blue Switch

Actuation (mm)    Key Travel (mm)    Reset (mm)    Actuation/Reset Delta (mm)
2.02              4.02               2.55          0.53
2.01              3.98               2.53          0.52
2.02              3.94               2.50          0.48
2.02              3.95               2.48          0.46
2.00              4.08               2.62          0.62
(0.02 variance)   (0.13 variance)    (0.14 variance)   (0.14 variance)

Note that these tests were performed on two switches total. Therefore, these findings can be extrapolated only if we assume that the manufacturing consistency from switch-to-switch is precise, and as we’ve already discussed, there’s a great deal of tolerance in the pretravel.

It’s also important to keep in mind that although this is a machine, the height gauge is hand-cranked, and therefore there’s a very slight margin of error introduced by the human operator. We performed multiple test runs on each switch, and we threw out any clear outliers in order to ensure that we had at least five reasonably consistent results for each measurement.

The performance of this one Razer Green switch shows that it certainly meets the listed spec in regard to the actuation point. However, the delta between the actuation point and reset is between 0.51 and 0.63 mm, which is higher than Razer’s claimed 0.4 mm delta.

The Cherry MX Blue switch actuated at a shorter distance than its stated 2.2 mm (although our 2.00-2.02 mm findings are within Cherry’s acceptable tolerance range). The most notable finding from the whole spate of tests is that the actuation/reset delta of the Cherry switch was between 0.46 and 0.62 mm, which is tighter than the listed 0.7 mm spec.

We confirmed that the total key travel for both switches matches their stated 4 mm depth, give or take a few hundredths of a millimeter.

This video by Razer, “The World’s First Mechanical Switch Designed for Gaming,” shows some of the things we’ve discussed here, including the height gauge used in our tests. (You can mute the audio to avoid the promotional language if you like; just watch it for the eye candy.)

Busting Myths And Testing Claims

The echo chamber is far too prevalent when it comes to knowledge about mechanical keyboard switches, and a common myth is that Razer’s switches are just Kailh rebrands. As I stated at the beginning of this article, that is not in fact the case. The Razer Green switch has different specifications than any Kailh switch, and although Kailh does manufacture some of Razer’s switches, it is not Razer’s only manufacturing partner.

The basic testing we were able to perform on these switches confirms some of Razer’s claims about the Green switch’s performance and denies another (the actuation/reset delta), but before we draw any definitive conclusions either way, we would need more comparative data from performing the same tests on whole batches of switches.

Razer is clearly dedicated to creating an ideal gaming switch with its own twist, and through beta testing with pro esports gamers and intense quality assurance practices, it appears to have done so — however imperceptibly different the Green switches may be from competing Blue switches.

Source: Tom's Hardware

http://www.tomshardware.com/news/razer-green-switches-not-kailh,30817.html

Debunking the myths around secure passwords

Most websites that we use today generally give you feedback on the passwords that you have created when setting up a new account, rating them either weak or strong. They also advise you to use a mix of upper and lower case letters, along with numbers, to ensure a secure password. However good the advice may be, it doesn’t tell you exactly which order the mix should be in.

Curiously, it appears that all of us tend to put the upper case letters at the start of our passwords, with the numbers taking up the final spaces. This was discovered by a group of security experts at Eurecom, a research institute based in France.

The results of their study, presented at the most recent ACM Conference on Computer and Communications Security in Denver, show that we are confused about what constitutes a secure password, and that this is putting our privacy at risk.

The programs traditionally used by cybercriminals to guess passwords simply tried combination after combination until finding the right one.

However, modern methods aren’t based on random guesswork. Criminals can now train the software with large lists of passwords – such as the 130 million Adobe user passwords that were leaked in 2013 – so as to find the most common combinations. This method gives them a greater chance of success in their attacks.

Using this premise as a base, the experts have used a program – similar to the one used by the criminals – to analyze over 10 million passwords. They’ve done this to compile a list of the easiest passwords for criminals to guess.

The result is a “predictability index” that they tested on another 32 million passwords to verify its effectiveness. According to the results, the least common passwords were the most secure. This means it is important to have a long password that includes symbols, as opposed to just upper and lower case letters.
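
The guessing approach the researchers modeled can be sketched crudely: reduce each password to its “shape” (upper, lower, digit, symbol), learn which shapes dominate a leaked corpus, and score new passwords by how common their shape is. The toy version below is our own illustration, not the Eurecom tool:

    from collections import Counter

    def shape(password: str) -> str:
        """Reduce a password to its structural shape, e.g. 'Autumn18' -> 'ULLLLLDD'."""
        return "".join(
            "U" if c.isupper() else "L" if c.islower() else
            "D" if c.isdigit() else "S"
            for c in password
        )

    # Toy stand-in for a leaked corpus such as the Adobe list.
    leaked = ["Password1", "Monkey12", "Dragon99", "letmein", "Summer16"]
    shape_freq = Counter(shape(p) for p in leaked)

    def predictability(password: str) -> float:
        """Fraction of the corpus sharing this shape: higher means easier to guess."""
        return shape_freq[shape(password)] / len(leaked)

    print(predictability("Autumn18"))   # 0.6: capital first, digits last is common
    print(predictability("t4!xQ_9z"))   # 0.0: unusual structure in this toy corpus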

The aim for users from now on should be to create passwords that are not at all predictable, no matter if they include numbers, upper case, or lower case letters. The group behind the study says that passwords should be longer, even adding a few extra words if necessary.

Their investigation should help people become more aware when creating new login credentials, which will help them protect their accounts better. Although they can’t guarantee a bulletproof way of creating passwords, they assure us that their method is the safest yet.

On the other hand, the investigators advise technology companies to place less emphasis on passwords as a means of accessing accounts and to look at alternatives where possible. There are always new ways of cracking login details, which makes passwords ever less effective.

November 18, 2015
