Wednesday 31 July 2013

Eyeing faster chips, scientists measure super-fast electrical switching

Researchers in Silicon Valley have managed to observe electrical switching that is thousands of times faster than transistors used in today’s computer chips. Their work could lead to a better understanding of how transistors work at the atomic level and, in turn, help to enable more powerful computers.

Transistors are semiconductor devices that act as simple on-off electrical switches. The number of transistors in a computer chip has a direct effect on its speed and power, so researchers are continually trying to make their transistors smaller and faster.

In work at the SLAC National Accelerator Laboratory in Menlo Park, California, researchers used an X-ray laser to discover that it takes just one trillionth of a second to switch between the on and off states in a sample of magnetite, a type of mineral.

They hit each sample with a pulse of visible light from a laser, which caused the electronic structure of the material to rearrange itself. Immediately afterwards, they hit it with a burst from an ultrabright, ultrashort X-ray laser, which revealed that the rearrangement had begun within hundredths of a quadrillionth of a second after the initial pulse hit the sample.

The precise time for the switching from a non-conducting (off) state to a conducting (on) state was determined by varying the delay between the optical pulse and the X-ray probe pulse.
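Sweeping that pump-probe delay produces a curve of measured signal versus delay, and the switching time falls out of where the curve crosses a threshold of its total change. A minimal sketch of that extraction, using made-up delay-scan numbers rather than SLAC's actual analysis:

```python
def switching_delay(delays, signal, threshold=0.5):
    """Estimate the pump-probe delay at which `signal` first crosses
    `threshold` of its total change (e.g. the insulator-to-metal onset).
    `delays` and `signal` are parallel lists from a delay scan."""
    lo, hi = signal[0], signal[-1]
    target = lo + threshold * (hi - lo)
    for d0, d1, s0, s1 in zip(delays, delays[1:], signal, signal[1:]):
        if s0 < target <= s1:
            # linearly interpolate between the two bracketing delay points
            return d0 + (target - s0) / (s1 - s0) * (d1 - d0)
    return None  # signal never crossed the threshold

# Hypothetical scan: delays in femtoseconds, normalized conductivity signal
print(switching_delay([0, 100, 200, 300, 400], [0.0, 0.0, 0.5, 1.0, 1.0]))
```

In a real experiment the scan would be repeated many times per delay point and the curve fitted rather than interpolated, but the principle is the same: the finer the delay steps, the tighter the bound on the switching time.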

“This breakthrough research reveals for the first time the ‘speed limit’ for electrical switching in this material,” Roopali Kukreja, a materials science researcher at SLAC and Stanford University who is a lead author of the study, said in a statement.

Despite the research, chips made from magnetite aren't expected anytime soon. The research required the material to be cooled to minus 190 degrees Celsius, which makes it impractical for widespread commercial use.

Image (Greg Stewart / SLAC): An optical laser pulse (red streak from upper right) shatters the ordered electronic structure (blue) in an insulating sample of magnetite, switching the material to electrically conducting (red) in one trillionth of a second.

But, using the research as a base, the team will go on to study more complex materials and applications at room temperature, a SLAC spokesman said. The hope is that increased knowledge of electrical switching in materials like magnetite will help scientists understand the switching inside materials such as silicon, which is used in current chips, or new hybrid materials that might offer improvements on silicon.

The research was published July 28 in Nature Materials. It was carried out with scientists in Germany, the Netherlands, Italy, Switzerland and other facilities in the U.S.

Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service.



Bogus Chrome, Firefox extensions pilfer social media accounts

Trend Micro has found two malicious browser extensions that hijack Twitter, Facebook and Google+ accounts.

The attackers plant links on social media sites that, if clicked, implore users to install a video player update. It is a common method hackers use to bait people into downloading malicious software.

The bogus video player update lures people in a macabre manner: it says it leads to a video of a young woman committing suicide, according to Trend's description.

The video player update carries a cryptographic signature, which is used to verify that an application came from a certain developer and has not been modified, wrote Don Ladores, a threat response engineer with Trend Micro.

"It is not yet clear if this signature was fraudulently issued, or a valid organization had their signing key compromised and used for this type of purpose," he wrote.

Hackers often try to steal legitimate digital certificates from other developers in an attempt to make their malware look less suspicious.

If the video update is executed, the malware then installs a bogus Firefox or Chrome extension depending on which browser the victim uses.

The malicious plugins try to appear legitimate, bearing the names Chrome Service Pack 5.0.0 and Mozilla Service Pack 5.0. Ladores wrote that Google now blocks the extension that uses its name. Another variation of the extension claims to be F-Secure Security Pack 6.1.0, a nonexistent product attributed to the Finnish security vendor.

The plugins connect to another website and download a configuration file, which allows them to steal the login credentials for a victim's social networking accounts such as Facebook, Google+ and Twitter. The attackers can then perform a variety of actions, such as liking pages, sharing posts, updating statuses and posting comments, Ladores wrote.

Send news tips and comments to jeremy_kirk@idg.com. Follow me on Twitter: @jeremy_kirk



Microsoft discloses weak Surface revenue

Microsoft's Surface tablet has earned the company less in revenue than it paid to write down unsold stocks of the device.

The company said in a regulatory filing with the U.S. Securities and Exchange Commission that the Surface had earned revenue of US$853 million in its fiscal year ended June 30. The Redmond, Washington, software company did not disclose how many units of the tablet it had shipped during the year.

Microsoft announced earlier this month it took a charge for Surface RT inventory adjustments of approximately $900 million. The company also saw a $898 million increase in advertising costs, associated mainly with the Windows 8 operating system and Surface, according to the filing.

Aimed at competing with Apple's iPad and other tablets, the Surface RT, built around an ARM-based processor and running the Windows RT operating system, became generally available in October. The Surface Pro, which runs Windows 8 on an Intel processor, became available in February.

Microsoft decided to design and manufacture the product, to the dismay of some partners who were used to dealing with Microsoft as a supplier of software, rather than as a competitor in the computing devices market.

"A competing vertically-integrated model, in which a single firm controls the software and hardware elements of a product and related services, has been successful with some consumer products such as personal computers, tablets, mobile phones, gaming consoles, and digital music players," Microsoft said in the filing, while discussing its competition. The company said it also offers some vertically-integrated hardware and software products and services, but its competitors in smartphones and tablets have established significantly larger user bases.

The Surface has not been a runaway success in the market. Microsoft shipped about 900,000 Surface tablets in the first quarter of this year, giving it a 1.8 percent share of the tablet market, according to IDC. Apple led with 19.5 million iPad shipments, a share of almost 40 percent, followed by Samsung with 18 percent, Asus with 5.5 percent and Amazon.com with 3.7 percent. Overall, Windows 8 and Windows RT tablets, including those from other vendors, continued to struggle to gain traction in the market; total Windows 8 and Windows RT shipments across all vendors reached 1.8 million units, IDC said.

Microsoft said in the filing that it would continue to invest in the Surface.

John Ribeiro covers outsourcing and general technology breaking news from India for The IDG News Service.



Review: Peazip 5 decompresses everything

PeaZip 5.0: You won't find an easier, more capable compressed-archive utility, though performance could be better on some file types.


PeaZip is one of the most versatile file compression and archiving utilities out there—possibly the most versatile—and it's free. It's also quite easy to use and offers the same features found in programs such as WinZip and WinRAR: context-menu integration, password protection, command-line usage and so on.

PeaZip supports every compressed archive format I'm aware of: 7Z, ARC, GZ, TAR, ACE, RAR and more. There's also support for both ISO and UDF disc images and for Linux package formats such as DEB, RPM, PET/PUP, and SLP. As a bonus, you can open Mac HFS DMG files. The point is that there's little out there in the way of compressed files you can't browse and extract from.

As you can tell from the file-type associations dialog, PeaZip supports a lot of formats.

While I've always appreciated the effort that goes into this piece of free software, I've also installed, then uninstalled, PeaZip several times over the years because of operational and performance issues. This has left me using WinRAR, which, while a bit old-school looking, is rock-solid and fast. Version 5 of PeaZip seems quite a bit more stable, but it still has a tendency not to close and launch 100 percent smoothly when you're dealing with multiple instances. Also, the progress bar is still inaccurate, claiming far less progress than has actually been made. Several times, it was only 25 percent across when the task completed.

PeaZip also did not like the fact that I'd redirected my Downloads folder to a non-default location, complaining that it was no longer accessible. I could, of course, browse to the real location, but obviously Windows Vista/7/8/8.1 compatibility isn't complete. The program depends on a lot of open-source and free libraries to handle the various archive types, and some of them are faster than others. That's a minor complaint: PeaZip is easily fast enough on any modern PC.

One thing I most definitely do like about PeaZip is its interface, which is clean, with lots of nice little touches. For instance, it ignores unpopulated higher-level folders when you open archives: if you have a batch of zipped pictures in, say, “\Picture\Holiday 2013\Grandma”, it takes you directly to the Grandma folder rather than forcing you to tunnel down three levels. Also, it keeps the files that you've already viewed open until the archive is closed. This is handy when you're using the Windows image preview and want to scroll through the pictures you've selected from within the archive without permanently extracting them.

The PeaZip interface is simple and clean.

There are other compression utilities out there: WinZip, 7-Zip and WinRAR, just to name a few, but PeaZip compares favorably with all of them. I wish it were a little faster, but on a modern PC or laptop the difference isn't as significant as it once was. I'm liking it so much so far that I may not be uninstalling it again.



Review: Razer's Naga 2014 gaming mouse makes 17 buttons look good

The 17-button mouse feels like a punchline, something you'd flip past with a chuckle in a yellowed issue of MAD magazine. And yet here we are: third or so in its line, the Razer Naga 2014 brushes off any pretense of restraint and serves up a twelve-button number pad, coupled with a pair of buttons on the spine and the requisite scroll wheel. It is, at a glance, the exact same mouse as last year's model. But grip it in your hands after installing Razer's Synapse software, and you're in for a bit of a surprise.

Razer claims that the Naga is the best-selling MMO gaming mouse in the world, and while I’d take issue with drafting such a narrow category to claim top honors in, credit should be given where it’s due. The new Naga feels fantastic, eschewing the complimentary set of ergonomic grips that came with last year’s model in favor of a one-size-fits-all mold that I found rather comfortable in my—admittedly large—mitts.

The latest version of the Razer Naga has mechanical switches built into the buttons that provide very satisfying, very clicky feedback.

It's a mouse, and it does its job admirably. You can tweak the sensitivity—all the way up to 8200 DPI, which I find ludicrous—and even calibrate the laser's ability to track your particular mouse pad or surface. The twelve side buttons are mechanical now, which ostensibly offers improved accuracy. I do love the clicky sound of mechanical keys—hence my preference for mechanical keyboards—and the mouse's tactile and audible feedback should help you know exactly when buttons are being clicked. It's a marked improvement over the squishy buttons of Nagas past—you can check out PCWorld's guide to mechanical keyboards for the lowdown on why mechanical keys are, in general, pretty neat.

That’s all well and good, but of far greater importance is the fact that each of those twelve buttons is arrayed in a seemingly haphazard but actually brilliant angled pattern, which makes it easy to find each and every button. This is crucial, as keeping track of twelve buttons can be a colossal pain—earlier Nagas featured buttons that were all uniform, which made firing off that critical spell or ability a confusing mess.

Razer’s Synapse software has also stepped its game up, an attempt to evolve from onerous gewgaw to potentially useful tool. The Naga works out of the box, but you’ll need to sign up for a Razer account (and thus, have Internet access) to customize any of the mouse’s features.

If you're willing to create a free Razer account, you can use the Synapse software to customize the Naga and save those settings to your profile.

This has always felt a little ridiculous. The general idea is that by requiring Internet access to use Synapse you're able to save your settings into the cloud and have access to your keybinds, macros and the like should you travel with your gear, replace something, or purchase a new PC. I’m sure this would be of use to professional gamers who frequent LAN parties, but many a connectivity headache could be avoided if this cloud-sync functionality were just made optional.

Gripes aside, Synapse’s robust key-mapping functionality is as powerful and welcome as ever. This mouse can be programmed to do just about anything you can imagine, including firing off complex keyboard macros with a single button press. Better still, Razer has introduced an all new in-game configurator, a compact overlay that lets you tweak the Naga 2014’s settings while you’re playing a game. This is huge—being able to add new functionality or get DPI settings just right without ducking out of a fight or what have you is nigh indispensable—provided you’re into that sort of thing.

That brings us back to square one: does anyone need a 17-button mouse? Pragmatists will dismiss this monstrosity as a fool’s bauble, a toy for folks with more money than sense. But the rest of us are unabashed MMO-junkies, and have already mentally mapped important abilities to the first three or six buttons, debating which abilities could be shunted from keyboard down onto the rest of the 12-button number pad and wondering if we’re dextrous enough to hit everything on the fly.

Angled buttons make it easy to find the right command without looking at your hand during tense gaming sessions.

For what it's worth, the new Naga's updated design makes keeping track of individual buttons easy, and after a few hours slogging through my MMOs du jour (Guild Wars 2 and Firefall, if you're keeping track) I had no trouble at all. And all of this customizability will let you mold the mouse to suit your needs, instead of requiring you to change countless keybinds and the like to fit some finicky tool. That's always been the strength of pricey gaming hardware, and—despite my issues with Synapse's account requirements—Razer has always excelled at this sort of thing.

The Razer Naga 2014 will set you back $80 at the time of this review, and if you're a dedicated MMO player who spends time setting up keybinds and the like, you will not be disappointed. There are other options: Mad Catz's $99 R.A.T. 7 comes to mind, an ergonomic dream that's equal parts conversation starter and gaming gear. That said, it's marginally pricier, and once you've molded it to fit your hand, all of the fiddly bits start to feel a little comical. If spending this much on a mouse feels dumb, you'll probably be set with two buttons and a scroll wheel and can look right past these. But trust me on this one: the new Naga feels fantastic in the hand, with buttons that are far more functional than its predecessors'. It's arguably only going to make sense for folks who need a lot of abilities and functions at their disposal, but for them it excels in every way.

17 unique mechanical buttons and an in-game configuration utility make the Naga an excellent mouse for MMO gaming.

One more thing: Razer has pledged to offer a version of the new Naga for lefties. As a southpaw I've begrudgingly grown acclimated to mousing with my wrong (read: right) hand, so as to avoid missing out on all the bells and whistles my gaming peers are afforded. This is good news to say the least, if only because Razer is pretty much the only peripheral maker left willing to take a loss and cater to that forgotten segment of the gaming population. I'm loath to replace my functional mouse, but as a lefty who's also an MMO junkie, I'll be putting some cash under my pillow and waiting for a lefty Naga to hit Razer's site.



Review: SteelSeries Apex is a supersized keyboard for any gamer

Like a juggernaut on your desk, the SteelSeries Apex will command attention with its large size, multicolored lighted zones, and vast array of macro keys. You'll feel like a starship captain with an advanced console at your fingertips—if you can fit it on your desk, that is.

You'll notice right away just how large the Apex is. The palm rest isn't an optional accessory as it is with many other keyboards, so it comes out of the box in one large unit. The length is also exaggerated by the many macro and media keys on either side.

SteelSeries: That spacebar is humongous.

Following the oversizing theme, the keys are large and easy to find with your fingers. SteelSeries boasts that the Apex is one of the fastest keyboards around thanks to a key layout that is low and flat, and to a spacebar the size of a candy bar. Not one of those fun-size candy bars either: This baby is king-size. No matter where your hands are flying to, your thumbs will be in proximity to it.

Unfortunately, the Apex strays away from the popular choice of mechanical keys, going with rubber-dome switches. SteelSeries defends the decision by claiming that the rubber-dome technology offers little resistance, making your keystrokes faster. Also, the Apex is really quiet to type on, so if you prefer the tactile feedback and clicks of a mechanical keyboard, you'll want to think twice.

After you plug it in and fire it up, you're going to notice colors—so many colors. Colors everywhere. The Apex has five color zones that are customizable through the SteelSeries Engine software, which you'll want to install to get the most out of the keyboard. According to SteelSeries, each of the zones draws from a palette of 16.8 million colors—it'll be tough to choose your favorite one.

The SteelSeries Engine is great for complete keyboard customization.

The Engine isn't just for making pretty colors on your keyboard; it's also the gamer's best friend, with enough key customizations to make your head spin. You can change any key to be a different keystroke, macro, or shortcut to launch programs. And you can save unlimited profiles that can switch automatically when a program becomes active. Do you have, say, a whole separate keyboard layout for Web surfing? Have it switch automatically when you launch the browser.

On top of all that customization, every profile can have four different layers. With a simple keystroke you have an entirely new keyboard layout—perfect for those who go from work to gaming to surfing lightning-fast.

The Apex takes up two USB spots, but don't fret—you get them right back on the top of the keyboard itself. The USB 2.0 ports can be used for anything USB-related such as your mouse, headset, or easy access for a flash drive.

You have to replace the feet.

As comfortably large as the keyboard is, I was disappointed that it didn't have adjustable feet on the back. It does come with two sets of rubberized feet to tilt the keyboard at different angles (7 degrees or 10 degrees). Pop out the old ones (and find somewhere safe to put them, or you'll misplace the little rubber devils); pop in the new ones. Unfortunately, the elevation change is tough to notice—the keyboard is designed to lie flat.

The SteelSeries Apex was made with gaming enthusiasts in mind, from the flashy customizable colors and macro keys to the addition of diagonal directional keys to minimize key presses. Once you've got your fancy new gaming rig, you'll want a keyboard to match. At just $100, the Apex is a quality keyboard with flashy features to keep gamers new and old happy for years to come while not beating up your wallet.

Alex covers desktops, everything from fancy to practical. He's also an avid (addicted) gamer and loves following the industry.



Salesforce.com expands mobile application development support

Salesforce.com has expanded the number of mobile application development tools it supports and also created a series of templates aimed at helping coders create mobile applications faster.

The focus of the 20 open-source HTML5 and CSS-based templates is on exploiting "micro-moments," or the many brief periods of interaction people have with their mobile devices each day, said Adam Seligman, vice president of developer relations, Salesforce.com platform.

Salesforce.com got the idea for the templates, which cover narrow scenarios such as an inventory check or appointments, through feedback from developers during a 40-city tour it conducted recently, Seligman said.

"The thing our developers are trying to do is not take massive, legacy applications and make them available in the cloud," Seligman said. "They're trying to get to what their customer wants to do," namely "super-fast," relevant and contextual business activities, he added.

Salesforce.com has gone even further than the templates, with a new "gallery" of sample mobile applications, which is also available now.

The mobile application world is fragmented, with many developers already having a favorite development framework. Salesforce.com's strategy is to embrace the situation rather than attempt to hem developers into a single homegrown toolset, although it has also developed one itself.

Earlier this year, Salesforce.com announced that it supported access to its APIs (application programming interfaces) from several popular JavaScript frameworks, including JQuery and Backbone.js.

On Tuesday, it announced API support for four additional frameworks: Knockout.js, Appery.io, Sencha Touch and Xamarin. The last is aimed at Microsoft .NET developers and is based on Mono, the open-source implementation of .NET.

Salesforce.com's own mobile SDK is also getting an update, with a main new feature called SmartSync, which allows developers to create applications that work with data both offline and online.
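Salesforce.com hasn't detailed SmartSync's internals here, but the offline/online behavior it describes follows a common offline-first pattern: serve reads from a local cache, queue writes while disconnected, and push them to the server when connectivity returns. A generic sketch of that pattern (the class and method names are invented for illustration and are not Salesforce's API):

```python
class OfflineStore:
    """Minimal offline-first cache: reads come from a local copy, writes
    made while offline are queued and flushed once back online."""

    def __init__(self, server):
        self.server = server          # dict standing in for the remote API
        self.cache = dict(server)     # local copy used for all reads
        self.pending = {}             # edits made while offline
        self.online = True

    def read(self, key):
        return self.cache.get(key)    # always served locally, even online

    def write(self, key, value):
        self.cache[key] = value       # local state updates immediately
        if self.online:
            self.server[key] = value  # push straight through to the server
        else:
            self.pending[key] = value # queue for the next sync

    def go_offline(self):
        self.online = False

    def go_online(self):
        self.online = True
        self.server.update(self.pending)  # flush queued edits
        self.pending.clear()
```

A real sync engine also has to detect conflicts (the server copy changing while the client was offline), which this sketch deliberately ignores.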

While Salesforce.com eventually hopes to generate significant revenue through mobile applications, it's taking the open-source approach now in order to seed the market.

"We have developers building apps fast and furious for the platform, and we want make them super, super successful," Seligman said.

Chris Kanaracus covers enterprise software and general technology breaking news for the IDG News Service.



Smile! Microsoft stuffs the SkyDrive website with new, photo-friendly features

Microsoft is giving photographers a few more reasons to store their images on SkyDrive, with a heaping helping of new options for viewing and managing photos on the Web.

The meatiest improvements on SkyDrive.com are related to sharing photos and files. Previously, you could only share an entire folder or individual files. Now, SkyDrive.com lets you share multiple files at a time from anywhere in your cloud storage. You can share an entire day's worth of photos just by clicking on the date heading. The recipient will see the photos you've shared as a single album.

Microsoft: Selecting a group of photos by date to share.

Microsoft is also giving a much-needed makeover to the “Shared” folder on SkyDrive.com: It now shows the files you've shared, not just the ones shared with you. That should make it a lot easier to figure out what you've shared in the past and control who has access to your files.

Microsoft: Viewing all shared files, including files you've shared.

Other changes are minor, but still helpful. Animated GIF files will actually animate now when viewed through SkyDrive.com, and you can now rotate photos directly through the Website, which should help with camera photos that don't come out facing the right way. Also, the “All photos” view in SkyDrive has been expanded, so you can seek out photos in individual folders, not just the entire drive. For computers with ultra-high resolutions, such as the MacBook Pro with Retina Display and its new wave of Windows rivals, SkyDrive.com will show higher-resolution images to match.

Not all the improvements to SkyDrive.com are photo-related. In February, Microsoft started letting people edit online documents without logging in, but only if the file was shared through Office Web Apps or Office 2013. Now, users can e-mail a document through SkyDrive.com and check a box that lets recipients edit the document without signing in.

Microsoft: SkyDrive's new text editor.

Finally, Microsoft has added a basic text editor to SkyDrive, similar to Notepad on the desktop. This editor works with TXT files as well as JavaScript, CSS, HTML and other code files. It's a pretty simple editor, but it does include syntax highlighting, word completion suggestions and collaborative editing.

Although none of these features are game-changers, they do help Microsoft's cloud storage service stand out from competitors such as Google Drive and Dropbox, neither of which offers the quick editing or date-based photo sharing that SkyDrive.com now offers. And while Dropbox does have an all-photos view, SkyDrive's ability to filter those photos by folder makes the feature even more useful.

Microsoft says the new SkyDrive.com features will be available for all users today.



SoftBank's net profit more than doubles as number of mobile subscribers soars

SoftBank's net sales increased by over 21 percent in the second quarter as the company saw handset sales and subscriber numbers increase.

The Japanese company reported Tuesday that net profit was ¥263 billion ($2.7 billion) in the quarter, up 122 percent from the same quarter a year earlier. The company had net sales of ¥881 billion in the quarter.

SoftBank adopted International Financial Reporting Standards (IFRS) from the second quarter, prompting it to consolidate the results of gaming company GungHo Online Entertainment and mobile communications service provider eAccess, in which it holds significant stakes.

The company's domestic mobile communications business, which provides mobile communications services and sells mobile handsets, saw sales increase by 26.8 percent year on year to ¥662 billion.

SoftBank added 810,000 subscribers in its mobile business in the quarter, attracted by sales promotions or by the opportunity to buy Apple's iPhone or specialized devices such as the Mimamori phone, a handset with a security buzzer, and PhotoVision, a digital photo frame with telecommunications functionality. The company had over 33 million mobile subscribers at the end of the quarter.

The company's Internet business also saw net sales up 18.9 percent year on year to ¥96 billion, with much of the growth coming from sponsored-search advertising. In fixed-line telecommunications, the third business for which SoftBank breaks out sales figures, net sales increased by 4.3 percent year on year to ¥133 billion, mainly as a result of making eAccess a subsidiary.

SoftBank has forecast at least ¥1 trillion in operating income under IFRS in the current fiscal year ending March 31, 2014.

Earlier this month, SoftBank completed its acquisition of a 78 percent stake in U.S. operator Sprint for US$21.6 billion.

Sprint reported a second-quarter net loss of $1.6 billion on Tuesday, weighed down by the cost of shutting its Nextel wireless network. Revenue, at $8.8 billion, was almost flat in comparison to the same quarter last year. The company dropped "Nextel" from its corporate name just a few days after shutting down the network it acquired through its 2005 purchase of Nextel.

John Ribeiro covers outsourcing and general technology breaking news from India for The IDG News Service.



Spoofed! Fake GPS signals lead yacht astray

U.S. researchers have managed to spoof GPS (Global Positioning System) signals and send a yacht hundreds of meters off course, while fooling the crew into thinking the yacht was remaining perfectly on course.

The test, conducted last month off the coast of Italy, is one of the most sophisticated ever reported against GPS and represents several years of work by the team at the University of Texas at Austin.

GPS works by measuring signals received from satellites that orbit about 12,427 miles above the Earth. By knowing the location of each satellite and very accurately timing when the signals arrive, it’s possible to determine a receiver’s location to within a few yards.

To fool the yacht’s GPS system, the researchers needed to generate fake signals that were slightly different from the legitimate ones. In theory, the navigation system would accept the signals, but the result would be a location that wasn’t completely accurate.

A typical GPS receiver relies on signals from at least four satellites, but accuracy is improved with more satellites. On the ocean, it’s possible to receive signals from about 10 satellites at any one time.

If only one satellite signal were faked, it might be discarded by the receiver as erroneous because it was out of character with all the others. And if half were faked, the system might sense it was being attacked or fed fake information.

“We mimicked the entire GPS constellation,” said Todd Humphreys, a researcher at the university’s department of aerospace engineering and engineering mechanics.

“We had a counterpart for each signal coming down from every satellite in the sky. When they mixed together with legitimate signals in the receiver, ours were slightly stronger,” he said in an interview.

Humphreys was on the yacht’s bridge when the experiment took place, and graduate students Jahshan Bhatti and Ken Pesyna were on an upper deck with the spoofing device.

He said that once the yacht’s GPS system was being fed the spoofed data, the researchers began to manipulate the fake GPS signals so the yacht would think it was heading off course. In reality, it hadn’t deviated from its course—yet. But once the erroneous position was fed to the yacht’s computer it issued a course correction that resulted in the yacht actually turning.

Because the navigation computer was basing its movements on fake signals, the computer chart on the bridge showed the yacht moving in a perfectly straight line.

“I saw the reactions of the captain and his first mate,” said Humphreys. “They have come to trust their electronic chart displays so much over the years, so when it came to that, they were very surprised.”

Once the team had tried their trick several times, the yacht was several hundred meters off course, said Humphreys.

To conduct their GPS spoofing attack, the researchers used a custom-built device on the upper deck of the yacht, close to the GPS antennas, but Humphreys said it could have been done from miles away.

Developing the spoofing device took several years of work, and it’s thought to be the first that has been publicly acknowledged.

“If it was to get out, it would be a real problem for transportation systems,” he said.

GPS sits at the heart of modern logistical systems that route trucks, ships and aircraft around the world. It’s considered to be such an important aid to global commerce that China and the European Union are building their own satellite navigation systems so they don’t have to rely on the U.S.-controlled GPS. So anything that could undermine confidence in the system is potentially serious.

Groups at universities around the world are looking at improvements that can make GPS more secure, but they face the constraints of working with an installed base of billions of receivers that need to continue to function.

“All of the most practical things we can do are the weakest,” said Humphreys. “All of the most impractical are the strongest. In the short term, all we can do is apply Band-Aids. It will be five or 10 years before we can do something stronger.”

Martyn Williams covers mobile telecoms, Silicon Valley and general technology breaking news for The IDG News Service.



Sprint promises wide rollout and device support for ex-Clearwire spectrum

Sprint says it will have live LTE sites using former Clearwire spectrum across the U.S. next year and expects all its new mobile devices in 2014 to be equipped for those frequencies -- though not necessarily iPhones.

The company gave an update on progress in its Network Vision upgrade project during a conference call on Tuesday to discuss second-quarter financial results, according to a transcript provided by Seeking Alpha. Earlier this month, the fourth-largest U.S. mobile operator got a shot in the arm with its US$21.6 billion acquisition by SoftBank and also bought out former partner Clearwire.

With the Clearwire acquisition, Sprint got access to an emerging Clearwire LTE network that it plans to use for extra mobile data capacity in densely populated areas. Though it uses a slightly different form of LTE than Sprint's and operates in a relatively short-range spectrum band, around 2.5GHz, the former Clearwire network could give the carrier a large amount of capacity to bolster services in cities.

The network had been intended for Sprint's use through the longstanding partnership between the two companies, but Sprint's takeover of Clearwire gave that plan a more solid foundation.

There were already about 2,000 Clearwire LTE sites completed when the buyout was completed earlier this month, said Steve Elfman, president of network operations and wholesale, on the conference call. He expects several thousand 2.5GHz LTE base stations on the air this year, with sites across the country next year, though not the full deployment of sites that will use the spectrum. The 2.5GHz radios don't have as long a range as Sprint's other gear, so they'll be deployed in a larger number of sites, he said.

Sprint plans eventually to operate LTE in three spectrum bands: Its own 1.9GHz band, the 800MHz frequencies from its defunct Nextel network, and the 2.5GHz spectrum. Earlier this month it introduced the first mobile device that will be able to use all those bands.

The carrier expects to have a few handsets with 2.5GHz capability in the fourth quarter, and starting in 2014, all its new devices will be able to use that spectrum, Elfman said. But asked later on the call whether that would include the Apple iPhone, Elfman clarified that Sprint couldn't say whether Apple would adopt 2.5GHz for that device.

"We can't confirm anything on the iPhone at this time or anytime," Elfman said.

Stephen Lawson covers mobile, storage and networking technologies for The IDG News Service. Follow Stephen on Twitter at @sdlawsonmedia. Stephen's e-mail address is stephen_lawson@idg.com




Windows 8.1 Enterprise Preview Reflects the Growing Trend of Working Remotely

Microsoft unleashed Windows 8.1 Enterprise Preview today. The early look at the enterprise version of Windows 8.1 follows the release of Windows 8.1 Preview at Microsoft’s BUILD conference last month, and includes a variety of tools that show Microsoft’s commitment to both BYOD and virtualization.

Aside from the slew of changes and enhancements in the regular Windows 8.1 Preview edition, Windows 8.1 Enterprise Preview also includes features uniquely designed for business customers. It adds business-friendly elements like DirectAccess and BranchCache, and it provides IT admins with the power to configure and lock down the Start screen on Windows 8 clients.

Microsoft also has tools in Windows 8.1 Enterprise Preview to help out with BYOD and virtualization: Windows To Go and Virtual Desktop Infrastructure (VDI). Windows To Go lets a company put an entire managed Windows 8 desktop environment on a bootable USB thumb drive, and VDI gives the business the tools to let users run critical business software from virtually any Internet-connected device.

One of the hottest trends in business technology today is mobility and working remotely. The driving forces behind working remotely are the “bring your own device” (BYOD) trend and virtualization.

More and more companies are embracing BYOD and allowing (or requiring) employees to provide their own PCs and mobile devices. BYOD can be a cost-cutting measure for the company, because the employee is taking on some (or all) of the burden of purchasing the PC. BYOD enables users to be more productive and have higher job satisfaction because they get to use the hardware they prefer, and are more comfortable with.

BYOD also introduces some unique concerns, though, when it comes to enforcing policies and protecting company data. Regardless of its benefits, companies can’t just let employees connect rogue computers to the network, or store sensitive company data on a personal PC without any protection. The nice thing about Windows To Go is that it turns any Windows 7 or Windows 8 device into a managed Windows 8 PC without installing any additional software, or putting the personal applications or data of the employee at risk.

Another factor in working remotely is virtualization. Whether hosted locally or in the cloud, virtual servers allow a company to maximize the value of its investment in hardware and adapt quickly to changing demand or business needs. From an endpoint perspective, virtual applications or virtual desktops are even more valuable. A virtual desktop infrastructure like the one in Windows 8.1 Enterprise simplifies deployment and management of software because the company only has to install and maintain it in one place. At the same time, it helps users get more done even on older or weaker hardware, because much of the processing overhead is handled on the server end.

Small and medium businesses have a lot to gain from both BYOD and virtualization. The features and capabilities of Windows 8.1 Enterprise Preview demonstrate Microsoft’s commitment to keeping SMB customers on the cutting edge.



Tuesday 30 July 2013

Dell's Project Ophelia could be more bad news for PCs

Dell is shipping Project Ophelia devices to early beta testers. PC sales are already suffering at the hands of mobile devices, and now Dell’s Android PC-on-a-stick threatens the relevance of traditional PCs from a different angle.

First, a little about Project Ophelia. The device is about the size of a large USB thumb drive. Instead of just flash-based storage, though, Project Ophelia packs a Rockchip RK3066 processor and 1GB of RAM, as well as both Bluetooth and Wi-Fi connectivity into that small space. It also has a microSD card slot to add additional storage if necessary.

It runs on Google’s Android mobile OS. The device demonstrated at Mobile World Congress in Barcelona earlier this year ran Android 4.1 (a.k.a. “Jelly Bean”), but it seems reasonable to assume Dell will ship the device with the current version of Android before its official launch, which is expected to be the end of this year.

Project Ophelia is not a revolution that will make PCs irrelevant overnight. It is rumored to cost a meager $100 and plugs into an HDMI or MHL port on a TV or monitor (HDMI doesn't transmit power, so it requires a separate USB connection when using that input). Android is great at what it does, but much of the business world runs on the Microsoft Office productivity suite and line-of-business or custom applications developed for a Microsoft Windows environment.

If that describes your company, Project Ophelia probably isn't for you. However, businesses that have embraced virtual servers and virtual PCs and take advantage of cloud-based servers and applications could benefit from a device like Project Ophelia.

Wyse PocketCloud enables Project Ophelia devices to connect with resources and data.

Although Android itself is not a threat to Windows as a desktop operating system, the value of Project Ophelia is that it’s not limited to what you can run on Android or on the device itself. It connects to the Web, which means that it can access and work with just about any cloud-based applications and services, and it connects with Dell’s Wyse PocketCloud, which can be used to run a virtual desktop environment.

Project Ophelia can’t run Microsoft Office natively, but it can connect to services like Google Docs or Office Web Apps, or an Office 365 account. It won’t run Windows-based software, but it can connect to a virtual server or desktop environment, and accomplish the same thing.

A small or medium business with the right infrastructure can shed traditional PCs, and replace them with inexpensive Project Ophelia devices that fit in your pocket and turn any display with an HDMI or MHL port into a functional computer.

Tony is principal analyst with the Bradley Strategy Group, providing analysis and insight on tech trends. He is a prolific writer on a range of technology topics, has authored a number of books, and is a frequent speaker at industry events.



Apple supplier Pegatron slammed for alleged labor abuses in China

Apple supplier Pegatron is facing criticism from a watchdog group for poor working conditions at its factories in China.

The Taiwanese electronics maker came under fire for allegedly violating Chinese labor laws with the publication Monday of a new 60-page report from New York-based China Labor Watch that documents conditions at the factories.

The alleged violations include unfairly deducting or failing to pay wages, providing insufficient worker training, and making overtime work mandatory, among others.

The report also questioned Apple's efforts to cap the work week at its supplier factories at 60 hours. China Labor Watch's investigation found that hours ranged from 66 to 69 per week at the facilities, and that Pegatron was allegedly falsifying worker attendance records to keep the reported hours down.

There have been rumors that Pegatron will make a budget iPhone for Apple.

China Labor Watch, which has been critical of Apple and Samsung for their labor policies in China, investigated three Pegatron factories in China, one of which it claims is building the budget version of the iPhone. From March to July of this year, the group sent undercover investigators to work at the factories and interview nearly 200 employees.

Apple has been in "close contact" with China Labor Watch over the last several months, and is investigating the reported issues, the company said in a statement Monday.

Since 2007, the U.S. tech giant has conducted 15 audits of Pegatron facilities covering more than 130,000 workers, Apple said. In the past 18 months, surprise audits were made at two of the Pegatron factories named in China Labor Watch's report.

"Our most recent survey in June found that Pegatron employees making Apple products worked 46 hours per week on average," Apple added. The company, however, is sending teams to investigate the three Pegatron facilities this week, and is requiring the Taiwanese manufacturer to reimburse workers for any instances of unpaid compensation.

Pegatron is also investigating the claims and will correct any violations found, the company's CEO Jason Cheng said in a statement. "We strive to make each day at Pegatron better than the last for our employees. They are the heart of our business," he said.

China Labor Watch had previously accused the company of poor working conditions last year as it was meeting orders for the iPad Mini. In 2011, Pegatron also gained media attention after an explosion at a factory in Shanghai sent 61 workers to hospital.

In its latest report, the watchdog group claimed Pegatron had failed to create "effective grievance channels" so that workers could voice their concerns to management. A pregnant woman was also found logging overtime hours, a violation of Chinese labor laws, the group said.

Pegatron, however, said the company has spent the last two years establishing multiple channels so that workers can communicate their needs. "In addition, Pegatron helps create the educational programs including parenting seminars for pregnant workers, management courses, and accredited higher education classes," the company added.

Apple and its suppliers have for years now faced criticism over working conditions at iPhone and iPad factories in China. But the U.S. company has pledged to protect its workers and provide a fair working environment for them. Last year, Apple invited the Fair Labor Association to conduct audits of select factories of its supplier Foxconn Technology Group.

In May, the Fair Labor Association said Foxconn was making progress to improve conditions at the factory, but that working hours at the facility still exceed Chinese legal limits.




Five bookmarks every computer user should have

The Boy Scouts got it right: be prepared.

Whether you've just purchased a new PC or you've been using the same one for years, chances are good that at some point, you're going to need help and/or information.

This could be anything from needing to know the wattage of the power supply (so you'll know if a particular video-card upgrade is compatible) to needing a specific driver after reinstalling Windows.

That's why I've prepared this list of handy destinations you'll want to keep bookmarked in your browser. Because when the time comes, you'll be glad to have them at your fingertips.

1. Your PC's support page. This is arguably the most important link to have, because it can usually steer you to at least some of the items listed below. Start with your PC maker's main support page, then search for your particular model. Once you've found it, bookmark it—and consider it home base for everything you might need.

2. Your PC's user guide. These days it's the rare PC that comes with a printed instruction manual. Yours may have come with an electronic guide preloaded on the hard drive. But did you keep it? Do you even know where it is? Fortunately, most PC makers keep copies of the manuals online. It may be available via the support page above, or you may have to do some poking around elsewhere on the vendor's site. Either way, once you've found it, bookmark it.

3. Your PC's driver download page. As I noted last week in "The myth of driver backups," you'll definitely want to know where you can find the proper drivers for your model, just in case one gets corrupted or you need them after a Windows reinstall.

4. A user forum for your PC. When it comes to tech support, nothing beats your fellow users. Some PC makers maintain user forums where you can post questions and, hopefully, get help. If yours doesn't, head to a site like Bleeping Computer or PC Help Forum, which may not be model-specific, but do offer categorical discussion boards (like for "laptops" and "Windows 8").

5. PC World. Aw, you knew I had to include that one, right?

Contributing Editor Rick Broida writes about business and consumer technology. Ask for help with your PC hassles at hasslefree@pcworld.com. Sign up to have the Hassle-Free PC newsletter e-mailed to you each week.

For more than 20 years, Rick Broida has written about all manner of technology, from Amigas to business servers to PalmPilots. His credits include dozens of books, blogs, and magazines. He sleeps with an iPad under his pillow.



Flash breakthrough promises faster storage, terabytes of memory

In the ongoing quest for faster access to data, Diablo Technologies has taken what could be a significant next step.

Diablo's Memory Channel Storage (MCS) architecture, expected to show up in servers shipping later this year, allows flash storage components to plug into the super-fast channel now used to connect CPUs with memory. That will slash data-access delays even more than current flash caching products that use the PCI Express bus, according to Kevin Wagner, Diablo's vice president of marketing.

The speed gains could be dramatic, according to Diablo, helping to give applications such as databases, big data analytics and virtual desktops much faster access to the data they need most. Diablo estimates that MCS can reduce latencies by more than 85 percent compared with PCI Express SSDs (solid-state disks). Alternatively, the flash components could be used as memory, making it affordable to equip servers with terabytes of memory, Wagner said.

Other than on-chip cache, the memory channel is the fastest route to a CPU, Wagner said. Not only do bits fly faster over this link, there are also no bottlenecks under heavy use. The connection is designed to be used by many DIMMs (dual in-line memory modules) in parallel, so each component doesn't have to relinquish the bus for another one to use it. That saves time, as well as CPU cycles that would otherwise be used managing the bus, Wagner said.

The parallel design of the memory bus also lets system makers scale up the amount of flash in a server without worrying about diminishing returns, he said. A second MCS flash card will truly double performance, where an added PCIe SSD could not, Wagner said.

Diablo, which has been selling memory controllers for about 10 years, has figured out a way to use the standard DDR-3 interface and protocols to connect flash instead of RAM to a server's CPU. Flash is far less expensive than RAM and more compact: the MCS components, which come in 200GB and 400GB sizes, will fit into standard DIMM slots that typically accommodate just 32GB or so of memory. The only adaptation manufacturers will need to make is adding a few lines of code to the BIOS, Wagner said.

Enterprises are more likely to use MCS as high-capacity memory than as low-latency storage, said analyst Jim Handy of Objective Analysis.

"Having more RAM is something that a lot of people are going to get very excited about," Handy said. His user surveys show most IT departments automatically get as much RAM as they can for their servers, because memory is where they can get the fastest access to data, Handy said.

"Basically, you'd like everything to be in the RAM," Handy said. Virtualized data centers, where many servers need access to the same large set of data, still require a shared data store. But in other applications, especially with databases and online transaction processing, storage is just a cheaper and more plentiful -- but slower -- alternative to memory. "Everything that's on the storage is there just because it can't fit on the RAM," he said.

To implement the MCS architecture, Diablo developed software and a custom ASIC (application-specific integrated circuit), which it will sell to component vendors and makers of servers and storage platforms. Flash vendor Smart Storage Systems, which earlier this month agreed to be acquired by SanDisk, will be among the companies using the MCS technology, Wagner said. In addition, a tier-one server vendor is preparing about a dozen server models with the technology and will probably ship the first of them this year, Wagner said.

For the most part, Diablo doesn't expect consumers or small enterprises to install MCS flash on their own computers. However, Diablo may work directly with enterprises that have very large data centers they want to accelerate, he said.

Using MCS flash to supplement DRAM would dramatically reduce the per-gigabyte cost of memory but also would allow for further consolidation of the servers in a data center, Wagner said. A large social networking company with 25,000 servers analyzed the MCS technology and said it would make it possible to do the same amount of work with just 5,000 servers.

That's because the current DRAM-only servers can be equipped with just 144GB of memory, but MCS would allow each server to have 16GB of DRAM and 800GB of flash. With that much memory, each server can do more work so fewer are needed, Wagner said. Fewer servers would mean savings of space and energy, which would translate into lower costs, he said.
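That consolidation claim can be sanity-checked with a back-of-envelope model. The assumption that useful work per server scales with addressable memory is ours for illustration; Diablo's analysis presumably weighs more than raw capacity:

```python
# Back-of-envelope: DRAM-only server vs. a DRAM + MCS-flash server,
# assuming work per server scales with addressable memory.
dram_only_gb = 144            # today's DRAM-only configuration
hybrid_gb = 16 + 800          # 16 GB DRAM plus 800 GB of MCS flash

scale_up = hybrid_gb / dram_only_gb          # ~5.7x capacity per server
servers_needed = round(25_000 / scale_up)    # ~4,400 -- same ballpark as
print(round(scale_up, 1), servers_needed)    # the 5,000-server figure
```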




Global cybercrime costs billions, new estimates suggest

Cybercrime and espionage could be costing the world between $70 billion and $400 billion a year from a total global economy of $70 trillion, a new estimate by the Center for Strategic and International Studies (CSIS) has calculated.

In the context of the U.S. economy, the damage is possibly equivalent to 500,000 displaced jobs. In truth, though, the McAfee-sponsored study, "The Economic Impact of Cybercrime and Cyber Espionage," admits that even coming up with these numbers is prone to being defeated by a raft of imponderables.

What the researchers were determined to do was calculate the negative effects using something more substantial than the unsatisfactory surveys often used by security vendors to describe cybercrime, the CSIS said.

The first context is, what do other negatives cost economies? In the U.S., for instance, car crashes cost somewhere between $99 billion and $168 billion a year, depending on which official estimate and year is used. Similarly, illegal drug trafficking is a $600 billion global industry.

Set against these vast numbers, the losses from cybercrime look less alarming although in the case of the car industry not all the costs will be losses; fixing cars and buying new ones generates income for other types of business in ways that cybercrime doesn't.

Cybercrime's main unintended economic benefit has been to prime the global security industry, the size of which is a separate topic.

What the CSIS's difficulties in coming up with accurate figures suggest is that the task might be nearly impossible. Direct effects are hard enough to model let alone indirect ones.

A second point is that using selective estimates based on surveys—wheeled out by governments in particular—is almost certainly misleading.

"We believe the CSIS report is the first to use actual economic modeling to build out the figures for the losses attributable to malicious cyber activity," said Mike Fey, executive vice president and chief technology officer at McAfee.

"Other estimates have been bandied about for years, but no one has put any rigor behind the effort. As policymakers, business leaders and others struggle to get their arms around why cyber security matters, they need solid information on which to base their actions."

Or is conceiving of "costs" as losses the wrong way to approach the whole issue? The CSIS suggests that we view cybercrime losses in the same way we view losses from other activities, as something tolerated to access the benefits.

The alternative, then, is to worry less about the sums of money involved so much as the scope of the actual effects themselves. Cybercrime's damage is as much psychological as fixed in dollars.

For example, Chinese espionage and intellectual property theft might not generate huge losses for the U.S. economy per se but could still warp relative economic performance in significant ways.

"Using figures from the Commerce Department on the ratio of exports to US jobs, we arrived at a high-end estimate of 508,000 jobs potentially lost from cyber espionage," said James Lewis, co-author and CSIS director.

"As with other estimates in the report, however, the raw numbers might tell just part of the story. If a good portion of these jobs were high-end manufacturing jobs that moved overseas because of intellectual property losses, the effects could be more wide ranging," he said.
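Lewis's 508,000 figure is a straightforward ratio calculation, sketched below. The jobs-per-billion-of-exports ratio is our assumed input for illustration; CSIS used Commerce Department figures not reproduced here:

```python
# Illustrative reconstruction of the jobs-displaced estimate:
# (loss in $ billions) x (jobs supported per $1B of exports).
jobs_per_billion_exports = 5_080   # assumed ratio, for illustration only
high_end_loss_billions = 100       # assumed high-end espionage loss, in $B

jobs_displaced = jobs_per_billion_exports * high_end_loss_billions
print(jobs_displaced)  # 508000
```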

What is clear is that whatever it is costing, cybercrime didn't exist 15 years ago and its rapid rise must be having some effect. A 2012 report from Moscow-based Group-IB found that cybercrime had mushroomed during 2011 into a $12.5 billion industry in terms of its income stream. Russian-speaking countries accounted for around a third of that total.



How to put your DVD library in the cloud

DVDs are so analog. Sure, they’re digitally encoded versions of your favorite movies and TV shows, but they’re trapped on physical platters. If you want to watch something, you have to find the disc, slide it into a DVD player—or a computer with a DVD drive—and flip your TV to the proper input.

As DVD players leave the market and DVD drives disappear from PCs, it’s time to free your films from their shiny silver prisons so they’ll still be watchable in the player-free future. As a bonus, ripping your movies off the disc extends your viewing options to your phone, tablet, Roku box, game console, and more.

Once you convert your movie library to digital files, you can store those files on a server and stream them anytime, from anywhere. Media companies are slowly waking up to how convenient this is and are building services like Flixster, Ultraviolet, and Vudu, which offer DVD-to-digital conversions: Pop a disc into your PC, and the service adds it to your online account—at which point you can stream it to an app or to a set-top box.

Each such conversion will cost you a few bucks, however, and you won’t be able to convert some movies (owing to studio-imposed restrictions). But you can take a more hands-on approach and accomplish the same thing yourself, spending little or no money in the process.

To turn a real-world DVD into a digital file that you can stream to the viewing platform of your choice, you have to rip it from the disc. As with ripping CDs, you’ll copy the contents of a DVD to your computer, and then convert those contents to a cloud-friendly format. In order for this to work, obviously, you must have access to a PC with a DVD drive.

One quick caveat: Though the police won’t break your door down for ripping DVDs that you already own, the process does technically violate copyright law. Still, as long as you’re not sharing movies on BitTorrent or selling copies on the street, it qualifies as fair use—just as ripping CDs (which is technically legal) does.

If you want the solace of having a tech support team to call in case of technical difficulties, you can buy decent DVD-ripping software like WinX DVD Ripper Platinum. (Image: Digiarty Software)

That said, you’ll need a DVD-ripping utility that can remove the Content Scrambling System (CSS) or similar built-in protections that prevent straight-up copying. You can buy a commercial ripper like DVDFab DVD Ripper ($50) or WinX DVD Ripper Platinum ($40), both of which circumvent most copy protection schemes and convert the discs to the mobile- or home-theater-friendly format of your choice.

But a free option works nearly as well: perennial favorite HandBrake. It, too, converts DVDs, though it needs a little help to remove copy protection. After installing the program, you have to obtain a file called libdvdcss-2.dll, which you can download from this public archive site.

After downloading libdvdcss-2.dll, copy it to the folder where you installed HandBrake—usually C:\Program Files\Handbrake. That should do the trick. On my system, adding that file enabled HandBrake to read all of the DVDs I tested, including Monty Python and the Holy Grail and Despicable Me.

Rip your favorite movies with HandBrake, and you can watch them whenever and wherever you want, while the discs remain safely stored away.

Now you’re ready to migrate your movies into the all-digital future. Load up HandBrake, and insert one of your DVDs into the PC. Then click the Source button, and choose the location of your disc—on my PC it’s the D: drive. Be patient—it may take a few minutes for HandBrake to read the contents of the DVD.

When HandBrake has completed its preliminary work, the Source field should display the name of the movie, and the Title field should list something with a runtime that matches the movie’s length. If the runtime seems way too short, open the Title drop-down menu and look for an entry that has an appropriately movie-length runtime. Now select Browse, and choose a destination and filename for the video file you’re about to create—a file named with the film’s original title in the Windows Videos folder would be the obvious choice.
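The rule of thumb here, pick the title whose runtime looks like the movie, usually amounts to picking the longest title. An illustrative Python sketch; `pick_main_title` is a hypothetical helper and the (title, runtime) pairs are made-up data:

```python
def pick_main_title(titles):
    """Given (title_number, runtime_in_seconds) pairs scanned from a DVD,
    return the entry most likely to be the main feature: the longest one.
    Extras and trailers show up as the short titles."""
    return max(titles, key=lambda t: t[1])
```

For example, among titles of 312, 6154, and 98 seconds, the 6154-second entry is the feature film.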

HandBrake looks a little austere, but it offers myriad options for converting and formatting your movies for smartphones, tablets, HDTVs, and more.

Finally, choose a preset for the conversion. If you plan to watch the movie mostly on mobile devices, select the formatting option that best matches what you have—or opt for Universal if you want something that can play just about anywhere. Click Start, and then be prepared to wait: The ripping and conversion process can take some time.
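If you end up ripping a stack of discs, HandBrake's command-line build, HandBrakeCLI (a separate download), can automate the same steps. Here's a sketch in Python of assembling the equivalent invocation; `-i`, `-o`, and `--preset` are HandBrakeCLI's standard flags, but verify them against `HandBrakeCLI --help` on your version:

```python
def handbrake_command(source, output, preset="Universal"):
    """Build a HandBrakeCLI invocation mirroring the GUI steps above:
    pick a source, pick a destination file, pick a preset.
    Pass the result to subprocess.run() to start the rip."""
    return ["HandBrakeCLI", "-i", source, "-o", output, "--preset", preset]
```

Wrap the call in a loop over a list of discs or ISO files and you have a basic batch ripper.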

Once HandBrake finishes doing its thing, play back the final product to confirm that it looks okay. Then remove the DVD from your PC and park it in storage somewhere—you won’t need it for the foreseeable future.

Once you’ve liberated your movie library, you’ll want to make it available for viewing anytime, anywhere—not just on the PC that houses the files. You have a couple options at this point: You can upload everything to a remote-storage service like Dropbox or SugarSync, and then stream movies on demand to whatever devices support that service; or you can turn your PC into a media server and effectively host your own “cloud.”

Using a cloud service affords you a built-in remote backup of your movie library and lets you stream videos via the service’s mobile app—a nice option when you’re traveling. However, a free account on Dropbox or SugarSync nets you only a few gigabytes, so plan on paying a monthly fee if your library is large. What’s more, it takes time to upload a big batch of movies, and you won’t be able to stream them to set-top boxes. The limitations aren’t terrible, but I think most people would be better off setting up a personal media server with free streaming software.

Plex: Plex is free software that gives you a slick interface for managing your media-streaming PC.

A program like Plex offers the best of both worlds. Plex indexes all of the media on your PC, and then streams it to mobile and connected devices alike—so you can watch movies on your smartphone or tablet while you’re on the go, or on set-top boxes like the Roku, PS3, and Xbox.

The downside is that you need to leave your PC running 24/7, at least during the times when you want to have streaming access to your movie library. The Plex software taxes your system a bit, too, especially when it’s indexing files and transcoding them while streaming.

Back in December, PCWorld’s Alex Castle detailed how to get started with Plex, so I won’t rehash that setup process here. Instead, let’s look at pairing Plex with a Roku box so you can enjoy all of your newly ripped movies on your big-screen TV.

First, you’ll want to make sure that Plex has scanned the folder containing your ripped movie files. Right-click the Plex icon in your System Tray, and click Media Manager. In the My Library section, click the plus sign, choose Movies, and then add the appropriate folder.

While Plex is scanning the contents, head to Roku’s site and add the Plex channel. You’ll need to be signed in to your Roku account. If you own a newish Samsung TV, an Xbox 360, a PlayStation 3, or a Google TV box, you should be able to find a Plex channel on that device as well.

Plex: Plex on Android.

Now turn on your TV, fire up the Roku box, and flip to the Plex channel. You should have access to all of your movies! Keep in mind, though, that new channels can take a few minutes to appear, and Plex can take even longer to scan and catalog your movie folder. So don’t be alarmed if you don’t see your full library right away. If you take the time to set up the Plex mobile client, you should also be able to stream your movies to your smartphone or tablet with ease.

And that’s it! As long as Plex is running, you can access your movies via apps and set-top boxes. You’ve just given your DVD library a new, cloud-savvy lease on life.

For more than 20 years, Rick Broida has written about all manner of technology, from Amigas to business servers to PalmPilots. His credits include dozens of books, blogs, and magazines. He sleeps with an iPad under his pillow.
More by Rick Broida


View the original article here

HTC sees more basic smartphones as way to regain market share

HTC plans to introduce a series of mid-tier and entry-level smartphones later this year as a way to regain market share, after posting disappointing financial results in the second quarter. 

The new phones will arrive at the end of the third quarter or early in the fourth quarter, HTC CEO Peter Chou said in a conference call on Tuesday. "We suffered a little bit in this mid-tier market share from the end of last year to so far, in terms of competition," he added. "However, with this new range of mid-tier products we will address those challenges."

HTC has struggled to lift its earnings over the past 18 months as it faced an increasingly competitive smartphone market. In response, the Taiwanese company has focused on boosting its brand name with its HTC One flagship series, along with spending more on marketing.

During this year's second quarter, the company's newest HTC One handset was on sale globally. The high-end handset, priced at US$599 without carrier subsidies, gained many positive reviews. But at the end of the period, the smartphone maker still posted an 83 percent year-over-year decline in net profit.

Despite the recent string of weak quarterly earnings, HTC's CFO Chialin Chang said the company will restore its profitability soon. Its newest HTC One device sold better than the flagship products the company had during the same period last year, he added.

Earlier this month, HTC unveiled a new mini version of its HTC One handset that will arrive globally in September.

The company expects the phone to help maintain its sales momentum, but HTC's CEO also acknowledged that gaining product visibility in today's market would be a challenge.

"The market is a little confused right now. There are too many products coming out," he said. "That's why we are planning to have a new range of products to try and stay competitive in the market".

HTC, however, denied that it will move to purely selling low-end phones, and tried to reassure investors that its current flagship phone is still selling well globally. "The HTC One will not just come out and die. The HTC One momentum continues to stay very, very strong," Chou said.

For this year's third quarter, HTC projects its revenue will amount to NT$50 billion to NT$60 billion, a year-over-year decline of between 14 percent and 28 percent.


View the original article here

Introduction to backup

Rickaber asked the Utilities forum to explain the basics of backing up.

Not backing up is like not wearing a seatbelt. You can go months or even years without a problem, then disaster strikes and you're in serious trouble. Only a few hours before writing this article, I received an email from a reader who couldn't access his hard drive, which contained files vital to his business. His letter didn't even include the word backup.

It's a simple rule: Never have only one copy of anything.

[Email your tech questions to answer@pcworld.com or post them on the PCW Answer Line forum.]

You absolutely must back up your data files every day. And no, you don't have to copy each of those files every day. Any decent file backup program can do an incremental backup--copying only the files that have been created or changed since the last backup.
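In case the mechanics are unclear, the selection step of an incremental backup boils down to comparing modification times against the time of the last backup. A minimal Python sketch (real backup tools also track deletions, handle open files, and copy metadata):

```python
import os

def files_to_backup(root, last_backup_time):
    """Return the files under `root` modified since `last_backup_time`
    (a Unix timestamp): the candidates for an incremental backup.
    A sketch of the selection step only, not a full backup tool."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```

A real tool would then copy that list to the backup destination and record the new backup time for the next run.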

By data files, I mean your documents, photos, spreadsheets, songs, and so on. If you back up all of your Libraries, or everything in the Users folder, you should get all of these.

This kind of backup is called a file backup, because it copies your files individually rather than imaging the whole drive.

You might also consider backing up your system--Windows and your applications--although this isn't essential. Should some disaster render Windows unusable, you can always go through the long process of reinstalling the operating system, personalizing the settings, and reinstalling all of your programs (see Reinstall Windows Without Losing Your Data for details). But if you have a system backup, you can simply restore that in much less time and with much less effort.

The only way to reliably back up Windows is with an image backup--which creates a record of everything on the drive or the partition. You don't have to do this regularly. I back up the system four times a year (if I remember to do it).

Windows 7 and 8 both come with decent backup programs capable of both file and image backups. I prefer the free version of EaseUS Todo Backup, which is more versatile and also does both.

What do you back up to? External hard drives are reasonably cheap and fast, and are clearly the best options for the two programs I just recommended.

But you might want to consider online services that will back your files up to the cloud. Online backup puts a great deal of physical distance between your computer and the backup--the same fire, flood, or burglar won't deprive you of both. But it's slower and, in the long run, more expensive.

I've been using MozyHome for cloud backup for years. I can't say it's better than its competitors, but I can say that it works reliably.

Read the original forum discussion.

When he isn't bicycling, prowling used bookstores, or watching movies, PC World Contributing Editor Lincoln Spector writes about technology and cinema.
More by Lincoln Spector


View the original article here

Is Snowden a Russian citizen? No, it's just a Google Translate trick

The announcement appeared in small text on the Russian president's website: "Let me speak from my heart: Edward Snowden is a Russian Citizen. Thanks to @homakov!"

The Twitter handle belongs to Egor Homakov, a security researcher with a penetration testing group called Sakurity, which does freelance consulting.

Homakov's spoof message didn't actually appear on Vladimir Putin's website. Instead, Homakov found a trick that allowed him to modify content delivered to a user from Google Translate, which he describes on his blog.

Interestingly, Homakov and Google agree that his finding isn't actually a security issue per se. "As the researcher implied at the end of his original blog post, this is really not a security vulnerability," according to a statement from a Google spokeswoman.

Instead, Homakov uses JavaScript to manipulate the way Google serves translated content from an original, untranslated page.

When Google translates something, it returns the content from a separate, sandboxed domain: "translate.googleusercontent.com." Google allows third-party scripts to run in that domain, which is what allowed Homakov to modify the content.

Google, which rewards security researchers for finding certain kinds of software flaws, advises that it doesn't pay for cross-site scripting vulnerabilities in the ".googleusercontent.com" domain.

The company said it maintains a number of domains that use the same-origin policy, a complicated set of conditions intended to allow interactions between sites in the same domain but prevent meddling from other sources.
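At its core, a same-origin check compares three URL components: scheme, host, and port. A simplified Python sketch (real browsers add wrinkles such as default ports and cookie scoping), which also shows why translate.googleusercontent.com is a different origin from any google.com page:

```python
from urllib.parse import urlsplit

def same_origin(url_a, url_b):
    """Two URLs share an origin only if scheme, host, and port all match.
    This is the core rule of the browser same-origin policy, simplified:
    default ports (80/443) are left as None rather than normalized."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)
```

Because the translated page lives on a different host, a script running there can touch that page's content but not anything served from google.com itself.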

In the case of ".googleusercontent.com," Google says that "unless an impact on sensitive user data can be demonstrated, we do not consider the ability to execute JavaScript in that domain to be a bug."

Still, Homakov's trick is amusing, not least because Snowden, a former NSA contractor who has released batches of sensitive material documenting U.S. government surveillance efforts, is still marooned in Russia while he tries to secure asylum.

Homakov's experiment also proves that users may want to be cautious when using Google Translate: If the content is nearly unbelievable, it might be best to find a native speaker to confirm the translation.

Send news tips and comments to jeremy_kirk@idg.com. Follow me on Twitter: @jeremy_kirk


View the original article here

Microsoft accuses Microsoft of copyright infringement, asks Google to scrub search links

Chalk this up in the "funny, but not really" category: Last week, a company working with Microsoft to combat copyright pirates asked Google to remove multiple Microsoft web pages from Google searches—for infringing Microsoft copyrights.

Yep, Microsoft filed a Digital Millennium Copyright Act takedown request against itself, as Torrentfreak first spotted.

This wasn't a case of internal idiocy or revenge, and it's also not quite as amusing as it may appear at first glance. Instead, it highlights the harmful way copyright holders use automatically generated DMCA takedown requests to try to scrub the net of pirated content, casting a wide net that often ensnares innocent webmasters with false infringement claims.

Google's record of LeakID's DMCA takedown request against Microsoft.com.

If a copyright holder feels that a particular website is ripping off its work, it can send Google a DMCA takedown request and ask for the infringing site to be removed from the search engine. If Google determines that the site does indeed stomp on the copyright holder's intellectual property rights, the site's links disappear from Google Searches. So far, so good, right?

Torrentfreak: A detailed look at the Microsoft DMCA takedown request.

Copyright holders and the companies they hire to manage DMCA takedown requests—in Microsoft's case, a third party called LeakID—frequently automate the process, resulting in a flood of requests that are sometimes erroneous and aren't always checked for accuracy before filing.

These false requests are far from rare. Consider past Microsoft DMCA takedown requests that accidentally targeted the U.S. Environmental Protection Agency, the Department of Health and Human Services, the National Institutes of Health, TechCrunch, Wikipedia, BBC News, Bing.com, Google.com, and many others. Or HBO's attempt to remove links to the open-source VLC media player, or this big list of "DMCA notices so stupid it hurts," or Google's examples of the "inaccurate" DMCA takedown requests it has received over the years, or…

The number of weekly DMCA takedown requests received by Google.

Over the past year, copyright holders such as Microsoft, the Recording Industry Association of America, NBC, Walt Disney, and others have started blasting Google with vast numbers of takedown requests. While Google used to receive around 225,000 DMCA requests per week, according to the company's own Transparency Report, copyright holders now hit the search engine with 3.5 to 4.5 million takedown requests each and every week.

Around the time of the ramp-up—August 2012—Google announced it would start penalizing sites that are repeatedly accused of copyright infringement, ranking them lower in search results.

Between January and July 2013, Google erased more than 100 million links from its search results as a result of DMCA takedown requests. Torrentfreak reports that figure as already being more than twice the total number of links Google erased in all of 2012.

Google's DMCA stats for the past month.

For its part, Google does appear to actively police the DMCA takedown requests it receives. Around three percent of DMCA takedown requests the company receives are rejected, and rejected URLs are listed on the Transparency Report's main copyright page. And yes, the folks in the Googleplex caught LeakID's attempts to scrub the Microsoft.com links before the six Office solutions pages disappeared from search results.

But few companies have Google's resources. The Safe Harbor provision of the DMCA rewards websites that "take down first and ask questions later," and for every amusing story like this one, there are dozens of more harmful false takedown requests. Consider, too, that if even 1 percent of the 100 million-plus URL-removal requests catches an innocent page in the automated crossfire, that's already 1 million web pages affected.

The Electronic Frontier Foundation filed a court brief in 2012 arguing that automated DMCA requests that aren't reviewed by actual humans should be considered negligent, thereby opening the requestor to sanctions. Nothing ever came of the attempt, however—and automated, unreviewed requests generated by Microsoft contractors are still trying to erase parts of the Microsoft.com website to this very day.

Brad Chacos spends the days jamming to Spotify, digging through desktop PCs and covering everything from BYOD tablets to DIY tesla coils.
More by Brad Chacos


View the original article here

Microsoft to connect schools in South African white-spaces project

Microsoft is expanding the push for so-called "white spaces" broadband to South Africa, where it will help to deploy the technology in a pilot project serving five primary and secondary schools.

The pilot project is aimed at getting schools in rural parts of the country's northeastern Limpopo province connected to the Internet. If successful, it could give South Africa a tool that would help the country reach its goal of affordable broadband for 80 percent of the population by 2020.

White spaces are unused frequencies in TV bands, which Microsoft, Google and others advocate making available on an unlicensed basis for wireless broadband. Advocates won approval for that use in 2008 in the U.S., which was the first country to authorize white spaces. To ensure the new networks use only the slivers of spectrum in between licensed uses, devices need to have a database of licensed users and sensors to detect other activity in the band.
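Conceptually, a white-spaces device's channel selection is an exclusion check against the licensed-user database and its own sensing results. A Python sketch with illustrative, made-up data structures (real white-spaces databases are geolocation web services, not in-memory dicts):

```python
def free_channels(all_channels, licensed_db, location, sensed_busy):
    """Return the TV channels a white-spaces device may use at `location`:
    those not claimed by a licensed user in the database AND not heard
    as active by the device's own spectrum sensing.

    licensed_db maps a location key to the set of channels licensed there;
    sensed_busy is the set of channels local sensing found occupied."""
    licensed_here = licensed_db.get(location, set())
    return [ch for ch in all_channels
            if ch not in licensed_here and ch not in sensed_busy]
```

The two-step check is the point: the database protects licensed broadcasters the device cannot hear, while sensing catches activity the database does not know about.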

Commercial white-spaces networks are just starting to get off the ground in the U.S., but Microsoft has talked with governments in at least 50 other countries about the possibility of making such frequencies available, said Paul Garnett, Microsoft's director for technology policy.

TV channels occupy the same general area of the spectrum worldwide, so widespread use of white spaces could create a market for mass-produced, low-cost wireless devices, Garnett said. Africa, with a population of more than 1 billion, could play a big role in making that happen, he said.

"That's a huge market, so if there are ways for us to expand access in those markets, then yes, that absolutely helps to create that global marketplace that any new technology is looking for to scale," Garnett said.

Just as the U.S. did, countries across Africa are converting their TV networks from analog to digital, which makes broadcasting more efficient and frees up some of the bandwidth for other uses. But in South Africa, there also are more frequencies in that band that haven't been claimed for anything, he said. That might create an easier path for unlicensed white spaces, which in the U.S. faced strong opposition from TV broadcasters and some other wireless users. South Africa is still evaluating whether to authorize unlicensed white-spaces networks, Garnett said.

"It's an even bigger opportunity ... for this kind of access to radio spectrum than exists in the U.S. or the U.K.," he said. For example, while some U.S. residents suffer from slow DSL (digital subscriber line) speeds, they at least have copper phone lines. Some parts of Africa have no connectivity at all, he said.

In the South African project, Microsoft will work with the University of Limpopo, government agencies and a local network builder called Multisource. The project will set up a central white-spaces radio at the university and one at each of the five schools.

At the schools, the project will give laptops to teachers and make tablets available in a classroom for students. Those clients will talk to special Wi-Fi access points that connect on the back end to the local white-spaces radio. Each school's radio will in turn connect to the Internet through the main white-spaces radio at the university, which has a fiber network.

Though each school's white-spaces radio will have a range of about 10 kilometers, initially they are intended only for use in the schools.

The project will also provide projectors, training and educational content, as well as solar panels where electricity is unavailable or unreliable, Garnett said.

The Limpopo project is part of a broader Microsoft initiative called 4Afrika, which has also included a white-spaces effort in Kenya.

Stephen Lawson covers mobile, storage and networking technologies for The IDG News Service. Follow Stephen on Twitter at @sdlawsonmedia. Stephen's e-mail address is stephen_lawson@idg.com


View the original article here

Opponents of NSA surveillance aren't giving up after House vote

Privacy and digital rights groups have dug in for a longer fight against massive surveillance programs at the U.S. National Security Agency, even after the House of Representatives voted last week against an amendment to curtail the agency’s data collection.

The House last Wednesday narrowly defeated an amendment to a defense spending bill that would have barred the NSA from bulk collection of phone records from U.S. carriers and cut off funding for the program as currently designed. But digital rights groups say the close vote gives them hope of weakening support for the NSA programs in Congress.

Lawmakers have introduced several bills to curb the NSA data collection, and privacy advocates may push for another amendment to a bill on the House or Senate floor, said David Segal, executive director at Demand Progress, a digital rights group.

The vote last Wednesday “demonstrated that a majority of rank-and-file members agree with us, while the institutionalists—leadership, committee chairs—disagree,” he said by email. “So it could be difficult to move things through the committee process ... but there’ll be some relevant floor votes in coming months.”

Wednesday’s vote was “unnervingly close,” Sina Khanifar, a digital rights activist and organizer of DefundTheNSA.com, added in an email. “While we lost the vote, the fact that over 200 representatives were in support of the amendment, despite lobbying by the NSA and strong opposition from the White House, sends a really strong message.”

DefundTheNSA.com asks opponents of the NSA surveillance to continue to contact their lawmakers. “This isn’t over yet,” the site said. “The tide is turning against domestic surveillance.”

Members of Congress aren’t tabling the issue, either. The Senate Judiciary Committee will conduct a hearing Wednesday focused on how to strengthen privacy rights and provide more oversight of the NSA programs.

Senator Patrick Leahy, a Vermont Democrat and chairman of the Judiciary Committee, is lead sponsor of a bill that would set a higher standard for the NSA to collect domestic information and would make public more information about surveillance programs.

Testifying at the hearing will be representatives of the NSA, the Federal Bureau of Investigation, as well as surveillance critic the American Civil Liberties Union and Judge James Carr, who formerly served on the U.S. Foreign Intelligence Surveillance Court. Carr has proposed changes to the court’s processes that would allow judges there to appoint lawyers to oppose surveillance requests.

Several other lawmakers have also introduced bills that would limit the NSA’s ability to collect data.

Representative Rush Holt, a New Jersey Democrat, introduced a bill last week that would repeal the Patriot Act and the FISA Amendments Act, two laws that give the NSA authority to conduct antiterrorism surveillance.

On June 19, Representative Sheila Jackson Lee, a Texas Democrat, introduced the FISA Court in the Sunshine Act, which would require U.S. officials to disclose most orders of the surveillance court that include “significant legal interpretation” of surveillance laws.

And on June 7, Senator Rand Paul, a Kentucky Republican, introduced the Fourth Amendment Restoration Act, which says that the U.S. Constitution shall not be “construed to allow any agency of the United States government to search the phone records of Americans without a warrant based on probable cause.”

Grant Gross covers technology and telecom policy in the U.S. government for The IDG News Service.
More by Grant Gross, IDG News Service


View the original article here