
How to find legitimate deals on tech

We all want to get our hands on the latest in shiny new gadgetry. Unfortunately, the newest tech tends to come with the most premium prices. But it doesn’t necessarily have to be that way. By keeping an eye out for seasonal price changes, annual product cycles, special offers, and refurbished devices, you can make sure you’re buying your hardware at the best price point possible. If you want the best value from your future tech purchases, check out some of the tricks in this guide.

Become a web detective

Good news for eager bargain hunters: Plenty of online retailers are willing to slash prices in order to attract your business. To find these discounts, head to price comparison sites such as Google Shopping and PriceGrabber, which will show you where an item is selling for the lowest price. Before you complete a purchase, though, check how extras like shipping charges and warranty costs will affect your total.

Don’t forget the biggest online retail behemoth out there. This guide to saving time and money on Amazon has lots of useful advice, such as tracking price changes with CamelCamelCamel. Plenty of the tips apply to other sites as well. For example, sign up for the email newsletters and follow the social media accounts of your favorite stores in order to receive a heads up on special tech deals you wouldn’t otherwise notice.
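
If you're curious what a price tracker is doing under the hood, here's a minimal sketch of the concept, assuming you log prices by hand into a local file (services like CamelCamelCamel collect the history for you and watch it automatically). The file name and the alert logic here are made up for illustration.

```python
import json
from pathlib import Path

# Hypothetical local history file; real trackers store price histories
# server-side and watch them for you.
HISTORY_FILE = Path("price_history.json")

def record_price(product: str, price: float) -> None:
    """Append an observed price and report whether it's a new low."""
    history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else {}
    prices = history.setdefault(product, [])

    if prices and price < min(prices):
        print(f"Deal alert: {product} dropped to ${price:.2f} "
              f"(previous low ${min(prices):.2f})")

    prices.append(price)
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

# Example: log today's price for a product you're watching.
record_price("4K TV", 449.99)
```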

On top of individual price comparison sites, you can install price comparison extensions for your web browser. The Shoptimate add-on fits right in your browser; when you visit one of a broad range of shopping sites, it will pop up to share additional price options in real time. InvisibleHand works similarly, and it also covers flight and hotel comparisons in addition to e-tail. Finally, Honey will lead you toward discount coupons and codes to take even more money off your total.

Beyond sites and extensions, you can compare some prices on your own. Scroll down to the bottom of a product listing on Amazon, for example, and you’ll see side-by-side spec and price comparisons of similar products. Every listing shows when the item first went on sale, so you can make sure you’re not comparing TVs or laptops from different years.

Once you’ve finished comparing, you’re almost ready to purchase. Before parting with your credit card or PayPal information, research the history and spec listings of the gadget that’s tempting you. After all that comparing, a low price might have tricked you into selecting an older product, or one that’s not exactly what you’re looking for.

Know your seasons and cycles

The time you shop can make a difference to the price you pay, so if you can hold off on a purchase, you might be able to get it at a lower price. For example, the sales bonanza that kicks off with Black Friday doesn’t really stop until Christmas. The biggest reductions during this period will be on older, mid-range tech rather than the very top-end stuff, so by all means splurge, but make sure you know what you’re getting.

When should you buy to get discounts on the best and newest gadgets? These deals don’t usually hit the scene until immediately before or after an updated version arrives. If you wait for the new model to appear, the current (and soon-to-be old) model is likely to be much cheaper. For the iPhone, for instance, shop in September, while Samsung’s Galaxy phones get less expensive around late February or early March, coinciding with the Mobile World Congress tech expo.

Researchers just figured out how to get robots to merge into one cohesive megabot

Power Rangers had Megazord. Voltron had, well, Voltron. Individual robots that combine to form one larger, cooler—dare we say, more badass—automaton have been a mainstay of science fiction for decades. But a new study in Nature Communications suggests that morphing robots may finally outgrow the limits of fiction and find their way into our reality. The researchers were able to get autonomous modular robots—robots that have the ability to control themselves, like the Roomba vacuum cleaner—to join forces and make one cohesive megabot. The future is now.

Researchers who study swarming insects like termites and ants know that these animals can accomplish things in coordinated groups that they could never manage on their own: carrying large objects, taking out predators, and creating intricate structures. Termites in particular are known for their prodigious ability to build complex homes, or termite mounds, without a blueprint. Swarm robots could potentially do the same.

“Take moving on a very rocky terrain, for example,” says lead author Marco Dorigo, a research director at IRIDIA, the artificial intelligence lab of the Université Libre de Bruxelles. “One alone would get stuck, but attached to each other they become more stable and they can move on the rough terrain.”

A single powerful robot needs a redesign every time users come up with a new task for it; a bot designed for construction can’t be expected to pivot to search-and-rescue missions. But swarm robots can be more flexible. They’re also less fragile, en masse, than one large bot, and they’re easier to make in large quantities. At the same time, robot swarms provide something a single robot can’t—redundancy.

“Since the swarm is made of many robots, if some of them break down, the others can continue to work,” says Dorigo. It’s the equivalent of investing in a whole block of decent kitchen knives instead of spending the same amount on one absurdly good vegetable peeler.

The problem, however, has been figuring out how to get the autonomous robots to act more like team players. The typical approach has been to program the robots for self-organization, which is how ants and termites operate: each bot makes decisions based on local information about its immediate surroundings. But that’s a tricky thing to program. The alternative is a kind of central control, where one computer knows everything about every robot and makes decisions for all of them.
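
To make the self-organized style concrete, here's a toy sketch in which each simulated robot decides its next move using only the positions of neighbors within a short sensing range, with no central computer involved. The drift-toward-your-neighbors rule is a stand-in for illustration, not the behavior from the paper.

```python
import random

SENSE_RANGE = 2.0   # how far a robot can "see" its neighbors (assumed)
STEP = 0.1          # fraction of the way to drift toward the local average

def step(positions):
    """One update: each robot reacts only to neighbors within SENSE_RANGE."""
    new_positions = []
    for x, y in positions:
        neighbors = [(nx, ny) for nx, ny in positions
                     if 0 < ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 <= SENSE_RANGE]
        if neighbors:  # drift toward the average neighbor position
            avg_x = sum(n[0] for n in neighbors) / len(neighbors)
            avg_y = sum(n[1] for n in neighbors) / len(neighbors)
            x += STEP * (avg_x - x)
            y += STEP * (avg_y - y)
        new_positions.append((x, y))
    return new_positions

# Scatter ten robots, then let purely local rules pull them into a cluster.
robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10)]
for _ in range(200):
    robots = step(robots)
```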

The problem with central control, Dorigo says, is that “there are communication bottlenecks, in that there’s a single point of failure. If the central computer doesn’t communicate correctly, or if it breaks down, the whole system doesn’t work anymore.”

It’s kind of like building a Death Star with a thermal exhaust port which, if hit with a torpedo, creates a chain reaction that ignites the main reactor and destroys your whole ship. Oops.

Dorigo and his colleagues took something of a middle path. When wobbling around solo, the robots remain autonomous. But when they touch each other to form a bigger unit, they cede control to a single comrade in the swarm (the robot that continues to glow red in the researchers’ video). The mess of individuals becomes one single powerhouse—automatically.
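
A rough sketch of that hand-off, under a deliberately simplified model: while separate, each robot is its own decision-maker; once units dock, every member defers to a single "brain" robot and simply executes its commands. The class and method names are invented for illustration and are not the interface from the Nature Communications study.

```python
class Robot:
    """Toy modular robot: autonomous until it docks with a group."""

    def __init__(self, name):
        self.name = name
        self.brain = self  # each robot starts as its own decision-maker

    def dock(self, other):
        """Join another unit and cede control to its brain."""
        self.brain = other.brain

    def decide(self):
        return "move north"

    def act(self):
        # Members forward the decision to the brain; the brain decides alone.
        if self.brain is self:
            return f"{self.name}: moving on my own"
        return f"{self.name}: executing '{self.brain.decide()}' for {self.brain.name}"

# Three robots merge: a and b hand control to c, the "brain" unit.
a, b, c = Robot("a"), Robot("b"), Robot("c")
a.dock(c)
b.dock(c)
print(a.act())  # a: executing 'move north' for c
print(c.act())  # c: moving on my own
```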

Apple’s new Face ID system uses a sensing strategy that dates back decades

On Tuesday, in addition to three shiny new iPhone models, Apple announced Face ID, a slick new way for people to biometrically unlock their phones by showing it their, well, face. The system relies not only on neural networks—a form of machine learning—but also on a slew of sensors that occupy the real estate near the selfie camera on the front of the handset.

The kind of facial recognition that Apple is doing is different from what, say, Facebook does when it identifies a photo of you and suggests a tag. That happens in the two-dimensional landscape of a photograph, while the latest iPhone considers the three dimensions of someone’s face and uses that shape as a biometric indicator to unlock (or not) their phone.

Alas, you’ll need to pony up the $999 for an iPhone X, as this feature only works on the company’s new flagship smartphone. The sensors that make up what Apple calls the TrueDepth camera system, which enables Face ID, include an infrared camera and a dot projector. The latter projects a pattern of more than 30,000 infrared dots onto the user’s face when they want to unlock their phone, according to Phil Schiller, a senior vice president at Apple who described the technology yesterday.

One step in the facial-identification process is that the TrueDepth camera system takes an infrared image; another piece of hardware projects those thousands of infrared dots on the face, Schiller explained. “We use the IR image and the dot pattern, and we push them through neural networks to create a mathematical model of your face,” he said. “And then we check that mathematical model against the one that we’ve stored that you set up earlier to see if it’s a match and unlock your phone.”
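
In outline, that check boils down to comparing two vectors: the "mathematical model" computed from the fresh infrared image and dot pattern, and the one stored at enrollment. Here's a minimal sketch of such a comparison, assuming a network has already turned each face into a fixed-length embedding; the threshold and the toy numbers are made up, since Apple hasn't published its matching criteria.

```python
import math

MATCH_THRESHOLD = 0.92  # assumed cutoff for illustration only

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def unlock(enrolled_embedding, fresh_embedding):
    """Unlock only if the new scan is close enough to the enrolled model."""
    return cosine_similarity(enrolled_embedding, fresh_embedding) >= MATCH_THRESHOLD

# Toy embeddings standing in for the network's output.
enrolled = [0.12, 0.80, 0.33, 0.45]
tonight = [0.10, 0.78, 0.35, 0.47]
print(unlock(enrolled, tonight))  # True: close enough to count as a match
```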

Structured light

The technique of projecting something onto a three-dimensional object to help computer vision systems detect depth dates back decades, says Anil Jain, a professor of computer science and engineering at Michigan State University and an expert on biometrics. It’s called the structured light method.

Generally, Jain says, computer vision systems can estimate depth using two separate cameras to get a stereoscopic view. But the structured light technique replaces one of those two cameras with a projector that shines light onto the object; Apple is using a dot pattern, but Jain says that other configurations of light, like stripes or a checkerboard pattern, have also been used.

“By doing a proper calibration between the camera and the projector, we can estimate the depth” of the curved object the system is seeing, Jain says. Dots projected onto a flat surface would look different to the system than dots projected onto a curved one, and faces, of course, are full of curves.
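
The geometry behind this is the same triangulation used in stereo vision: once the camera and projector are calibrated, the apparent shift (disparity) of a projected dot tells you how far away the surface it landed on is, via depth = focal length x baseline / disparity. A quick sketch with made-up calibration numbers:

```python
# Assumed calibration values, for illustration only.
FOCAL_LENGTH_PX = 1400.0   # camera focal length, in pixels
BASELINE_M = 0.05          # distance between projector and camera, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth of the surface a dot landed on, from how far the dot shifted."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Dots on a flat wall shift uniformly; dots on a face shift by different
# amounts, so each one yields a different depth and the curves emerge.
for shift in (70.0, 85.0, 100.0):
    print(f"disparity {shift:5.1f} px -> depth {depth_from_disparity(shift):.3f} m")
```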

During the keynote, Schiller also explained that they’d taken steps to ensure the system couldn’t be tricked by ruses like a photograph or a Mission: Impossible-style mask, and had even “worked with professional mask makers and makeup artists in Hollywood.” Jain speculates that what makes this possible is the system’s use of infrared light, which he says can distinguish between materials like skin and a synthetic mask.

Finally, the system taps into the power of neural networks to crunch the data it gathers during the face-identification process. A neural network is a common tool in artificial intelligence; in broad strokes, it’s a program that computer scientists teach by feeding it data. For example, a researcher could train a neural network to recognize an animal like a cat by showing it lots of labeled cat pictures; later, the system should be able to look at new photos and estimate whether or not they contain cats. And neural networks aren’t limited to images: Facebook, for example, uses multiple types of neural networks to translate text from one language to another.
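
For a feel of what "teaching by feeding it data" means, here's a minimal sketch of a single artificial neuron trained on a handful of made-up "cat or not" feature scores. Real systems stack many layers of such units and train on vastly more data; the features and numbers below are purely illustrative.

```python
import math
import random

# Toy training data: [whiskers, pointy_ears] scores and a cat (1) / not (0) label.
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
LEARNING_RATE = 0.5

def predict(features):
    """Squash a weighted sum into a 0-to-1 'probability of cat'."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

# Training: nudge the weights toward whatever reduces the error on each example.
for _ in range(1000):
    for features, label in examples:
        error = predict(features) - label
        for i, x in enumerate(features):
            weights[i] -= LEARNING_RATE * error * x
        bias -= LEARNING_RATE * error

print(predict([0.85, 0.9]))  # high: probably a cat
print(predict([0.15, 0.1]))  # low: probably not
```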

Other phones on the market already offer face identification, notably Samsung’s Galaxy S8 phones and its new Note8. That feature uses the handset’s front-facing camera, and the company cautions that it is not as secure as the fingerprint reader; you can’t use it to authorize Samsung Pay, for instance. Apple, by contrast, says its Face ID system can verify Apple Pay transactions.

Apple’s biometric Face ID system “pushes the tech a notch higher, because not everybody can make a biometric neural engine,” says Jain, or train a face-recognition system on, as Apple said, more than one billion images. “So I think this will be a difficult act to follow by other vendors.”

The best camera gear for making time-lapse video

A single image can capture a discrete moment, but stringing dozens or hundreds together into a time-lapse can tell an hours-long story in one spectacular sequence. Start with something simple, like tracing a flower’s bloom over the course of a morning, and, with a little practice, you’ll be able to catch more complex and captivating motion, such as the stars wheeling across the night sky. Here’s what you need to fast-forward time like a pro.
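
The planning arithmetic is simple: the length of the event divided by the shooting interval gives the number of frames, and the frame count divided by the playback frame rate gives the length of the finished clip. A quick sketch with example numbers:

```python
def timelapse_plan(event_seconds: float, interval_seconds: float, playback_fps: float = 30):
    """How many shots an interval produces and how long the clip will play."""
    frames = int(event_seconds // interval_seconds)
    playback_seconds = frames / playback_fps
    return frames, playback_seconds

# Example: a 3-hour sunset, one frame every 10 seconds, played back at 30 fps.
frames, seconds = timelapse_plan(3 * 3600, 10)
print(f"{frames} shots -> about {seconds:.0f} seconds of finished video")
```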

1. Camera

The 24.2-megapixel sensor on Nikon’s D5600 DSLR is large enough to capture spectacular night skies that won’t be overwhelmed by ugly pixel noise, and the included zoom lens is ideal for covering landscapes. $900

2. Control

The Pulse Camera Remote sits atop your camera and communicates via Bluetooth with a phone app. Use it to dial in detailed commands, like the interval between each shot and the time frame you want to shoot. $99

3. Rotating Mount

Add an extra layer of motion to your time-lapse videos with the Syrp Genie Mini, a motorized turntable that rotates the camera as it’s shooting. It’ll make even a static scene, like a cityscape, look more dramatic. $249

4. Tripod

Few things ruin a well-shot sequence quicker than a wobbly camera. The aluminum MeFoto RoadTrip Classic weighs just 3.6 pounds and supports more than 17 pounds of gear, making it burly enough for your whole rig. $200