Breakthroughs in Understanding Social Hierarchies Lead to Advanced AI

A graphic that illustrates computer chips in a human brain.

Image credit: Shutterstock

Social hierarchies are important, especially in the workplace, where understanding the chain of command is crucial. Workers need to know who they can turn to for help, who they have to watch out for, and who they need to take orders from. This learning process can take a while, but a study by researchers from University College London and DeepMind has found that it relies heavily on the brain’s prefrontal cortex.

Researchers had participants undergo fMRI scans while imagining themselves as employees at a fictional company. The participants then watched video interactions between “coworkers” to work out who “won” each exchange; whoever won was deemed to hold more power in the hierarchy. Participants also watched similar videos, but this time they were asked to imagine a friend as an employee there. The findings show that we’re better at understanding the hierarchies we belong to than those of others, which makes sense.

So what good is this research? Knowing which part of the brain we use to learn something we pick up more or less “by instinct” may not sound immediately useful, but it’s part of a long-term effort to develop better artificial intelligence. That, after all, is what DeepMind works on.

DeepMind is trying to develop AI that can be applied to “some of the world’s most intractable problems.” If you’ve ever seen a movie about a robot, you know how hard it is for machines to understand humans. With a better idea of how our brains process human interactions, we can develop AI systems that understand those interactions better. Along the way, perhaps future research in this area will also help us understand how we interact with one another, and maybe get a head start on fixing those problems before the robots are ready to help.

Researchers Discover New Flexible Material with Numerous Applications

A researcher experimenting on a substance in a lab.

Photo courtesy of Brookhaven National Laboratory at Flickr Creative Commons.

The phrase “near-perfect broadband absorption from hyperbolic metamaterial nanoparticles” sounds like some Star Trek “technobabble.” But believe it or not, it’s the title of a real paper. In it, researchers from UC San Diego’s Jacobs School of Engineering describe a new material that is thin, flexible, and transparent, with some pretty cool capabilities.

The material absorbs light and, more than that, can essentially be “programmed” when made to absorb different wavelengths of light. This could allow for “transparent window coatings that keep buildings and cars cool on sunny days,” or “devices that could more than triple solar cell efficiencies.”

Imagine a window that keeps a building cool, cutting down on air conditioning costs, while still letting through the kinds of radio waves we use for TV, radio, and broadband. Alternatively, the window could be used to block those radio waves, or to keep heat generated inside the building from escaping. There are a lot of potential uses for the material, which is still early in development; for now it’s only being made in very small quantities so researchers can test its capabilities.

Researchers are still figuring out how to scale up production. Because they’re working with nanotubes, silicon substrates, and other advanced fabrication technologies, scaling up will take some effort. While these kinds of techniques are becoming increasingly common, they’ve so far been limited to nanomaterials, which are called that for a reason.

So far these technologies haven’t been used on anything near the size of a plate glass window, but there’s no reason to think they wouldn’t work. Most likely, as the researchers scale up toward things like windows, they’ll run into some production issues, but that’s what research and experimentation are for: figuring out how to make something like this work.

New Control Techniques Add Versatility to Smartwatches

A photo of a man wearing a smartwatch.

Photo credit: Shutterstock

Smartwatches are cool, but they’ve been slower to catch on than smartphones because they’re not the most convenient way to interact with a device. Their small screens can make maps or complex menus hard to use, limiting their value compared to a regular-sized phone. Plus, if you have your hands full when you get a call, it’s hard to do anything about it.

But technology is constantly evolving, and smartwatches are no different. Researchers at Georgia Tech have been working on several different projects that can make using smartwatches easier, allowing them to provide more robust user experiences.

One project, called WatchOut, uses scrolling and swiping gestures to improve control, which might sound pretty typical, except that with this system you don’t swipe and scroll on the watch’s screen; you use the band. Using the built-in gyroscopes and accelerometers in these watches, engineers were able to develop a system that gives users more control without having to worry about hitting the wrong button with their fingers.
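How might that work? The team’s actual signal processing is surely more sophisticated, but the core idea is that a tap or swipe on the band jolts and twists the watch body in characteristic ways the inertial sensors can pick up. Here’s a minimal, hypothetical sketch of that kind of classification; the field names, sampling rate, and thresholds are illustrative assumptions, not WatchOut’s published algorithm.

```python
# Hypothetical sketch of band-gesture classification from inertial data.
# Field names, thresholds, and the 100 Hz sampling rate are assumptions
# for illustration; this is not WatchOut's actual algorithm.

from dataclasses import dataclass
from typing import List

@dataclass
class ImuSample:
    accel_z: float  # acceleration normal to the watch face, in g
    gyro_y: float   # rotation rate around the band's long axis, in deg/s

def classify_band_gesture(window: List[ImuSample], hz: int = 100) -> str:
    """Rule-of-thumb classifier over a short window of IMU samples."""
    peak_impact = max(abs(s.accel_z) for s in window)
    net_twist = sum(s.gyro_y for s in window) / hz  # integrate to degrees

    if peak_impact > 1.5 and abs(net_twist) < 5.0:
        return "tap"    # sharp impulse with little sustained rotation
    if abs(net_twist) > 15.0:
        return "swipe"  # sustained twist as a finger drags along the band
    return "none"
```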

Then there’s Whoosh, which allows users to control their watches by breathing on them. Shushing the watch can decline a call, while blowing on it twice can accept one. A sequence of short and long breaths can be used to unlock the device, and different breath techniques can be used to erase words in a text message or to send it. You can even move an app from your watch to your phone by “sipping it off the watch and puffing it on the phone.”
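That unlock gesture works a lot like a tiny Morse code. As a toy illustration, here’s how a watch might match a detected sequence of short and long breaths against a stored pattern; the threshold, durations, and pattern below are made-up assumptions, and Whoosh’s real recognizer works on richer microphone features than simple timing.

```python
# Toy sketch of breath-pattern unlocking. The threshold, pattern, and the idea
# of classifying purely by duration are illustrative assumptions; Whoosh's
# actual recognizer uses richer audio features from the microphone.

SHORT_MAX_SECONDS = 0.4  # breaths at or under this length count as "short"
UNLOCK_PATTERN = "SLSL"  # hypothetical user-chosen code: short, long, short, long

def encode_breaths(durations):
    """Turn detected breath-event durations (in seconds) into an S/L string."""
    return "".join("S" if d <= SHORT_MAX_SECONDS else "L" for d in durations)

def try_unlock(durations):
    return encode_breaths(durations) == UNLOCK_PATTERN

print(try_unlock([0.2, 0.8, 0.3, 0.9]))  # True: short, long, short, long
print(try_unlock([0.2, 0.2, 0.9, 0.9]))  # False: right breaths, wrong order
```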

And don’t forget TapSkin, which allows users to use the back of their hand as a number pad, sending commands to the watch based on where they tap. These aren’t “theoretical” developments, either; they’ve all been designed, tested, and shown at a number of conferences. They all make use of existing technology, which means these options could hit the market in the near future.

One Small Step Toward Science Fiction Holograms

A human hologram.

Image credit: Shutterstock

Australian researchers, with the help of colleagues from the United States and Japan, have taken a huge step forward in developing the Holy Grail of imaging: the hologram. Popularized in media like Star Wars and Star Trek, the concept of the hologram is a simple one: projecting 3D images using light, as opposed to the 2D images we can create now with cameras and monitors.

The new device, developed at the Australian National University, uses a collection of “millions of tiny silicon pillars, each up to 500 times thinner than a human hair” to capture 3D images. Because the material is transparent, it loses little energy from the light that passes through it, allowing it to do some pretty complex things with that light, like storing 3D image information in the infrared.

While this device doesn’t create a hologram that humans can interact with, Star Trek-style, it’s still a vital step toward achieving that. Research into holographic technology is most often associated with augmented reality systems, but those are still a ways down the road.

But this new device could be beneficial even in its current form. According to lead researcher Lei Wang, it “could replace bulky components to miniaturize cameras and save costs in astronomical missions by reducing the size and weight of optical systems on space craft.”

And that’s not even to mention its use in terrestrial cameras and craft. Drones are already pretty small, but by making their cameras even lighter, we could free up room for more storage and battery life, allowing us to use such devices to better explore hard-to-reach or dangerous parts of the world. Nature documentaries are already accomplishing incredible feats with small cameras. Imagine if they were even smaller.

This could also be the next big step in cameras for smart devices, which are already leagues ahead of the camera technology of even thirty years ago. Consumers won’t have access to such devices for a while, but they’re going to want them when they become available.

Adapting in the Future Requires Adapting in the Present

A note pad with the word "adaptable" written on it. There is a laptop and a cell phone in the picture as well.

Photo credit: Shutterstock

There’s an article published in Forbes about what we can expect from employees of the future, extrapolating from current technological trends. The general argument is that those employees will have to be flexible in order to keep up with, and make use of, increasingly powerful technology like artificial intelligence systems.

But this isn’t science fiction. The article isn’t about employees a century from now; it’s about employees of the coming decades. The phrase “mid-to-late 21st century” gets used, and for some business owners, their companies might still be up and running by then.

So are you flexible enough to adapt to the changing technology of the future? The article brings up Millennials and Generation Z, who are already pretty well versed in adapting to new technology. They’re also going to be the people starting new businesses, which will almost certainly be better at adapting to new technology, at least until the next generations come along and think Generation Z is old because they don’t have computers in their brains or whatever.

The point is, hiring employees who are flexible and/or ahead of the technology curve is valuable and, increasingly, necessary. But that alone might not be enough. These employees will need to be under the guidance of bosses who understand what’s going on, at least enough to give them the freedom to be flexible in the first place.

Employees can’t do their best work if their bosses don’t understand the basics of the tools they need, and therefore won’t give them those tools. These employees know they can probably go find a hipper, younger boss who can and will give them access to those tools. So it’s worth asking yourself if you, or your company, are flexible enough to adapt to rapidly changing technology. If the answer is no, it’s time you learn how to be.

Boosting Application Speeds

An image of a screen loading on an iPad.

Image credit: Shutterstock

A process called Dense Footprint Cache may help significantly speed up applications on computers and mobile devices. When a processor is running an application, it has to retrieve data that is stored “off processor” in the device’s main memory, and that retrieval takes time. To make it faster, processors can use a cache of die-stacked dynamic random access memory, which lets them retrieve the data more quickly.

However, there is a decent chance that the data in question won’t be stored there, in which case the processor has to go to main memory anyway, which slows the whole process down. Under the new system, though, the processor learns where data is actually stored, allowing it to access that data faster. Tests have shown the system to be 9.5% faster on average than other state-of-the-art methods, which is a noticeable improvement. Furthermore, the system allows the processor to skip over data that it knows is not in the cache, reducing “last level cache miss ratios” by 43%. As a bonus, the whole process uses 4.3% less energy than normal, which means slightly longer battery life, too.
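The published design is a hardware mechanism living at the processor and memory-controller level, but the gist can be shown with a toy software analogy: keep a small “footprint” record of which blocks of each memory region actually made it into the fast die-stacked cache, and when that record says a block isn’t there, skip the cache probe and go straight to main memory. Everything in the sketch below (names, region size, the dictionary-based “cache”) is an illustrative assumption, not the published design.

```python
# Toy software analogy of a footprint-style cache. The real Dense Footprint
# Cache is implemented in hardware; names and sizes here are assumptions.

REGION_BLOCKS = 32  # blocks tracked per memory region ("footprint" bitmap width)

class FootprintCache:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block  # slow path: block number -> data from main memory
        self.cache = {}                 # fast path: blocks held in die-stacked DRAM
        self.footprint = {}             # region number -> bitmap of blocks known to be cached

    def load_block(self, block):
        region, bit = block // REGION_BLOCKS, 1 << (block % REGION_BLOCKS)
        if self.footprint.get(region, 0) & bit:
            return self.cache[block], "served from die-stacked cache"
        # Predicted miss: fetch directly from main memory, no wasted cache probe.
        data = self.fetch_block(block)
        self.cache[block] = data
        self.footprint[region] = self.footprint.get(region, 0) | bit
        return data, "went straight to main memory"

cache = FootprintCache(fetch_block=lambda b: f"contents of block {b}")
print(cache.load_block(5))  # first touch: predicted miss, direct fetch
print(cache.load_block(5))  # second touch: footprint bit is set, cache hit
```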

While this technology (which is still new and not ready for the market) is unlikely to drastically change the way we compute, it will help, especially if it ends up in consumer devices like smartphones and tablets. Even a 9.5% speed boost improves the user experience and lets more demanding applications run smoothly. This is the kind of thing that, bundled into next-generation smartphones, makes people actually want to upgrade their devices. It’s the kind of improvement that means something, unlike, say, removing the 3.5mm audio jack so that customers are forced to buy more expensive, more complicated headphones.

New “Bradio” System Greatly Extends Battery Life on Small Devices

A close-up photo of a man wearing a smartwatch. The smartwatch screen shows a low battery.

Image: Shutterstock

Mobile devices have transformed our lives in a lot of ways over the last decade. Given time, wearable devices might be able to do the same, and are certainly more in line with the kind of futurism made popular by science fiction. But, while wearable devices like smartwatches or fitness trackers are gaining popularity, there is one common complaint about them that we need to resolve: battery life.

Battery life is a thorn in the side of mobile devices in general, because the size of a battery and the power it can provide are directly correlated: the smaller the battery, the less power it supplies. But devices like smartwatches have high energy demands, so you can either have a short battery life with a smaller, more compact device, or a longer battery life with a larger, bulkier device.

But scientists from the University of Massachusetts at Amherst have been working on a new system that they have dubbed “Bradio.” Bradio allows connected devices to share the energy load, so a smartwatch, which works through its connection to a smartphone, can make use of that phone’s larger battery to attain more battery life of its own. It works a bit like the cloud: just as the cloud gives you access to far more storage than you could conveniently keep on hand, the larger device provides energy that the smaller device can tap into to extend its own battery life.

So far, tests have shown that Bradio can get about 400 times the battery life of a Bluetooth system, which operates on a somewhat similar principle: the device sending the signal to a Bluetooth headset does most of the work in the relationship, and the same is true of Bradio. Although it’s still in the early stages of development, Bradio, or a system like it, could revolutionize wearable technology.
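A rough energy-budget calculation shows why shifting the radio’s workload matters so much for a device with a tiny battery. The numbers below are purely illustrative assumptions, not measurements from the Bradio work; the point is only the shape of the math, in which battery life scales inversely with the watch’s power draw.

```python
# Back-of-envelope sketch with made-up numbers (not measurements from Bradio):
# if the phone shoulders the power-hungry, active side of the radio link, the
# watch mostly listens, its power draw collapses, and its battery lasts longer.

WATCH_BATTERY_MWH = 300.0   # hypothetical smartwatch battery capacity, milliwatt-hours
ACTIVE_RADIO_MW = 10.0      # watch running a conventional active radio link
OFFLOADED_RADIO_MW = 0.05   # watch in a low-power role while the phone does the work

def hours_of_link_time(battery_mwh, draw_mw):
    return battery_mwh / draw_mw

conventional = hours_of_link_time(WATCH_BATTERY_MWH, ACTIVE_RADIO_MW)
offloaded = hours_of_link_time(WATCH_BATTERY_MWH, OFFLOADED_RADIO_MW)

print(f"conventional link: ~{conventional:.0f} hours")       # ~30 hours
print(f"load shifted to the phone: ~{offloaded:.0f} hours")  # ~6000 hours
print(f"improvement: ~{offloaded / conventional:.0f}x")      # real gains depend on the hardware
```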

Remembering GeoCities and How It Helped Shape the Internet

A photo of a man jumping from inside of an old, boxy computer to a new, high-tech laptop.

Image: Shutterstock

Long before the existence of Facebook (even before MySpace, iPods, and Y2K) and before the first dotcom bubble burst, there was the Internet. Unlike newer technologies, the Internet had no single “inventor.”

However, there was GeoCities, which helped shape the Internet.

Once the Internet’s third most-visited domain, GeoCities was responsible for the development of millions of websites. Years after the free web-hosting service was launched in 1994, Thom Weisel’s wealth management firm advised Yahoo! to acquire GeoCities for $3.5 billion.

The company’s goal was to give everyone with Internet access a free place on the web. Although only a few million people were online at the time, owning a space online was a strange (and exciting) new idea. Other free web-hosting services, such as Tripod and Angelfire, launched around the same time, but they proved far less popular than GeoCities.

“We are not an in-and-out service like a search engine. It’s a place for people to meet. We allow for self-expression through self-publishing. We’re it, in terms of being a major content-entertainment site whose editorial strategy is solely based on the members creating the content themselves,” said GeoCities co-founder David Bohnett.

In its original form, GeoCities had users select a “city” in which to launch their web pages. The company wasn’t sure how to handle the whole idea of an online community, so it divided the content into “cities” or “neighborhoods” where you and your neighbors would ideally share the same core interests. The “cities” were named after actual cities or regions according to their content. For example, many computer-related websites were placed under “SiliconValley,” and those in the entertainment industry were assigned to “Hollywood.”

Eventually, however, the “home page” fad was overshadowed by blogs and social-networking websites. In 2009, roughly ten years after the Yahoo! acquisition, GeoCities announced that it would shut down its 38 million free user-built pages in the United States.

Although many people thought the platform inspired a lot of terrible web design, GeoCities was the first big venture built on what is now considered the Web 2.0 boom of user-generated content. It gave people tools to do amazing things on their websites, including adding animation, music, graphics, and other HTML wizardry.

Imagine yourself back in 1996. You’ve created your free GeoCities account, and you’ve been given a blank page with 15 megabytes to tell the world about yourself. What would be on your page?

Americans Are Apprehensive About “Enhancing” Human Abilities

A computer generated image of an x-ray of a human head. Inside the head is a computer chip that is transmitting waves of information. There is a galaxy in the background.

For years, there’s been speculation about scientists being able to enhance human abilities through advanced technology. What was once dismissed as far-fetched fantasy may become a reality sooner than we think.
Image: Shutterstock

For fans of science fiction, the idea of humans artificially enhancing their abilities (by implanting computer chips, say, or genetically modifying embryos to protect against various diseases and disorders) is a pretty familiar one. Some of those technologies are likely to arrive within the next few decades, but with them come some serious concerns.

According to a recent survey conducted by the Pew Research Center, while many Americans believe that we’ll be able to transplant artificial organs, cure most cancers, or implant computer chips into our bodies within the next fifty years, they aren’t quite sold on whether we should actually do those things. The survey asked people how likely they would be to have a computer chip implanted in their brain, get a synthetic blood transfusion, or edit their babies’ genes. About a third of respondents said they would consider the brain chip or the synthetic blood transfusion, while about half said they would consider editing their babies’ genes.

Among the findings, the Pew Research Center concluded that Americans with strong religious identities were less likely to want such procedures, and more likely to think they were a bad idea in general, saying that they cross a line by interfering with nature. People were more likely to consider undergoing an enhancement if it were controllable or reversible, or if it would bring about a sort of health equality. They were less likely to approve of synthetic blood making people faster or stronger than they would naturally be, or of computer chips improving cognitive abilities.

The survey tells us that, overall, Americans are confident that science will continue to advance human capabilities, but our fears about such procedures might outweigh the potential benefits. Curing cancer seems like an easy sell; implanting computer chips into our brains looks like a much harder one.

Then and Now: IPOs, Private Equity, and the Next Generation of the Tech Boom

Over the shoulder shot of person working on laptop

IPOs and the kinds of technology behind them have changed since the golden days of 1990s Silicon Valley.
Image: Unsplash.com

The Dot Com bubble of the 90s changed the face of tech and finance in ways that are still affecting these realms today. As the hot new kind of business, tech companies proliferated in the 90s, with the IPO as a rite of passage into the “adulthood” of a “real” business. Some companies, like Apple, Yahoo, and eBay, live on; others crashed and burned when the bubble burst.

Today, tech companies shift to IPOs in different ways and for different reasons than they did in the 90s. Silicon Valley is still booming, but startups are far more likely to turn to individual investors as opposed to IPOs when trying to fund growth. The number and the value of technology IPOs are both way down from the 90s, more resembling what the market saw in the early 80s, albeit with higher amounts of money raised.

Funding in the heyday of the 90s tech bubble came from sources like Thom Weisel’s Montgomery Securities, a private equity firm built on the idea of supporting smaller, more individualized businesses. Like many of those tech superstars of the 90s, however, Montgomery Securities no longer exists—though Weisel himself has moved on to other private equity endeavors in the same vein as the company that started it all.

Part of the reason there was so much energy and enthusiasm behind the tech companies of the 90s is that their stock prices soared even though the companies often had no real plan for living up to the absurdly high expectations attached to them. Nowadays, stock prices for tech companies rise or fall based on company profits. In fact, tech company stock is now a bit cheaper than it was then.

Modern investors also differ from their 90s counterparts in that they seem statistically more willing to invest in companies that aren’t yet profitable by the time they reach their IPO. According to Bloomberg, of the 206 companies that had IPOs in the US in 2014, 71% had no profits in the year before their offering.

Unlike in the 90s, biotech seems to be where it’s at in terms of rising tech companies these days. Biotech companies tend to have IPOs similar to what you’d have seen in the 90s: small companies with no revenue but lots of promise, going public to raise the money they need to bring a product to market. That’s pretty specific to today’s biotech IPOs, though; in the rest of the IPO market, Bloomberg says, companies are waiting longer to go public, which is why there are fewer IPOs overall.

We may not be experiencing the sort of tech boom that became an emblem of the 90s, but there are still plenty of opportunities for small companies to make their mark on the world. Whether it’s through individual investors or IPOs, cutting-edge tech will always have a place in the market. It’s just that the details of that place are likely to change over time.
