White Paper


EU bets on 5G to catch up in mobile technology race

BY JULIA FIORETTI AND LEILA ABBOUD

(Reuters) - The European Union is looking to sign agreements with China, Japan and the United States to cooperate on developing the next generation of mobile broadband, as it seeks to help its companies catch up in the race to develop such technologies.

Europe, a leader in the 1990s when the second-generation GSM standard moved mobile phone networks into the digital era, has fallen behind the United States, Japan and South Korea in deploying the latest 4G standard for mobile broadband services. The region's network operators, including Britain's Vodafone and Spain's Telefonica, were slower to move to 4G than their counterparts in Japan, South Korea and the United States, and adoption in Europe remains lower than in other advanced economies.

European policymakers are now trying not to repeat the mistakes of the past and are seeking to be at the forefront of developing the standards for 5G, which promises much faster video downloads, denser network coverage and the possibility of connecting billions of everyday electronic objects to create "the internet of things".

"With 5G, Europe has a great opportunity to reinvent its telecom industrial landscape," Guenther Oettinger, the EU's Commissioner for the Digital Economy and Society, told the Mobile World Congress in Barcelona on Tuesday.

In June last year the European Commission signed an agreement with South Korea in which the two sides committed to cooperating on setting technical standards and to ensuring that the necessary radio frequencies are able to support the new network.

"It is our intention to sign similar agreements with other key regions of the world, notably Japan, China, and the United States," Oettinger said.

The Commission will soon start formal discussions on 5G with China, which is also keen to have its say on what 5G should do, according to a person familiar with the matter. China is home to the world's second-biggest maker of mobile network equipment, Huawei [HWT.UL], and to ZTE, the fifth biggest.

But the chief executive of France's Orange said work remained to be done on 4G, whose rollout across Europe has been patchy and slow.

"We need to prepare for 5G but let's not jump too fast. We should enjoy 4G," he said. Most industry experts expect the first commercial deployments of 5G in the run-up to the Tokyo Olympics in 2020.

Much work remains to be done to set technical standards for the technology, and figure out exactly what it is supposed to do that current 4G gear cannot, experts say.

In the meantime, companies that make mobile network equipment such as Sweden's Ericsson, Huawei, Finland's Nokia and France-based Alcatel-Lucent are jockeying for position and carrying out experiments with operators to prepare for 5G.

Japan's NTT DoCoMo is already working with Nokia and Ericsson to develop networks running at high frequencies for use in the 5G wireless era - technology expected to be showcased at the 2020 Tokyo Olympics.

Meanwhile Huawei has said it will invest $600 million in 5G research and expects to have a network ready for deployment by 2020.

"We are closely working with our customers to get to 5G. It is the only way to fully meet the demand of machine to machine technology," said Huawei's chief executive Ken Hu.

The head of Nokia, Rajeev Suri, said that he thinks the drive to develop 5G technology promises to be a "three-horse race" between Ericsson, Huawei and his firm, leaving out the fourth biggest equipment maker, Alcatel-Lucent.
"I don't aspire only to be third," said Suri on a panel on Tuesday. "We will move up."

LG claims first LTE smartwatch

By Scott Bicheno

Korean electronics giant LG has launched a premium addition to its recently announced smartwatch range: the LG Watch Urbane LTE. As the name implies, LG is claiming this is the first smartwatch to feature an LTE modem, although Huawei is also talking up its LTE-M chip.

The other novel feature of this new addition is that it apparently doesn’t run Android. As you can see from the spec list below, it runs a proprietary ‘LG Wearable Platform’, which LG assured The Verge is based on neither Android nor webOS, the operating system LG acquired to power its smart TVs. The spec list also reveals that LTE is the only cellular protocol supported, which will mean VoLTE if you want to use it as a phone.

Whether or not you might want to is a question at the heart of the nascent smartwatch industry. It was clear from the start that there would be two main branches of smartwatch: dumb accessory and standalone smart device. The dumb accessory would rely on being paired to a smartphone for its connectivity, while the standalone device wouldn’t.

Thus far smartwatches have mainly been accessories, but they have failed to gain much traction: despite being the cheaper option, they have yet to demonstrate either why they’re worth the extra expense when you already have a smartphone or why they’re preferable to a regular watch.

A standalone device could, in theory, replace a smartphone, but then you would need to use it for voice. This would mean either talking to your wrist, which could be awkward, or pairing with a Bluetooth headset. Even then the user experience would be significantly diminished by the small screen, so the market for standalone smartwatches would seem to be small. LG seems to be trying to address the UX issue by including three buttons which facilitate navigation.

The company clearly sees this as a flagship device, designed as much to justify a spot of corporate gloating as to sell to punters. “The LG Watch Urbane LTE is an example of the kind of innovation that’s possible when you’re the industry leader in LTE technology,” said Juno Cho, CEO of LG Mobile. “This smartwatch breaks new ground with the world’s first LTE connectivity and the real-watch styling of a high-end modern classic watch. It embodies our philosophy, innovation for a better life.”

There were no details on pricing and availability, but it will likely be a pretty expensive, niche product. Having said that, it’s good to see companies trying to develop a category already dominated by ‘me too’ products with minimal unique appeal, and it should be noted that the latest Pebble smartwatch raised over $10 million in its first couple of days on Kickstarter.

LG Watch Urbane LTE specs:
• Chipset: 1.2GHz Qualcomm Snapdragon 400
• Operating System: LG Wearable Platform
• Display: 1.3-inch P-OLED (320 x 320 / 245ppi)
• Network: LTE
• Memory: 4GB eMMC / 1GB LPDDR3
• Battery: 700mAh
• Sensors: 9 Axis / Barometer / PPG / GPS
• Connectivity: WiFi 802.11 b, g, n / Bluetooth 4.0LE / NFC
• Color: Silver
• Other: Dust and Water Resistant (IP67) / Speaker / Microphone

What will happen when the internet of things becomes artificially intelligent?

From Stephen Hawking to Spike Jonze, the existential threat posed by the onset of the ‘conscious web’ is fuelling much debate – but should we be afraid?

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention. All three have warned of the potential dangers that artificial intelligence, or AI, can bring. Hawking, the world’s foremost physicist, said that the full development of artificial intelligence (AI) could “spell the end of the human race”. Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our “biggest existential threat” and said that playing around with AI was like “summoning the demon”. Gates, who knows a thing or two about tech, puts himself in the “concerned” camp when it comes to machines becoming too intelligent for us humans to control.

What are these wise souls afraid of? AI is broadly described as the ability of computer systems to ape or mimic intelligent human behavior. This could be anything from recognizing speech and visual perception to making decisions and translating languages. Examples run from Deep Blue, which beat chess champion Garry Kasparov, to the supercomputer Watson, which outguessed the world’s best Jeopardy player. Fictionally, we have Her, Spike Jonze’s movie that depicts the protagonist, played by Joaquin Phoenix, falling in love with his operating system, seductively voiced by Scarlett Johansson. And coming soon, Chappie stars a stolen police robot that is reprogrammed to make conscious choices and to feel emotions.

An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. This could take the form of a computer reprogramming itself in the face of an obstacle or restriction. In other words, to think for itself and to take action accordingly.

Needless to say, there are those in the tech world who have a more sanguine view of AI and what it could bring. Kevin Kelly, the founding editor of Wired magazine, does not see the future inhabited by HALs – the homicidal computer on board the spaceship in 2001: A Space Odyssey. Kelly sees a more prosaic world that looks more like Amazon Web Services: a cheap, smart utility which is also exceedingly boring simply because it will run in the background of our lives. He says AI will enliven inert objects in the way that electricity did over 100 years ago. “Everything that we formerly electrified, we will now cognitize.” And he sees the business plans of the next 10,000 startups as easy to predict: “Take X and add AI.”


While he acknowledges the concerns about artificial intelligence, Kelly writes: “As AI develops, we might have to engineer ways to prevent consciousness in them – our most premium AI services will be advertised as consciousness-free.” (my emphasis).

Running parallel to the extraordinary advances in the field of AI is the even bigger development of what is loosely called the internet of things (IoT). This can be broadly described as the emergence of countless objects, animals and even people with uniquely identifiable, embedded devices that are wirelessly connected to the internet. These ‘nodes’ can send or receive information without the need for human intervention. There are estimates that there will be 50 billion connected devices by 2020. Current examples of these smart devices include Nest thermostats, wifi-enabled washing machines and the increasingly connected cars with their built-in sensors that can avoid accidents and even park for you. The US Federal Trade Commission is sufficiently concerned about the security and privacy implications of the Internet of Things that it has conducted a public workshop and released a report urging companies to adopt best practices and “bake in” procedures to minimise data collection and to ensure consumer trust in the new networked environment.

Tim O’Reilly, coiner of the phrase “Web 2.0” sees the internet of things as the most important online development yet. He thinks the name is misleading – that IoT is “really about human augmentation”. O’Reilly believes that we should “expect our devices to anticipate us in all sorts of ways”. He uses the “intelligent personal assistant”, Google Now, to make his point.

So what happens when these millions of embedded devices connect to artificially intelligent machines? What does AI + IoT = ? Will it mean the end of civilisation as we know it? Will our self-programming computers send out hostile orders to the chips we’ve added to our everyday objects? Or is this just another disruptive moment, similar to the harnessing of steam or the splitting of the atom? An important step in our own evolution as a species, but nothing to be too concerned about? The answer may lie in some new thinking about consciousness. As a concept, as well as an experience, consciousness has proved remarkably hard to pin down. We all know that we have it (or at least we think we do), but scientists are unable to prove that we have it or, indeed, exactly what it is and how it arises.

Dictionaries describe consciousness as the state of being awake and aware of our own existence. It is an “internal knowledge” characterized by sensation, emotions and thought. Just over 20 years ago, an obscure Australian philosopher named David Chalmers created controversy in philosophical circles by raising what became known as the Hard Problem of Consciousness. He asked how the grey matter inside our heads gave rise to the mysterious experience of being. What makes us different to, say, a very efficient robot, one with, perhaps, artificial intelligence? And are we humans the only ones with consciousness?

Some scientists propose that consciousness is an illusion, a trick of the brain. Still others believe we will never solve the consciousness riddle. But a few neuroscientists think we may finally figure it out, provided we accept the remarkable idea that computers or the internet might one day become conscious.

In an extensive Guardian article, the author Oliver Burkeman wrote how Chalmers and others put forth the notion that all things in the universe might be (or potentially be) conscious, “providing the information it contains is sufficiently interconnected and organized.” So could an iPhone or a thermostat be conscious? And, if so, could we be in the midst of a ‘Conscious Web’?

Back in the mid-1990s, the author Jennifer Cobb Kreisberg wrote an influential piece for Wired, A Globe, Clothing Itself with a Brain. In it she described the work of a little-known Jesuit priest and paleontologist, Teilhard de Chardin, who 50 years earlier had described a global sphere of thought, the “living unity of a single tissue” containing our collective thoughts, experiences and consciousness. Teilhard called it the “noosphere” (noos is Greek for mind). He saw it as the evolutionary step beyond our geosphere (physical world) and biosphere (biological world). The informational wiring of a being, whether it is made up of neurons or electronics, gives birth to consciousness. As the diversification of nervous connections increases, de Chardin argued, evolution is led towards greater consciousness. Or as John Perry Barlow, Grateful Dead lyricist, cyber advocate and Teilhard de Chardin fan, said: “With cyberspace, we are, in effect, hard-wiring the collective consciousness.”

So, perhaps we shouldn’t be so alarmed. Maybe we are on the cusp of a breakthrough not just in the field of artificial intelligence and the emerging internet of things, but also in our understanding of consciousness itself. If we can resolve the privacy, security and trust issues that both AI and the IoT present, we might make an evolutionary leap of historic proportions. And it’s just possible Teilhard’s remarkable vision of an interconnected “thinking layer” is what the web has been all along.

Huawei focuses on wearables at MWC 2015

Written by Scott Bicheno

The consumer-facing arm of Chinese giant Huawei chose not to launch any new smartphones at its big press event at Mobile World Congress 2015. Instead it launched not one but three wearable devices. Pausing only to stress, at exhaustive length, how much progress its overall consumer brand is making, Huawei’s first product launch was the TalkBand B2. At first glance the B2 looks like a fitness band, and that does account for a major chunk of its functionality. But the clever bit is that it’s also a wrist dock for a mini Bluetooth headset, so you get two accessories for the price of one.

The second launch was a stereo Bluetooth headset, which consists of two ear buds linked by a cord. By itself this isn’t especially innovative, but Huawei stressed the high quality of the audio and has also equipped them with 4GB of their own storage, so they can operate independently of any other device if you want. Huawei also suggested they make for an alternative piece of jewellery when dangling around your neck, which may have been one USP too far.

The crowning launch was left to last, with Huawei unveiling its new smartwatch. The Huawei Watch features a circular face and is designed to look as much as possible like a regular watch, in a manner reminiscent of the recently-launched LG smartwatches. There was a focus on premium components, but Huawei doesn’t think it’s time to introduce a modem yet. There are 40 custom watch faces, but most of the features are standard ones derived from Android Wear.

Huawei is as determined as ever to establish itself as a consumer brand. This press event seemed to borrow more than previous ones from the now-established Jobsian MO, with hyperbole thick on the ground and a suspicious amount of cheering and clapping from the audience. They even got their top brand professional up on stage to talk about how Huawei wants to transcend mere products in the way Nike and Coke (and by inference Apple) do. None of these products may end up selling in huge numbers, but they all add to Huawei’s consumer credibility.

This Is the Key to an $80 Billion Wearables Market

BY ANDREW NUSCA FROM FORTUNE MAGAZINE

Several factors have dogged the nascent wearable technology market. The lack of breakthrough innovation around batteries, for one, requires wearers to plug in their on-the-go gadgets more often than they’d like. The lack of sophistication around tiny user interfaces is another, though that will no doubt improve over time.

But a big one? The social factor. Beyond the geeks of Silicon Valley and elsewhere, it’s just not cool to wear a watch, glasses, or headset that’s as big as a hood ornament.

That’s going to change, according to Juniper Research. The British market observer believes that the wearable technology market will grow to $80 billion by 2020—and the key will be making the connected gadgets virtually indistinguishable from their disconnected peers.
That means that Apple must be on to something as it continues to make atypical hires from the fashion and apparel world. Observers, including Fortune's own Philip Elmer-Dewitt, believe the new talent will help smooth the rough edges of a technology that’s as personal as a bracelet, watch, or ring. (So, apparently, does Google.) The best wearables, and the ones best positioned for profitability, may be those that allow their technology to completely recede into the background.

Nevertheless, wearables will be a diverse growth market that’s not merely Internet-connected jewelry. Wearables that attach to the skin, such as MC10’s Biostamp, are also part of this category—though they’re in a “more embryonic state” and require a much larger shift in consumer habits than a smart watch, Juniper says.

Many technology companies—including Apple, ARM, Google, Intel, Lenovo-Motorola, LG, MC10, Microsoft, Omate, Qualcomm, Sony, and Withings, plus wearables-savvy design firms like Gadi Amit’s NewDealDesign and Yves Béhar’s Fuseproject—are well-positioned to benefit from the boom. With the right features, consumers are, too.

8 Simple Ways to Minimize Online Risk

As more of our lives and business are conducted online, the risk of having our information compromised or used against us increases proportionately. Strengthening your online security won't reduce your risk to zero, but you can plug the main gaps to address the largest potential issues. Here are eight simple tips that can help anyone minimize their risks.

1. Change social media settings
Posting photos on Facebook while out of town may seem harmless, but it’s a big sign that your house is empty, or that your family is home alone, while you are away. Make sure you change your privacy settings so that not everyone can see your posts. Ideally, restrict them so only your friends or direct connections can see. Even better, don’t post those photos until you return from the trip.

2. Use a VPN
VPN stands for “virtual private network,” which is just a fancy way of saying “protect my profile when I’m online.” With free wi-fi in coffee shops, hotels and airports, more and more hackers are using simple “man in the middle” attacks to trick people into logging onto their fake networks. From there, it’s easy for them to steal your information. Using VPN services such as NordVPN (where I work) keeps you safe from these hackers when out and about, masking your online presence. Bonus: using VPN from your home keeps you anonymous as well.

3. Know the risks of using cloud services
Online "cloud" services have brought a lot of convenience. Cloud services are simply those that allow you to access or share information online from anywhere and not just one computer. A perfect example is Google Docs vs. the traditional Microsoft Office. While the convenience is obvious, less obvious are the security risks that come from having all your information in the cloud. Even the largest cloud providers are hacked on a regular basis. The rule of thumb is to never put something on the cloud that you wouldn’t mind being stolen. Remember that popular file sharing services such as Dropbox come with their risks as well -- a Dropbox link is unsecured and can be accessed by anyone.

4. Read the fine print
Whenever you sign up for online services, you likely have had to scroll through a few pages of legal disclaimers. Included in the fine print might be important statements about how that company can use your information. Make sure to understand if your data can be provided to third parties for marketing purposes. If there are options to opt out, note what those are. If the privacy or terms of use policies are unacceptable to you, don’t sign up.

5. Smart password practices
It goes without saying that your online passwords should never be “password,” “abc123,” “admin” or anything easily guessed. If you use one password for all your online services, should a hacker breach one of your accounts, he could easily take down all of your accounts. That means your email, online bank accounts, mileage points … everything. Minimize this risk by using different passwords for different services and changing them every six months. It might seem like a hassle, but it’s less of a hassle than having your bank account breached.
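As a rough illustration of the different-password-per-service idea, here is a minimal Python sketch using the standard-library secrets module (Python 3.6 or later); the service names and the 16-character length are hypothetical examples, not a recommendation from the article.

    import secrets
    import string

    # Pool of characters to draw from: letters, digits and a few symbols.
    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

    def generate_password(length=16):
        """Return a random password of the given length."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One distinct password per service, so a breach of one account can't cascade.
    services = ["email", "bank", "airline-miles"]
    passwords = {service: generate_password() for service in services}

    for service, password in passwords.items():
        print(f"{service}: {password}")

In practice a password manager does this for you, but the principle is the same: every service gets its own credential.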

6. Use secured websites
Most web browsers, such as Google Chrome, will show a green icon in the URL address bar whenever you are on a website that is secured. Another indicator of security is if the website address starts with “https” vs. “http”. That little addition of the “s” means the site you are on is secured and safe to use. If you are shopping online or doing anything that requires you to provide sensitive data, make sure the website address starts with https.
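To make the look-for-https rule concrete, here is a minimal Python sketch (standard library only; the shop addresses are hypothetical) that refuses to treat an address as safe for sensitive data unless it uses the https scheme.

    from urllib.parse import urlparse

    def is_secure(url):
        """Return True only if the address uses the encrypted https scheme."""
        return urlparse(url).scheme == "https"

    # Hypothetical addresses: only the first should ever receive sensitive data.
    for url in ["https://shop.example.com/checkout", "http://shop.example.com/checkout"]:
        verdict = "OK to use" if is_secure(url) else "not secured - avoid entering data"
        print(f"{url}: {verdict}")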

7. Bypass phishing attacks
Scammers often send emails that look like they come from legitimate companies in the hope of tricking you into clicking on links and providing your password, social security number and more. These are called “phishing” attacks. The best way to avoid them is to simply bypass the email and go directly to the website by opening a browser. For example, if a bank sent you an email, don’t click on the link in the email. Instead, open a web browser and go to the bank’s website directly.
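As a rough illustration of why typing the address yourself matters, here is a minimal Python sketch (standard library only; the bank domain and links are hypothetical) that checks whether a link actually points at the domain you expect, the kind of mismatch phishing emails rely on.

    from urllib.parse import urlparse

    EXPECTED_DOMAIN = "mybank.example.com"  # hypothetical bank domain

    def link_matches_bank(link):
        """Return True only if the link really points at the expected domain."""
        host = urlparse(link).hostname or ""
        return host == EXPECTED_DOMAIN or host.endswith("." + EXPECTED_DOMAIN)

    # The displayed text of a link can say anything; the underlying address is what counts.
    for link in ["https://mybank.example.com/login", "https://mybank.example-login.com/verify"]:
        verdict = "matches the expected domain" if link_matches_bank(link) else "suspicious - type the address yourself"
        print(link, "->", verdict)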

8. Don’t forget anti-malware software
Almost everyone is familiar with antivirus software. Less common is “anti-malware” software. Many antivirus programs include the ability to scan for and prevent malware, but not all of them do. To be safe, supplement your existing antivirus software with quality anti-malware software such as Malwarebytes. Remember to always keep your security software up to date with the latest versions.

Will managed cloud stacks win out?

BY LARRY DIGNAN

Royal Dutch Shell CIO Alan Matula argues that the industry will move to managed cloud stacks and right now there are too many players focused on various parts. Here's a look at a few large vendors to see where they line up.

Alan Matula, CIO of Royal Dutch Shell, reckons that his relationship with his company's core vendors will change dramatically in the years to come due to cloud computing. Innovation, speed and agility will ultimately be the measuring stick for IT vendors. Ultimately, Matula is looking for a vertical cloud stack that is managed. Is lock-in a concern in the cloud? "If you look at your current footprint you'll realize you're already locked in somewhere," said Matula.

Matula was speaking at a recent SAP event unveiling S4/HANA. Matula spoke during SAP's analyst meeting about how he had dozens of HANA projects underway mostly for new applications. Royal Dutch Shell is a key example of how large enterprises are still betting on one or two core vendors to run their businesses.

Royal Dutch Shell is also looking at the cloud as a way to be more agile and transform the business. When I caught up with Matula, the most interesting thread was his take on the cloud market. When you combine the private and public cloud vendors it's clear in a hurry that the space is crowded. He said: The cloud space is crowded and there are many players offering various parts of the stack. The winners will offer a vertical stack from infrastructure to applications. Some will do it all themselves and others will partner. But the component model will break down. Every time you add a player you add margin. You can't have 30 partners in the cloud because of margin stacking. Right now it's interesting because there are vendors who have control of a part of the stack and they'll have to run the whole stack.

If Matula's theory is on target, it's worth surveying your technology supplier landscape and viewing it through his lens. Will the players with the applications ultimately win? Oracle and SAP sure hope so, and both Salesforce and Workday are pitching all-cloud bets. Will infrastructure cloud winners move up the stack? Amazon is moving up for sure. Can hardware vendors move beyond infrastructure? IBM and HP are betting that way. What does a software-defined world look like? Notice how VMware has moved well beyond virtualization. Here's a starter set on how to see your vendors through this managed-stack-meets-cloud lens. We'll start with the big guns.

1. Amazon Web Services. AWS made its way into the enterprise via infrastructure as a service, but is now offering mail, collaboration tools and other apps upstream. Those applications are table stakes, but it wouldn't be too surprising to see AWS move upstream. AWS is also likely to be a partner for other major vendors upstream in applications.

2. Cisco. Cisco provides the infrastructure and will be a clear enabler of private clouds. Cisco can offer a lot of parts of the stack from its networking perch. Cisco has moved to make its software easier to consume without hardware. That move makes it more of a contender in the managed stack environment.

3. EMC/VMware/Pivotal. These three suppliers all have the same parent and when you put them together, the managed stack picture comes into view. The EMC-owned trio has multiple parts of the stack. What remains to be seen is whether the trio of private and hybrid cloud vendors can really turn on the public services.

4. IBM. IBM has also put together infrastructure, platform and application parts. Not surprisingly, IBM is going into the stack through analytics, Watson and other key specialties such as e-commerce. IBM's stack - cloud or on premise - will remain key in large enterprises because the company is predictable and reliable.

5. Microsoft. The software giant is probably in your data center via Windows Server, on the front end with Office and sprinkled throughout your enterprise. Azure changes the cloud equation for Microsoft, which can bridge hybrid environments well and provide infrastructure to apps. Like Oracle, Microsoft is likely to play the managed stack game going forward.

6. Oracle. Clearly the company is betting on offering a red stack of stuff. Oracle is likely to be the largest cloud player with infrastructure-, platform- and software-as-a-service. The win (and profit margins) for Oracle will remain databases and applications so it will play in the infrastructure space to be able to upsell you later. From private to public cloud offerings, Oracle's strategy makes a lot of sense through the managed stack lens.

7. Salesforce/Workday. These two companies are likely to a) pitch all-cloud bets and b) increasingly be partners on deals. Given that both can abstract the infrastructure layer it's possible that the two vendors team up more and potentially add a player like AWS as a cloud trio.

8. SAP. SAP has the applications. S4/HANA will be a managed service soon and SAP will likely deliver most of its applications as a service. However, SAP is mostly an on-premise play despite the chatter to the contrary. What's unclear is whether SAP can offer a stack of infrastructure and platform if you don't also bet on HANA. SAP will likely have to partner. Amazon Web Services is a key partner for SAP.

Today, none of these aforementioned vendors - or ones I'm omitting at the moment, such as Google and HP - has the complete picture. They certainly don't have industry clouds - a retail, healthcare or utility stack - nailed.

Over the next five years, it's a safe bet that a lot of these players will go to the cloud dance together. What to beware of is the margin stacking scenario in the cloud.

Tablets, Smartphones May Interfere with Children's Social Development (by NATALIE SHOEMAKER)

We've all seen it at restaurants before: parents propping their smartphones or tablets in front of their toddlers to pacify them long enough to get through the meal. It's a wonder we have pacifiers anymore. However, Joanna Walters from The Guardian highlights a new study that speculates on the detrimental effects this kind of distraction may have on a child's ability to learn self-control.

The researchers ponder:

“If these devices become the predominant method to calm and distract young children, will they be able to develop their own internal mechanisms of self-regulation?”

Gadgets and tablets are still relatively new. Psychologists and scientists are, seemingly every day, making discoveries about how this technology impacts our day-to-day life and our development as functioning human beings. However, tablets aren't just portable televisions; there are games and interactive media through which children can learn. But Jenny Radesky, a clinical instructor in Developmental-Behavioral Pediatrics at Boston University, doesn't see the difference. She said in a statement:

"It has been well-studied that increased television time decreases a child's development of language and social skills. Mobile media use similarly replaces the amount of time spent engaging in direct human-human interaction."

The findings were published in the journal Pediatrics, where researchers suggest parents opt for person-to-person interaction over media (interactive or not). Radesky voices her concern over the use of gadgetry as a substitute for learning, as she fears it could impair a child's ability to empathize and problem-solve—social nuances that are learned during unstructured play. There's research that shows educational television and interactive media can help a child learn vocabulary and reading, but only once they begin to approach school age. Radesky suggests: "At this time, there are more questions than answers when it comes to mobile media. Until more is known about its impact on child development, quality family time is encouraged, either through unplugged family time, or a designated family hour." Using a smartphone or iPad to pacify a toddler may impede their ability to learn self-regulation, according to researchers.

In a commentary for the journal Pediatrics, researchers at Boston University School of Medicine reviewed available types of interactive media and raised “important questions regarding their use as educational tools”, according to a news release.

The researchers said that though the adverse effects of television and video on very small children were well understood, society’s understanding of the impact of mobile devices on the pre-school brain has been outpaced by how much children are already using them.

The researchers warned that using a tablet or smartphone to divert a child’s attention could be detrimental to “their social-emotional development”.

“If these devices become the predominant method to calm and distract young children, will they be able to develop their own internal mechanisms of self-regulation?” the scientists asked.

Use of interactive screen time below three years of age could also impair a child’s development of the skills needed for maths and science, they found, although they also said some studies suggested benefits to toddlers’ use of mobile devices including in early literacy skills, or better academic engagement in students with autism.

Jenny Radesky, clinical instructor in developmental-behavioural pediatrics at Boston University School of Medicine, published her team’s findings. She urged parents to increase “direct human to human interaction” with their offspring.

Radesky encouraged more “unplugged” family interaction in general and suggested young children may benefit from “a designated family hour” of quality time spent with relatives – without any television and mobile devices being involved.

The researchers pointed out that while there is plenty of expert evidence that children under 30 months cannot learn as well from television and videos as they can from human interaction, there has been insufficient investigation into whether interactive applications on mobile devices produce a similar result.

Radesky questioned whether the use of smartphones and tablets could interfere with the ability to develop empathy and problem-solving skills and elements of social interaction that are typically learned during unstructured play and communication with peers. Playing with building blocks may help a toddler more with early maths skills than interactive electronic gadgets, she said.

“These devices may replace the hands-on activities important for the development of sensorimotor and visual-motor skills, which are important for the learning and application of maths and science,” Radesky said.

There is evidence that well-researched early-learning television programmes, such as Sesame Street, and electronic books and learn-to-read applications on mobile devices can help vocabulary and reading comprehension, the team found, but only once children are much closer to school age. Radesky recommended that parents try applications before considering allowing a child to use them. “At this time there are more questions than answers when it comes to mobile media,” she said.

Your Smartphone May Be Robbing You of Your Best Ideas (by STEVEN MAZIE)

OK, smartphone user (yes, we know that most of you, at this very moment, are now peering down onto a rectangular screen), have you ever wasted time on your phone? Of course you have. Have you gone a day recently without devoting an excessive number of minutes to your phone? Reading Big Think is hardly “wasting time,” and there are plenty of other productive and worthy things to do on your phone. But those minutes add up, and a body of research shows that they may come at a grave cost.

Charles Townes, the Nobel Prize-winning physicist who died last month at the age of 99, credits a stint on a park bench in 1951 with the “epiphany” that led him to invent one of the most ubiquitous technologies of the 20th century: the laser. “On the morning of the last day of a futile meeting in Washington, D.C.,” according to the Los Angeles Times obituary, “Townes sat on a park bench and contemplated the issue.”

"So I took out a piece of paper and just scratched it out," he later said. Ultimately, he concluded, "Hey, this looks like it might work.” Excited, he returned to his hotel room and consulted with physicist Arthur Schawlow, a collaborator and friend who later became his brother-in-law. "I told him about it and he said, 'OK, well, maybe.' And so that's how the idea started," Townes said. "It was like a sudden revelation."

Would Townes have had this epiphany in the age of the smartphone? Maybe instead of spinning through the problem in his head on that bench, an iPhone-toting Townes would have caught a breather from the futility of his meeting by swiping through his email, tweeting or playing Words With Friends. We’ll never know, of course. But if recent research into the value of “mind-wandering” is any indication, the laser beams inside your DVD player and the bar-code scanner at your grocery store may owe their existence to Townes’ few minutes of quiet contemplation on a Washington, D.C. park bench.

In a series of experiments, cognitive psychologist Jonathan Smallwood has found a troubled relationship between distraction and creativity. It turns out that an idle mind is a wandering mind, and the more the mind wanders, the more likely it is to come up with novel ideas. Smallwood calls this experience “perceptual decoupling”: when your mind breaks free from constant attention to immediate perceptions in the here-and-now—like those provided non-stop from bright shiny screens—and goes, well, somewhere else.

The phenomenon is revealed in a study in which subjects “were given a number of everyday objects (such as a brick) and were asked to generate as many uses for them as possible.” After everyone spent a few minutes on this task, the group was divided into four. One group was given a 10-minute rest; a second was asked to perform a relatively challenging task involving working memory; a third was given an “easy choice reaction time task”; while the fourth just “moved on to the next phase of the experiment.” The most creative ideas about how to use the brick came from the third group, the one assigned to the easy, mindless task. The upshot? “[E]ngaging in simple external tasks that allow the mind to wander may facilitate creative problem solving.”

In a recent interview with Manoush Zomorodi of WNYC, Smallwood describes the “close link between originality, novelty, and creativity on the one hand, and the sort of spontaneous thoughts that we generate when our minds are idle.” Zomorodi, host of the show New Tech City, had noticed she was spending hours a day staring at her phone—checking it upwards of 100 times a day—and decided to launch the Bored and Brilliant project, a series of podcasts with a listener-participation component designed “to help you detach from your phone and spend more time thinking creatively.”

The week of February 2nd, Bored and Brilliant is issuing daily phone-use-reduction challenges to its subscribers, 84 percent of whom say they spend “too much” or “way too much” time on their phones. You can, if you like, join the 15,000 Bored and Brilliant participants by downloading the Moment app for the iPhone and keeping track of your phone usage. But be forewarned: the app, for me, was an infuriatingly blunt instrument, ringing alarms when you go over a standard quota of 90 minutes per day no matter whether you're playing Candy Crush, listening to music, or navigating with the GPS. It also drains your battery.

But signing up for Bored and Brilliant may help you become more thoughtful about how you use your phone, whether you opt to use the app or not. Monday's challenge is to leave your phone in your pocket or bag while in transit to preserve the possibility of creative discovery during your commute. "The smartphone," Smallwood says, "takes away the boredom, but it also denies us a chance to see and learn about where we are in terms of our goals.”

I can relate. I came to the smartphone relatively late in the game, buying my first (and current) iPhone just over two years ago. Increasingly a creature of online journalism, I use the phone all the time to read, write, Facebook, tweet, and correspond with my editors. But I've found recently that I lean on my productivity with the phone as an excuse for also using it unproductively: scanning headlines on The New York Times app when my paper copy is still sheathed in its blue plastic bag, reading friends' Facebook updates while walking down the street, checking e-mail a few dozen more times a day than I need to.

One way I counteract the overuse tendency is by doing a once-a-week iPhone cleanse, leaving my phone at home the day I run from work on the Lower East Side to my home in Brooklyn. It's a nice breather to know there is no technology to interrupt my train of thought during down times of the day. I find myself thinking more expansively on these days. I also find myself slowing down and noticing more around me.

Bad Habits to Blame on Technology

Jeff Hindenach

Technology pretty much runs our lives these days. From our work life to our home life, we have computers, phones, and gadgets helping us with our daily routine. But is technology taking over and shifting our societal norms? Is too much technology a bad thing? Sometimes we are so used to relying on technology to help us out, we forget how to act in a world devoid of it. We tend to blame technology for all those daily faux pas we commit. If you’ve ever heard or used (or even thought of!) any of these excuses, you’re not alone. Here are six of the most common bad habits we blame on technology.

“Hey, I know we’re having a conversation, but someone more important just texted me.”

You know the person: They take their phone out at the beginning of the meal or a conversation, and every 5 minutes or so they are checking to make sure they haven’t missed a text or email. Maybe that person is you! Nothing is more annoying than not having the attention of the person you are with, especially in a one-on-one situation. If you are in a group, it’s a little more acceptable. Regardless, your annoying habit is saying that you really don’t care about the conversation or company, and you have better things to worry about. If that’s not the message you want to portray, put your phone away.

“I’m breaking up with you over a text message so I don’t have to deal with this in person.”

Accountability has gone out the window with the rise of technology and the web. The Internet gives us the option of not dealing with the immediate fallout of a situation. If you are mad at someone, you can leave a nasty comment on their Facebook wall. If you want to break up with your boyfriend, but don’t want to deal with the tears, you can shoot him a text. The truth is, this solution only delays and amplifies the fallout. Now you have to deal with the original fight and explain the nasty Facebook post. Or you get the bad rap of being the girl who breaks up with guys via text. Hiding behind technology shows a lack of courage, and will only come back to haunt you in the end.

“Sorry I rear-ended your car, but I HAD to send this funny text to my friend.”

Texting and emailing while driving has become a dangerous pastime in this country. Actually, there’s a wide range of distracting activities people do while driving, but texting seems to be the most rampant. A whopping 81% of Americans admit to texting while driving, and around 30% of accidents are caused by it.

Bottom line: if your eyes aren’t on the road, you are being a reckless driver. You’re controlling a large, heavy piece of machinery, one that can crush an old lady or a group of girl scouts in a split second. If a message is so important that you have to send it right now, pull over to the side of the road before you text. It’s just safer.

“I know all my friends and family can see my Facebook updates, but I NEED to tell everyone how drunk I am right now.”

Facebook and Twitter have expanded the definition of TMI. Over-sharing has become a way of life for most, with little concern about how it might affect them later in life. Everything is searchable online these days. Want to post a status about how you were drunk and danced on a table at happy hour? Think about how it might affect a job search down the road. If you really must share every detail of your life with the world, at least set up a filter system within Facebook to limit what your family and coworkers can see. Set your privacy settings on all your social networking tools to the highest setting. You will save face with your family and possibly save your job.

“OMG, LOL!! That is crazeeeee! TTYL!”

What does that even mean? It seems that more and more these days, the English language is being passed over for phonetic spelling and a random string of letters. We’re all for being efficient and quick communicators, but does needing a decoder ring to decipher your message really save me any time? If you are texting good friends who understand your random acronyms, then feel free to keep using them. But if you’re sending texts or emails to family, coworkers, or, heaven forbid, your boss, keep the random spelling, shorthand, and emoticons out of the message.

“Sorry I’m an hour late, but I texted you to tell you I was running behind.”

You need to meet your friend in 15 minutes, and you haven’t even jumped in the shower. Oops. No problem, you can just text them and tell them you’re running late, right? Wrong. What if they are already at the place, because they like to show up early? Or what if they are already in transit? They still have to wait for you.

All concerns with being punctual have disappeared since you can now send a quick text saying that you aren’t going to make it on time. But texts don’t forgive all lateness sins. If you do it once in a while, you may be forgiven, but if you are texting “late” messages constantly, your friends might start to regard you as a flake.

What are your biggest technology pet peeves? What common courtesy do you wish people still abided by? Sound off in the comments!

Relying on Technology to Remember for Us Frees Up Cognitive Space

by NATALIE SHOEMAKER

As our minds move to the cloud, people fear that our reliance on storing our personal information and memories on external devices is making us weaker. Indeed, without our smartphones to help us remember birthdays and phone numbers, our internal memories become worse in these respects. However, BPS Research Digest writes on a study that argues there's a positive side to offloading this information: we make room to learn new things.

In a paper published in Psychological Science, Benjamin Storm and Sean Stone present evidence indicating how humans can free up cognitive resources to learn more. The study involved 12 undergraduate students — quite a small group — in several experiments. The researchers asked them to study two documents on the computer, each containing 10 words that they would be tested on later. Once they finished studying the first list, they saved the file and studied the second list.

The students were able to recall the words on the second list better than those on the first. It could be argued that the students were influenced by the order of the lists — the second list was fresher in their minds. But the researchers attribute the participants' higher recall of the second list to the fact that students were able to offload (i.e., save) the first file to the computer, enhancing their ability to etch the second into their minds. The researchers controlled for this scenario by making the saving process unreliable in some of the trials.

They write:

“... saving one file before studying a new file significantly improved memory for the contents of the new file. Notably, this effect was not observed when the saving process was deemed unreliable or when the contents of the to-be-saved file were not substantial enough to interfere with memory for the new file."

The researchers conclude:

“These results suggest that saving provides a means to strategically offload memory onto the environment in order to reduce the extent to which currently unneeded to-be-remembered information interferes with the learning and remembering of other information.”
