Today Google sat with a number of its Glass employees to answer a few formal questions, and several more from the audience that packed Room 7 of the Moscone Center in San Francisco. The session ran a brief 40 minutes, as have the other sessions at this year's I/O, excepting the mammoth 215-minute keynote that kicked the event off.
The following text is a reconstructed account of the questions asked and the answers given during the session. Do note that responses from Google staff are paraphrased, while the final section is a summary of the responses given to audience questions.
Directed at Isabelle Olsson, lead industrial designer of Glass: You have been on Glass “since the prototype was still a phone attached to a scuba mask.” How did we get from there to here? And what is next?
It was very clear from the start that this was not an incremental improvement on what already existed. Instead, Glass is something new: a new kind of wearable technology.
Building something like Glass is very messy at points. I will never forget my first day on the team: I walked into a room full of people with crazy things on their heads. How do you go from something like that to what we have today? We have followed a reductionist principle, removing everything that isn’t required.
We focus on three key areas: lightness, simplicity, and scalability.
By lightness, what we mean is pretty straightforward: we are obsessed with weight. Not in the way the fashion industry is, but we do care about every single gram. If Glass isn’t light, you won’t want to wear it for more than 10 minutes. This also involves balance, meaning how heavy Glass feels on your nose.
But lightness is also visual, which is part of the reason that we hid components behind the frame of the device itself. Regarding simplicity, we initially thought that it would require dozens of adjustment points. The current version sports just one.
Glass is built to be scalable. In this stage what that means is that you can remove the frame from the main board. The two are separable by a screw. Other companies can therefore create hardware for the guts of Glass to help prop it up on your face.
So, Glass is a hardware project as well as a software project.
Directed at Charles Mendis, an engineer working on Glass: The idea that Glass is both a device and a platform, could you expand on that?
We want Glass to have a big footprint. A core principle of Glass is that we build on the same APIs that you do. All Google products that run on Glass are built on top of the same APIs that are open to others. So, Google Now is built on the same API as the Facebook application.
A similar concept will be in effect for the Glass Development Kit, or GDK. However, we are looking for feedback in terms of what to build. We need to know what people are doing with Glass, what the Explorers are up to. We will build an API that suits what is needed. The GDK, however, is not right around the corner in terms of release. Instead, Google wants to incorporate feedback from the community.
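The shared API in question is Google’s Mirror API, a REST service through which apps insert “timeline” cards as plain JSON. As a rough sketch of what such a payload looks like (the field names follow the Mirror API’s timeline-item resource; the helper function itself is my own illustration, not Google’s code):

```python
def build_timeline_item(text, reply_enabled=True):
    """Build the JSON body for a Mirror API timeline card.

    Timeline items are plain dictionaries POSTed to the
    mirror/v1/timeline endpoint by an authorized client.
    """
    item = {
        "text": text,
        "notification": {"level": "DEFAULT"},  # buzz the wearer once
    }
    if reply_enabled:
        # Menu items let the wearer act on the card by voice or touch.
        item["menuItems"] = [{"action": "REPLY"}, {"action": "DELETE"}]
    return item

card = build_timeline_item("Hello from the fireside chat!")
```

Because every Glass app, Google’s included, speaks this same card format, Google Now and a third-party app sit on equal footing.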
Directed at Steve Lee, Glass product director: The Explorer has been an almost unprecedented way to introduce products. Why did Google pick that path?
Why are we here? Because we believe that Glass can transform the way that people use technology. The Explorer program is an important part of how Glass is being taken to the world. Who are the first people to get Glass? Developers!
To fully realize the potential of Glass we needed developers to work on the platform. I’m happy to say that by earlier this week we had invited all 2,000 Explorers to pick up their devices. The next group will be those who signed up using the hashtag #IfIHadGlass.
We picked 8,000 people from more than 100,000 applications. Invites will be rolling out soon to this non-developer group. It’s comprised of educators, dentists, and the like. We’re excited to see what they will do with Glass.
We’re going to update Glass every month. This will include both new features and bug fixes.
Directed at each in turn: If you could ask developers to build one thing on Glass, what would it be?
Steve: I am an exercise fanatic, and would love to have a fitness app on Glass, and to have it integrate with my heart rate monitor. With that app, I could have information relevant to my workout fed to me, without breaking my stride. This would also make cycling a much safer activity.
Charles: I would love to be able to pay with Glass. To just say, ok, pay, and then move on.
Isabelle: I’m really into karaoke. If there were a way to sing and have the lyrics displayed in Glass, so that you could face your drunk friends as you scream, that would be awesome.
Question: I’m curious what you think about the privacy questions that Glass raises, and if it is different from a smartphone in terms of etiquette, and how much data will be tapped [stored] by Google?
The social implications and etiquette have been at the top of Google’s mind since the beginning. And not just for people who buy Glass, but for the people around them as well. Google is proud of how seriously the team is taking it.
There are examples of how the technology has been designed with social implications in mind: The display is above the eye, for example. Google learned early on the importance of eye contact among humans. I know if you are paying attention to me, as we have eye contact. And I know if you are looking up and not paying attention to me, but instead to Glass.
Many privacy questions relate specifically to the camera. Google, knowing that it mattered, decided that you had to either push a button, or speak to it, to activate it. This provides a social cue as to what you are doing, akin to holding up a smartphone when you want to take a picture.
A third example: When Glass is active, its display is lit. Observers can see that. The display will be active when Glass is active, period. That will be part of the GDK, and is part of Google policy. Apps will not be allowed to fail this requirement.
Finally, you have to stare at someone to record them. If you stare at someone in the bathroom, they are going to notice.
Follow up question: Facial recognition?
Facial recognition is something that Google has worked on. They can imagine it existing through a third party. The company appeared to decline stating that they would build it themselves, likely to avoid painfully ignorant headlines.
The company is “not scared” of it, but wants to ensure that it has clear user benefit.
Question: Is the side of Glass multitouch?
It is! And Google claims that it intends to improve it. Right now, for example, a two-finger swipe down motion causes a different action than a one-finger swipe down action.
Question: Are you planning on making Glass less noticeable?
Google built five different colors of Glass in order to satisfy different personalities. After wearing them around for almost a year, Google “started seeing how important colors are.” In the company’s view, they are “more important than you would ever imagine.” The company intends to release new colors in the future.
Question: How does the display work? Is it a small projector? An LCD display? And, do you see people using it for short-term interaction, or for longer-term interactions?
Google had no comment on the details of how its display technology works. It did note, however, that the screen is ‘projected’ out around six or seven feet, or at least that is how it feels.
Regarding the duration of interaction with Glass, the company does not intend it to be used to watch a full-length movie or read a book. That, Google said, would be uncomfortable. The company instead imagines that interactions with Glass will be on the “micro” level.
Question: How did you pick the first five colors for Glass, and how did you test them?
Google built monthly prototypes of Glass, which gave it the benefit of being able to produce a number of color options. The team then watched to see which colors were fought over. This, Google said, was a good way to see what was “resonating” with people.
The company wanted “poppy” colors for folks who wanted to be spoken to, and “bland” colors for those who did not.
Question: How do you reach the mainstream with the promise of Glass?
Google claims to be surprised at the reaction to Glass, not only in Silicon Valley, but also among normal people. Regular folks, like the rest of us, are still trying to figure out what Glass is, and what it might become.
The company, however, does in fact think that Glass will become part of the mainstream. That was a surprisingly bold statement.
Question: What does Google need in terms of better components for Glass?
Given that Glass shares much with the smartphone industry, its needs are similar: better, higher-capacity batteries. That technology, unlike processors, is not “doubling every year.”
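The gap the answer alludes to is just compounding: even a generous battery-chemistry improvement rate falls hopelessly behind anything that doubles annually. A quick illustration (the 5%-per-year battery figure is my own assumption for the sketch, not Google's number):

```python
years = 5
processor_gain = 2 ** years    # "doubling every year", per the quote
battery_gain = 1.05 ** years   # assumed ~5% yearly capacity improvement

# After five years, processors have pulled roughly 25x ahead of batteries.
ratio = processor_gain / battery_gain
print(round(ratio, 1))
```

Small annual differences in growth rate compound into an enormous gap, which is why battery capacity is the binding constraint for a wearable like Glass.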
Question: Bluetooth headsets enjoyed a time in the sun, but later became reviled as a negative cultural marker. Does Glass risk the same fate?
This is something that Google has specifically thought about. People, Google stated, never say that they enjoy being around folks wearing a Bluetooth headset. By contrast, Google wants the people who spend time around Glass wearers to have their lives enriched.
Read more from the original source: Google’s Glass fireside chat: Ugly prototypes, privacy and its potential to go mainstream
The following is an excerpt from my new book Don’t Go Back to School: a handbook for learning anything.
To someone who has never tried, it’s not obvious how to learn the things you want to learn outside of school. I’m on a mission to show you how. To do that, I became obsessed with how other people learn best, and how they do it without going to school.
My research based on interviews with 100 independent learners revealed four facts shared by almost every successful form of learning outside of school:
This interview with Harper Reed is a great example of how independent learning works. Reed served as the Chief Technology Officer for Obama for America during the 2012 election; before that, he was CTO at Threadless. He is an engineer who builds paradigm-shifting technology and leads others to do the same.
I love computers and I’ve always been around computers. I can’t really talk about education without talking about computers. I went to high school and I actually really loved it. I took all the classes I could, I was prom king, student council president. I did everything I could to be more involved in high school and that is obviously not the normal path you’d expect for a computer geek.
But, along with that, I was constantly getting into trouble with computers. Never with the cops, but I was always getting banned from all the computers in the school district. Then, they would let me back in, and I would mess up again for whatever reason. It happened over and over. I was caught in this dichotomy of trying to be involved, but whenever I was trying to get involved with computers, I messed it up because I was curious and experimenting outside what was allowed. After that, I went to a small liberal arts college. I studied history along with computer science, because I knew ultimately I was going to work with computers and I wanted to learn something else, too. I studied Catholic history and the history of science, which overlap a lot. I’m not Catholic. I’m not a religious person at all, but it was really fascinating to learn all of the idiosyncrasies of Galileo and Bruno and all these different weird scientists who got burned at the stake for their discoveries.
I realized about probably three-quarters of the way through my education that in terms of computers, I actually wasn’t learning anything I needed to learn to get a job later on. I did learn some coding concepts in college, but more importantly I figured out that I’m an experiential learner. I need to put my hands on things and really see them, and really chew on them. It was better to do it in a real context, where it mattered if I did it right. Like where there was money at stake. So, I did an internship in Iowa City, IA. I worked for a real company that was trying to make a profit. The company built ecommerce apps. As an intern I started learning web apps to build web pages. Given my way of learning, it was fascinating to see how the management dealt with me. I was a child. I asked questions like a child does. “Why is the sky blue?” They just said, “It’s just blue. Go with that.” I said, “No! Tell me why we’re doing it this way. What is this?” It was client services, so we were just doing it because the client wanted it done, with no thought behind it. But all the questions I asked gave me this opportunity to see how things worked and the value of asking things that seemed obvious to everyone else. It gave me a lot of hope. It really kicked off the career that I have now.
The methods I used to learn technology don’t work for everything. I’m struggling with learning Japanese. My wife is Japanese and I want to learn the language, but I don’t know how. I take classes, I fail, it doesn’t work out. I have to figure that out. With technology, I immediately find a problem I want to solve. It’s usually about learning a new programming language or a new technology. If it’s a real problem, I want to get to where I can actually picture the solution and be able to see it through from the beginning to the end. For me, I can’t learn from videos. That just doesn’t do it for me, although there’s a lot of video learning right now. I find it very frustrating. So usually what I do is I just go through a tutorial of some sort and then really start iterating, doing it over and over. I start trying to be creative on top of that, and say okay, now that I can figure out how to do this, how would I use it? So I set a new goal pretty close in difficulty, and when I achieve that, I do that again, until suddenly I’ve learned something. When you’re in that process, it can also be the best time to teach someone else. A tech writer named Mark Pilgrim, who writes manuals for learning coding languages, including Dive Into Python and Dive Into HTML5, said, “The best time to write a book about something is while you’re learning it yourself.” So you know what’s hard to learn and can talk in an excited, confident, honest way about how you got to the place where it’s not hard anymore.
For me this whole process is really collaborative. I treat everything like I’m the CEO of my life. CEOs have boards of directors and boards of advisors, groups of people they rely on for help and advice to be successful. I think every person should treat their life like that. So, if I’m stuck, I know I can reach out to a buddy, or I can reach out to my brother. I know I can reach out to these people who are experts in whatever I’m trying to do. I try to surround myself with incredibly smart people who are often, if not always, smarter than me. Because other people are so important to learning, I also think one of the most significant things about the internet is the democratization of access. Anyone can email you about self-learning and you’re probably going to respond. Probably. I think it’s about how you phrase it. We are all very busy, but we’re probably going to respond if you approach it efficiently.
You can learn a lot about this from a really good book called Team Geek by Brian W. Fitzpatrick. It’s actually about project managing software development geeks, but it applies to most things with communication. It should really be called “Interacting with People,” because all it is, is just little tricks on how to interact with people, how to make those interactions better. There’s a section called “Interacting with an Executive,” and that part should be called “Interacting with Busy People.” It says if you want to connect with someone who is very busy, tell them three bullets and then a call to action.
So if someone wanted help from me, it might go like this: “Harper, I’m interested in what you’re doing with the campaign. I’m going to be doing technology for a campaign in the coming election. Do you have a hint for product management or project management software that you guys use?” I can answer that quickly. It’s very simple. Then all of a sudden there’s this person who probably wouldn’t have had an opportunity to talk with me, and I can help them out. I love what that kind of efficient communication does for you.
Kio Stark is a writer, researcher, teacher, and passionate activist for independent learning. She teaches at NYU’s Interactive Telecommunications Program. She is also the author of the novel Follow Me Down. You can find out more about her work at KioStark.com.
Glympse has been in the news for its deals with the likes of Ford, Mercedes Benz and BMW/Mini to integrate its location-sharing and tracking technology into in-car systems on connected automobiles. Today it’s taking its expansion strategy one step further, with the release of a new software development kit, giving app developers and others the ability to include Glympse-powered location-sharing technology into their services with a few lines of code.
The news comes during a time when social-mapping technology is in the news, with Facebook reportedly in the process of acquiring Waze for up to $1 billion, and Alibaba investing nearly $300 million into AutoNavi in a strategic alliance to develop location-based commerce and other mobile navigation and mapping services.
While Waze has developed a way to collate crowdsourced mapping and traffic data, Glympse doesn’t create the maps themselves — as you can see in the example below, the map data can come from Google, but also Microsoft’s Bing, Open Streetmap and others — but its location-tracking technology effectively lets you create a real-time trail showing your route to a particular location.
The resulting maps are animated routes tracking your movements along with other data like the speed at which you’re travelling, travel time, and expected arrival time. A person can also make the data ephemeral (like Snapchat!) by giving it an expiration date that limits how long it can be accessed.
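Glympse hasn’t published its internals, but the ephemeral-trail idea is easy to model: a share is a list of timestamped points plus an expiry, after which viewers are denied access. A minimal sketch (all names here are my own invention, not Glympse’s actual API):

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class LocationShare:
    """A self-expiring trail of (timestamp, lat, lon) points."""
    duration_s: float                       # how long viewers may watch
    created_at: float = field(default_factory=time)
    trail: list = field(default_factory=list)

    def add_point(self, lat, lon):
        self.trail.append((time(), lat, lon))

    def is_expired(self, now=None):
        now = time() if now is None else now
        return now - self.created_at > self.duration_s

    def view(self, now=None):
        """Return the trail, or None once the share has expired."""
        return None if self.is_expired(now) else list(self.trail)

share = LocationShare(duration_s=900, created_at=0.0)  # 15-minute share
share.add_point(47.61, -122.33)  # one point on the route
```

The expiry check lives on the read path, so once the window closes the trail simply stops being served, which is the Snapchat-like property described above.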
Bryan Trussel, CEO and co-founder of Glympse, says that a number of companies are already approaching Glympse for ways to integrate its technology into new applications, in areas that the company just doesn’t have the resources to tackle itself right now. One of these involves integration into apps around air travel: tracking where a person is as their plane flies from point A to B, useful for someone waiting to pick that person up from the airport.
Trussel says that the SDK will effectively be a version of the private APIs that Glympse already provides to partners like the car companies and others like Garmin.
It comes at a time when Glympse continues to expand that partner list and push into other verticals. “We’ve done a major partnership every six months, and we plan more, at the rate of one every couple of months,” he said in an interview. “Some car partners but the majority will be outside the automotive space.” This could also extend to licensing deals for the Glympse technology to start appearing on mobile devices. In fact, a number of companies outside the automotive space are already using Glympse’s technology. They include Gripwire (app development), PetHub (pet protection) and Runtriz (hospitality solutions).
Glympse will be offering use of the API free of charge to implementations of 300,000 users or fewer, in the form of a Lite SDK. That free SDK will include the ability to add Glympse functionality to a mobile app, as well as a Map Tool for developers to create and host a custom Glympse Map. The SDK will let developers add GPS and location management, contact integration and viewer permissions, as well as the code for a user interface that lets users share their location from within the third-party app.
Glympse says that a further, paid commercial SDK is designed for developers and enterprises that expect more than 300,000 monthly active users, or need more support, flexibility with user experience flow, or the ability to create more custom features.
So why offer an API only now? Trussel says that Glympse has had a lot of incoming requests to use the platform from the beginning, but “we decided not to lead with the platform because we wanted to have it stable and documented. Having an SDK means dealing with support and questions, and we spent our resources working with customers directly and refining the platform. Now we are at the point where our partners are using the platform in identical ways, so we can handle a variety of people using it in a lot of different ways. The timing will be right for us.”
Glympse has to date raised $7.5 million from investors that include Menlo Ventures and Ignition Partners.
Originally posted here: Glympse Launches Its First API To Put Location Sharing Into Any App Or Platform
If you’re attending Google I/O this week, you will be part of an experiment from the Google Cloud Platform Developer Relations team. On its blog today, the team outlined its plan to gather a wealth of environmental data from around you as you meander around the Moscone Center.
In the blog post, Michael Manoochehri, Developer Programs Engineer, outlines his team’s plan to place hundreds of Arduino-based environmental sensors around the conference space to track things like temperature, noise levels, humidity and air quality in real time. The project was spawned by a fascination with knowing which areas of the conference were the most popular, so it will be interesting to see what the information the team gathers actually tells us.
At first glance, this seems a little bit creepy, but it’s no different than a venue adjusting the cooling system based on the temperature inside at any given moment. As with anything that Google does, this could have implications for tracking indoor events or businesses in the future, as Manoochehri shared:
Networked sensor technology is in the early stages of revolutionizing business logistics, city planning, and consumer products. We are looking forward to sharing the Data Sensing Lab with Google I/O attendees, because we want to show how using open hardware together with the Google Cloud Platform can make this technology accessible to anyone.
Notice the wrap-up of wanting to show people how open hardware combined with Google’s Cloud Platform benefits everyone. Ok, sure. What could data like this mean for businesses, though? Well, a clothing store would be able to track how many people came in and browsed, which areas of the store were hot-spots for interest and then figure out how their displays converted. It’s like real-world ad-tracking. It makes sense, but still seems a long way off.
What will be interesting is not each dataset that is collected, but what all of them tied together tell us about our surroundings:
Our motes will be able to detect fluctuations in noise level, and some will be attached to footstep counters, to understand collective movement around the conference floor.
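Turning that kind of mote data into a popularity ranking is straightforward aggregation. A sketch, with made-up readings standing in for the real sensor feed (zone names and noise values are invented for illustration):

```python
from collections import defaultdict

def rank_zones(readings):
    """Rank conference zones by mean noise level, a rough crowd-size proxy.

    `readings` is an iterable of (zone, noise_db) pairs, as a stream of
    mote reports might deliver them.
    """
    totals = defaultdict(lambda: [0.0, 0])  # zone -> [sum, count]
    for zone, noise_db in readings:
        totals[zone][0] += noise_db
        totals[zone][1] += 1
    means = {zone: s / n for zone, (s, n) in totals.items()}
    # Loudest (busiest) zones first.
    return sorted(means, key=means.get, reverse=True)

sample = [("room7", 72.0), ("lobby", 60.0), ("room7", 70.0), ("lobby", 58.0)]
print(rank_zones(sample))
```

Footstep counts, humidity, and temperature would slot into the same pattern as additional per-zone columns, which is what makes the combined datasets more interesting than any single one.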
Of course, none of this information is personally identifiable, but the thought of our collective steps, movements and other ambient output being turned into something usable by Google is intriguing to say the least…and yes, kind of creepy.
If this particular team can share all of the data it collects in an easy to digest way, then businesses will be clamoring to toss sensors all over their stores and drop the data on whatever cloud platform that will host it the cheapest. Google would like to be that platform.
During the event, the team will hold a workshop on what it calls the “Data Sensing Lab,” so if you’re interested in learning more about what the team is gathering as you walk around, this would be the place to go. You’ll also be able to see some of the real-time visualizations on screens set up throughout the conference floor.
We’ll be covering all of the action as we’re being covered by Google.
Cloud storage company Box has acquired HTML5 document embedding service and Y Combinator alum Crocodoc, both companies announced in a press briefing today. Financial terms of the deal, which was a cash and stock transaction, were not disclosed; however, Box CEO and co-founder Aaron Levie said that it was a successful exit for investors. Crocodoc has raised a little over $1 million in funding from Y Combinator, SV Angel, Paul Buchheit, Joshua Schachter, Dave McClure, Steve Chen and XG Ventures.
What Is Crocodoc?
Crocodoc was founded in 2007 by four MIT engineers, but eventually pivoted in 2010 to kill off Acrobat. The startup’s initial Flash-based technology allowed you to upload a PDF, and receive a version of the same document in your browser, which you could then share with coworkers and annotate with notes, highlighting, text, and a pen tool, with changes that show up to other users in real time. In 2011, Crocodoc launched this technology in HTML5 for mobile embedding.
More than 100 companies, including Dropbox, LinkedIn, Yammer, Facebook and SAP, license (and pay for) the startup’s document-embedding technology, and Levie says the company has been able to build a “strong business model.”
For example, Dropbox has used Crocodoc’s HTML5 document viewing solution to allow their users to view documents in their web browsers and mobile devices without having to download large files or use desktop software (you can see an example here). Via LinkedIn’s Recruiter product, Crocodoc enables recruiters to upload candidates’ resumes in Word and PDF formats without having to download files and open them using desktop software.
Customers can also customize the appearance and behavior of Crocodoc’s viewer and access built-in commenting, annotations, highlighting and drawing tools. Crocodoc, which now has seven employees, says that it has powered 189 million document previews and 14 million document annotations.
Also worth noting — earlier this year, Crocodoc launched a new version of its converter, which uses both HTML5 and scalable vector graphics (SVG). With the last version of the player, text was overlaid on top of the image using HTML web fonts. The newer version displays everything in the document as HTML5 and SVG, making for crisper lines and shapes in the converted documents. Documents also load significantly faster, as the browser won’t have to load a large image to display.
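The crispness gain comes from emitting real vector `<text>` elements instead of overlaying web fonts on a rasterized page image. A toy sketch of the idea (the function and its inputs are my own illustration, not Crocodoc’s converter):

```python
def words_to_svg(words, width=612, height=792):
    """Render positioned words as an SVG document string.

    `words` is a list of (x, y, text) tuples, roughly what a PDF
    converter recovers for each run of text. Because the output is
    vector text rather than an image, it stays crisp at any zoom
    level and avoids shipping a large raster to the browser.
    """
    body = "".join(
        f'<text x="{x}" y="{y}">{t}</text>' for x, y, t in words
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{body}</svg>')

svg = words_to_svg([(72, 100, "Hello"), (120, 100, "world")])
```

A real converter also has to recover fonts, line geometry, and embedded shapes, but the core trade-off (vectors over raster plus overlay) is the same.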
As Levie explained today, Box acquired Crocodoc because the company wants to reimagine what documents look like in the cloud. “We’re focused on building the simplest way to let businesses store and manage documents anywhere, and were looking for ways to change how users interact with content,” he says.
We’re told that Crocodoc will continue to be operated and licensed to existing and new users, but Box will integrate Crocodoc’s technology into its own cloud storage platform to give customers seamless use of the embedder and viewer. And there’s much more that Box and Crocodoc CEO Ryan Damico want to do with the product within the Box family. Damico, who will become Box’s director of platform, will be running content services for the company, and the entire Crocodoc team will be joining Box.
Next up for the product? Damico explains that more secure document viewing, mobile collaboration, real-time presentation, form-filling and document authoring will all be added in the coming year. Levie says there will also be a new version launching later this year with new viewers, like a flip-book-style technology, as well as a carousel experience for documents. There will also be new branding around the Box Platform, he added.
Sam Schillace, Box’s VP of engineering who was also one of the founders of Google Docs, explains that Crocodoc’s technology doesn’t look or feel like enterprise software. “It looks so beautiful and polished, and it is a standard all have to shoot for when viewing documents,” he says.
With 15 million users, and 150,000 businesses across retail, health care, financial services and more, Box is growing fast as it eyes a potential public offering in the next year. Part of growing further will be adding compelling features to the user experience. Levie says that 2 billion content events happened in Q1 alone, so thinking about new ways to improve content experiences makes sense. And Crocodoc is an interesting move considering that its technology is used by one of Box’s main competitors, Dropbox.
It’s no secret that Dropbox has its own ambitions around content, as explained by AllThingsD earlier this year.
But Box believes that they, along with Crocodoc’s technology, can be the leader in improving every experience you have with documents on the Internet. Similar to the way that YouTube remade the online video experience and Facebook and Flickr reimagined the photo experience, Box wants to make embedding documents less clunky.