I need someone who is able to quickly help me troubleshoot an issue I’m having with an Optimizepress page and implementing GAE experiment code. It should be a simple fix for someone familiar with both programs.
This is a one time job, with possib…
Category: IT & Programming > Technical Support
Type and Budget: Hourly (Not Sure)
Time Left: 2 d, 23 h (Ends Feb 8, 2013 4:35 pm ET)
Start Date: Feb 5, 2013
Client Info: 6 jobs posted, 50% awarded, $94 total purchased, Payment Method Verified
Client Location: United States
Preferred Job Location: Anywhere
Desired Skills: WordPress, Google Analytics Content Experiments, Optimizepress
Job ID: 37559478
In the few years I’ve been in Silicon Valley, if someone asked me to sum up — in one word — what defined and dominated consumer technology applications during that time, I’d have no choice but to answer: “Photos.” Now, it’s easy for others to sit back and roll their eyes at the thought of it. “Why not solve big problems?” an aggravated chorus might wail. Looking back over this time period, the big events touching on digital pictures gained outsized attention: The launch of iPhone 4, with its incredible camera; the meteoric rise and acquisition of Instagram; the technical achievement unlocked by Lytro; the influence of the Pinterest design on nearly every e-commerce site; our narcissistic addiction to Timehop or delight in depositing checks through our bank’s mobile app; today’s fascination with exploding pictures, courtesy of Snapchat; and on the horizon, one of the most anticipated interface advancements: Google Glass.
Because our digital photographs are inherently social objects, and because mobile platforms now create a massive global audience for companies to directly provide tools and software to, digital pictures have transformed into a strange, fierce battleground where bigger platform giants test the limits of privacy, throw friendly punches at their rivals, and experiment with new business models. Most coverage and analysis of this trend focuses on the end consumers, but for the purposes of this post, let’s consider digital photographs as being a byproduct of the advances in mobile camera technology, as the camera itself is perhaps the single most important sensor on our phones and tablets.
As the camera sensor improves over time, the opportunities around photo sharing increase with it. My belief, therefore, is that we are still in the early innings of this digital photography craze: if you’re tired of the meme, brace yourself, because it will take years to unfold, and if you’re excited about this future, it’s a great time to get your hands dirty. First, let’s briefly consider what hardware changes we can anticipate: device battery life and processing speeds will only improve with time, perhaps opening the door for richer image capture, video capture, and so forth, and RGB-D (“D” for depth) cameras will allow devices to capture not only more of the light field, but also the depth of objects from the sensor and in relation to each other, which has big implications for the advancement of 3-D modeling. (Google Glass will eventually, we hope, have huge implications for passive image capture, both for consumers and for commercial activities.)
As the hardware advances the camera sensor capabilities, software will not be far behind. So far, consumers have glommed onto applications that filter, organize, and/or erase photographs. Lytro’s technology enables capture of the entire light field for post-capture focus (though I’m not sure whether their technology will be incorporated or licensed in other devices). Moving forward, I’d expect more camera applications to offer more context around each photograph, such as auto-image/object detection, auto-tagging and classification (especially around location, such as what Findery is working on), and auto-arrangement or organization. I’d also expect more technologies (like Stipple) to auto-fingerprint photographs to preserve their provenance as they’re shared across the web or before being screenshot and manipulated on mobile devices.
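To make the fingerprinting idea above concrete, here is a minimal sketch in Python. The `registry` store, function names, and example URLs are all hypothetical; real provenance systems like the Stipple service mentioned in the post presumably use perceptual hashes that survive resizing and recompression, whereas a plain cryptographic hash only matches byte-identical copies.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest serving as a naive image fingerprint.

    Note: a cryptographic hash only matches byte-identical copies;
    production systems would likely use a perceptual hash instead.
    """
    return hashlib.sha256(image_bytes).hexdigest()

registry = {}  # fingerprint -> original source URL (hypothetical store)

def register(image_bytes: bytes, source_url: str) -> str:
    """Record where an image was first published; return its fingerprint."""
    fp = fingerprint(image_bytes)
    registry[fp] = source_url
    return fp

def lookup(image_bytes: bytes):
    """Identify a re-shared copy, as long as its bytes are unchanged."""
    return registry.get(fingerprint(image_bytes))
```

Even this naive version illustrates the workflow: fingerprint at upload, then match any later copy back to its source.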
The tricky thing here is that only in hindsight do filters or boards or exploding pictures make sense. “Of course teenagers will want to make their pictures disappear like Tiger texts!” Therefore, in the future, we won’t really know what consumers will want until we see the new applications and experiments live, in the wild, and monitor their usage. What is certain, however, is that all of these advances will create opportunities for developers to build new applications and advance the collection, documentation, recognition (and more manipulation, including distortion), and sharing of digital pictures in ways that we’ve yet to imagine.
While I’m unable to articulate just what this future will look like, I do feel we will see more and more activity in this space: more experiments, more applications, and more things that start out looking like toys but may ultimately become new channels and modes of communication, along with the societal implications we can expect when nearly everything “can” be captured. And we don’t even know how the world will react to the new image-based interfaces that will surely come to market, starting with Google Glass, perhaps a touch-enabled television with its own camera, and so much more. When it comes to images, it’s early innings, indeed.
Photo Credit: Tony Ratanen / Flickr Creative Commons
Go here to see the original: Iterations: It’s Early Innings For Digital Pictures
Here’s something quite interesting coming from Facebook on the day it launched its Snapchat competitor, “Poke.” It seems the company is testing out a design that is a bit more…personal.
We’re familiar with the prompts within the status box when we log into Facebook and sit on our news feed. Today, I noticed something different. It asked me, by name, what was going on. We’ve reached out to Facebook for comment, but these are the types of design and user experience tests that we see in the wild all of the time.
This might be an interesting technique to get people to share more. Seeing it was like picking up a call and hearing “Hi, Drew” rather than “May I speak to Mr. Olanoff?” That slight difference might increase the number of status messages that people share on a daily basis.
At least, that’s what Facebook is hoping, I’m sure. Either way, it’s an interesting experiment, and one that makes Facebook look a little more human, which is something that it’s been lacking of late. Sure, it’s a social network, but when your friends’ content is surrounded by ads all of the time, you tend to lose that personal touch.
Having said that, the more items that are shared, the more often you’ll come back to the site, and the more opportunities Facebook has to show you ads.
Have you seen this on Facebook?
Update: A Facebook spokesperson told us: “This is something we’re testing … We’re testing different variations to see how people like them.”
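Facebook hasn’t said how it splits users into these test variations, but this kind of experiment is typically run with deterministic bucketing, so the same person always sees the same variant. Here is a generic sketch of that idea; the function name, experiment label, and use of MD5 are illustrative assumptions, not Facebook’s actual mechanism.

```python
import hashlib

def assign_variation(user_id: str, experiment: str, n_variations: int) -> int:
    """Deterministically bucket a user into one of n variations.

    Hashing the (experiment, user) pair keeps each person in the same
    bucket across visits, which is what makes a UI test like this one
    measurable. Generic sketch only, not Facebook's real system.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variations
```

A user assigned to variation 1 of a two-arm test, for example, would see the personalized “What’s going on, Drew?” prompt on every visit while the experiment runs.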
[Photo credit: Flickr]
Read more from the original source: Facebook’s Status Box Gets More Personal, Asks You “What’s Going On?” By Name
Adobe’s recently launched Edge Web Fonts service has our attention, as it extends the software giant’s offering of free and open source fonts to users who aren’t interested in the restrictions of a Typekit trial membership or the Adobe-less Google Web Fonts.
The service is great because it doesn’t require you to sign up at all, is hosted for free by Typekit and is easy to implement on your site (see our beginner’s guide), but there’s one glaring issue that Adobe has yet to resolve: discovery.
Right now, in order to pick a typeface, you have to use a plain drop-down menu.
[Screenshot: Adobe’s font-picker drop-down. Original caption: “Not cool.”]
To combat the poor font-picker tool that Adobe provides on its site, designer/developer Tony Stuck created Edge Fonts Preview, which lays out all of the available typefaces so anyone can compare and contrast what’s available, instead of fumbling with a drop-down menu.
Stuck told TNW that, in addition to needing a better way to browse Edge Web Fonts, he used this project to experiment with tools like Yeoman, RequireJS and Backbone.Marionette. Additionally, since there was no available API, Stuck had to scrape the Edge Web Fonts site “to pull out all of the fonts, names, and their associated styles (via the WebKit console).”
Stuck says that more features are on the way, but for now we’re happy with this simple-yet-helpful site and hope that Adobe doesn’t take any offense to it (they shouldn’t). Check the site out via the link below, or visit the project on GitHub and learn more about how it was created on Stuck’s blog.
Image credit: Thinkstock
The Chrome Experiments team said it has reached 500 experiments, hoping to help move web innovation further. Here’s what they had to say about it:
Today marks our 500th experiment, and in celebration, we created Experiment 500 as a thank you note to everyone who submitted their work to the site. It’s an array of interactive particles, each one of them corresponding to a different submission. You can sort them by date or by category.
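The sorting behavior the team describes is simple to picture in code. Here is a small sketch of ordering submissions by date or by category; the records, field names, and dates below are illustrative placeholders, not actual Chrome Experiments data.

```python
from datetime import date

# Hypothetical submission records; in Experiment 500, each one would
# drive a single interactive particle. Dates here are illustrative.
submissions = [
    {"title": "WebGL Water Simulation", "category": "WebGL", "date": date(2011, 9, 30)},
    {"title": "Lights", "category": "WebGL", "date": date(2012, 1, 18)},
    {"title": "Browser Ball", "category": "Canvas", "date": date(2009, 5, 27)},
]

def sort_experiments(items, key="date"):
    """Order submissions the way the gallery describes: by date, or by
    category with date as a tiebreaker."""
    if key == "date":
        return sorted(items, key=lambda s: s["date"])
    return sorted(items, key=lambda s: (s["category"], s["date"]))
```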
As you browse the experiments, you’ll notice that Chrome Experiments has evolved along with the web in the last 3.5 years. After Google Chrome added support for WebGL, for example, we started seeing beautiful 3D graphics experiments like Evan Wallace’s WebGL Water Simulation and HelloEnjoy’s Lights.
With so much buzz about mobile these days, the desktop often gets forgotten. Remember, though: the Chrome web browser is now the #1 browser. Who would have thunk it?
You should head over to the site; there are some pretty awesome things to check out and waste a Friday on.
View original post here: Google Chrome Team Celebrates 500 “Experiments”, Web Innovation Continues To Move Forward