Google Open Source Blog

News about Google's Open Source projects and programs.

The latest round of Google Open Source Peer Bonus winners

Google relies on open source software throughout our systems, much of it written by non-Googlers. We’re always looking for ways to say “thank you!” so 5 years ago we started asking Googlers to nominate open source contributors outside of the company who have made significant contributions to codebases we use or think are important. We’ve recognized more than 500 developers from 30+ countries who have contributed their time and talent to over 400 open source projects since the program’s inception in 2011.

Today we are pleased to announce the latest round of awardees, 52 individuals we’d like to recognize for their dedication to open source communities. The following is a list of everyone who gave us permission to thank them publicly:

  • Philipp Hancke - Adapter.js
  • Geoff Greer - Ag
  • Dzmitry Shylovich - Angular
  • David Kalnischkies - Apt
  • Peter Mounce - Bazel
  • Yuki Yugui Sonoda - Bazel
  • Eric Fiselier - benchmark
  • Rob Stradling - Certificate Transparency
  • Ke He - Chromium
  • Daniel Micay - CopperheadOS
  • Nico Huber - coreboot
  • Kyösti Mälkki - coreboot
  • Jana Moudrá - Dart
  • John Wiegley - Emacs
  • Alex Saveau - FirebaseUI-Android
  • Toke Hoiland-Jorgensen - Flent
  • Hanno Böck - Fuzzing Project
  • Luca Milanesio - Gerrit
  • Daniel Theophanes - Go programming language
  • Josh Snyder - Go programming language
  • Brendan Tracey - Go programming language
  • Elias Naur - Go on Mobile
  • Anthonios Partheniou - Google Cloud Datalab
  • Marcus Meissner - gPhoto2
  • Matt Butcher - Helm
  • Fernando Perez - Jupyter & IPython
  • Michelle Noorali - Kubernetes & Helm
  • Prosper Otemuyiwa - Laravel Hackathon Starter
  • Keith Busch - Linux kernel
  • Thomas Caswell - matplotlib
  • Tatsuhiro Tsujikawa - nghttp2
  • Anna Henningsen - Node.js
  • Charles Harris - NumPy
  • Jeff Reback - pandas
  • Ludovic Rousseau - PCSC-Lite, CCID
  • Matti Picus - PyPy
  • Salvatore Sanfilippo - Redis
  • Ralf Gommers - SciPy
  • Kevin O'Connor - SeaBIOS
  • Sam Aaron - Sonic Pi
  • Michael Tyson - The Amazing Audio Engine
  • Rob Landley - Toybox
  • Bin Meng - U-Boot
  • Ben Noordhuis - V8
  • Fatih Arslan - vim-go
  • Adam Treat - WebKit
  • Chris Dumez - WebKit
  • Sean Larkin - Webpack
  • Tobias Koppers - Webpack
  • Alexis La Goutte - Wireshark dissector for QUIC
Congratulations to all of the awardees, past and present! Thank you for your contributions.

By Helen Hu, Open Source Programs Office
Categories: Open Source

Dispatches from the latest Mercurial sprints

Fri, 03/24/2017 - 17:51
On March 10th-12th, the Mercurial project held one of its twice-a-year sprints in the Google Mountain View office. Mercurial is a distributed version control system, used by Google, W3C, OpenJDK and Mozilla among others. We had 40 developers in attendance, some from companies with large Mercurial deployments and some individual contributors who volunteer in their spare time.

One of the major themes we discussed was user-friendliness. Mercurial developers work hard to keep the command-line interface backwards compatible, but at the same time, we would like to make progress by smoothing out some rough edges. We discussed how we can provide a better user interface for users to opt-in to without breaking the backwards compatibility constraint. We also talked about how to make Mercurial’s Changeset Evolution feature easier to use.

We considered moving Mercurial past SHA1 for revision identification, to enhance security and integrity of Mercurial repositories in light of recent SHA1 exploits. A rough consensus on a plan started to emerge, and design docs should start to circulate in the next month or so.

We also talked about performance, including new storage layers that would scale more effectively and work better with clones that contain only part of a repository's history, a key requirement for Mercurial adoption in enterprise environments with large repositories, like Google's.

If you are interested in finding out more about Mercurial (or perhaps you’d like to contribute!) you can find our mailing list information here.

By Martin von Zweigbergk and Augie Fackler, Software Engineers
Categories: Open Source

An Upgrade to SyntaxNet, New Models and a Parsing Competition

Wed, 03/22/2017 - 16:43
At Google, we continuously improve the language understanding capabilities used in applications ranging from generation of email responses to translation. Last summer, we open-sourced SyntaxNet, a neural-network framework for analyzing and understanding the grammatical structure of sentences. Included in our release was Parsey McParseface, a state-of-the-art model that we had trained for analyzing English, followed quickly by a collection of pre-trained models for 40 additional languages, which we dubbed Parsey's Cousins. While we were excited to share our research and to provide these resources to the broader community, building machine learning systems that work well for languages other than English remains an ongoing challenge. We are excited to announce a few new research resources, available now, that address this problem.

SyntaxNet Upgrade
We are releasing a major upgrade to SyntaxNet. This upgrade incorporates nearly a year’s worth of our research on multilingual language understanding, and is available to anyone interested in building systems for processing and understanding text. At the core of the upgrade is a new technology that enables learning of richly layered representations of input sentences. More specifically, the upgrade extends TensorFlow to allow joint modeling of multiple levels of linguistic structure, and to allow neural-network architectures to be created dynamically during processing of a sentence or document.

Our upgrade makes it, for example, easy to build character-based models that learn to compose individual characters into words (e.g. ‘c-a-t’ spells ‘cat’). By doing so, the models can learn that words can be related to each other because they share common parts (e.g. ‘cats’ is the plural of ‘cat’ and shares the same stem; ‘wildcat’ is a type of ‘cat’). Parsey and Parsey’s Cousins, on the other hand, operated over sequences of words. As a result, they were forced to memorize words seen during training and relied mostly on the context to determine the grammatical function of previously unseen words.
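
To make the idea concrete, here is a toy sketch (ours, not SyntaxNet's actual architecture) of why composing characters helps: words built from shared parts end up with related representations, even if one of them was never seen during training.

import numpy as np

# Toy illustration only: compose per-character vectors into word vectors,
# so words that share parts ('cat', 'cats') get similar representations.
rng = np.random.RandomState(0)
char_vecs = {c: rng.randn(8) for c in 'abcdefghijklmnopqrstuvwxyz'}

def word_vec(word):
    # A sum is a crude stand-in for the learned composition function.
    return sum(char_vecs[c] for c in word)

def cosine(u, v):
    return u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(word_vec('cat'), word_vec('cats')))  # high: shared stem
print(cosine(word_vec('cat'), word_vec('dog')))   # lower: no shared parts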

As an example, consider the following (meaningless but grammatically correct) sentence:

“The gostak distims the doshes.”

This sentence was originally coined by Andrew Ingraham, who explained: “You do not know what this means; nor do I. But if we assume that it is English, we know that the doshes are distimmed by the gostak. We know too that one distimmer of doshes is a gostak.” Systematic patterns in morphology and syntax allow us to guess the grammatical function of words even when they are completely novel: we understand that ‘doshes’ is the plural of the noun ‘dosh’ (similar to the ‘cats’ example above) and that ‘distims’ is the third person singular of the verb ‘distim’. Based on this analysis we can then derive the overall structure of this sentence even though we have never seen the words before.

ParseySaurus
To showcase the new capabilities provided by our upgrade to SyntaxNet, we are releasing a set of new pretrained models called ParseySaurus. These models use the character-based input representation mentioned above and are thus much better at predicting the meaning of new words based both on their spelling and how they are used in context. The ParseySaurus models are far more accurate than Parsey’s Cousins (reducing errors by as much as 25%), particularly for morphologically-rich languages like Russian, or agglutinative languages like Turkish and Hungarian. In those languages there can be dozens of forms for each word and many of these forms might never be observed during training - even in a very large corpus.

Consider the following fictitious Russian sentence, where again the stems are meaningless, but the suffixes define an unambiguous interpretation of the sentence structure:
Even though our Russian ParseySaurus model has never seen these words, it can correctly analyze the sentence by inspecting the character sequences which constitute each word. In doing so, the system can determine many properties of the words (notice how many more morphological features there are here than in the English example). To see the sentence as ParseySaurus does, here is a visualization of how the model analyzes this sentence:
Each square represents one node in the neural network graph, and lines show the connections between them. The left-side “tail” of the graph shows the model consuming the input as one long string of characters. These are intermittently passed to the right side, where the rich web of connections shows the model composing words into phrases and producing a syntactic parse. Check out the full-size rendering here.

A Competition
You might be wondering whether character-based modeling is all we need or whether there are other techniques that might be important. SyntaxNet has lots more to offer, like beam search and different training objectives, but there are of course also many other possibilities. To find out what works well in practice, we are helping to co-organize, together with Charles University and other colleagues, a multilingual parsing competition at this year’s Conference on Computational Natural Language Learning (CoNLL), with the goal of building syntactic parsing systems that work well in real-world settings and for 45 different languages.

The competition is made possible by the Universal Dependencies (UD) initiative, whose goal is to develop cross-linguistically consistent treebanks. Because machine learned models can only be as good as the data that they have access to, we have been contributing data to UD since 2013. For the competition, we partnered with UD and DFKI to build a new multilingual evaluation set consisting of 1000 sentences that have been translated into 20+ different languages and annotated by linguists with parse trees. This evaluation set is the first of its kind (in the past, each language had its own independent evaluation set) and will enable more consistent cross-lingual comparisons. Because the sentences have the same meaning and have been annotated according to the same guidelines, we will be able to get closer to answering the question of which languages might be harder to parse.

We hope that the upgraded SyntaxNet framework and our pre-trained ParseySaurus models will inspire researchers to participate in the competition. We have additionally created a tutorial showing how to load a Docker image and train models on the Google Cloud Platform, to facilitate participation by smaller teams with limited resources. So, if you have an idea for making your own models with the SyntaxNet framework, sign up to compete! We believe that the configurations we are releasing are a good place to start, but we look forward to seeing how participants will be able to extend and improve these models or perhaps create better ones!

Thanks to everyone involved who made this competition happen, including our collaborators at UD-Pipe, who provide another baseline implementation to make it easy to enter the competition. Happy parsing from the main developers, Chris Alberti, Daniel Andor, Ivan Bogatyy, Mark Omernick, Zora Tung and Ji Ma!

By David Weiss and Slav Petrov, Research Scientists
Categories: Open Source

Google Summer of Code 2017 student applications are open!

Mon, 03/20/2017 - 17:30
Are you a university student looking to learn more about open source software development? Consider applying to Google Summer of Code (GSoC) for a chance to spend your break coding on an open source project.


For the 13th straight year GSoC will give students from around the world the opportunity to learn the ins and outs of open source software development while working from their home. Students will receive a stipend for their successful contributions to allow them to focus on their coding during the program.

Mentors are paired with the students to help address technical questions and to monitor their progress throughout the program. Former GSoC participants have told us that the real-world experience they’ve gained during the program has not only sharpened their technical skills, but has also boosted their confidence, broadened their professional network and enhanced their resumes.

Interested students can submit proposals on the program site now through Monday, April 3 at 16:00 UTC. The first step is to search through the 201 open source organizations and review the “Project ideas” for the organizations that appeal to you. Next, reach out to the organizations to introduce yourself and determine if your skills and interests are a good match with their organization.




Since spots are limited, we recommend writing a strong project proposal and submitting a draft early to receive feedback from the organization, which will help increase your chances of selection. Our Student Manual, written by former students and mentors, provides excellent advice to get you started with choosing an organization and crafting a great proposal.

For information throughout the application period and beyond, visit the Google Open Source Blog, join our Google Summer of Code discussion lists or join us on Internet Relay Chat (IRC) at #gsoc on Freenode. Be sure to read the Program Rules, Timeline and FAQ, all available on the program site, for more information about Google Summer of Code.

Good luck to all the open source coders who apply, and remember to submit your proposals early — you only have until Monday, April 3 at 16:00 UTC!

By Stephanie Taylor, Google Summer of Code Program Manager
Categories: Open Source

Announcing Guetzli: A New Open Source JPEG Encoder

Fri, 03/17/2017 - 21:59
Crossposted on the Google Research Blog

At Google, we care about giving users the best possible online experience, both through our own services and products and by contributing new tools and industry standards for use by the online community. That’s why we’re excited to announce Guetzli, a new open source algorithm that creates high quality JPEG images with file sizes 35% smaller than currently available methods, enabling webmasters to create webpages that can load faster and use even less data.

Guetzli [guɛtsli] — cookie in Swiss German — is a JPEG encoder for digital images and web graphics that can enable faster online experiences by producing smaller JPEG files while still maintaining compatibility with existing browsers, image processing applications and the JPEG standard. From a practical viewpoint this is very similar to our Zopfli algorithm, which produces smaller PNG and gzip files without needing to introduce a new format, and differs from the techniques used in RNN-based image compression, RAISR, and WebP, which all need client changes for compression gains at internet scale.

The visual quality of a JPEG image is directly correlated with its multi-stage compression process: color space transform, discrete cosine transform, and quantization. Guetzli specifically targets the quantization stage, where the tradeoff is direct: the more visual quality loss introduced, the smaller the resulting file. Guetzli strikes a balance between minimal loss and file size by employing a search algorithm that tries to bridge the gap between the psychovisual modeling implicit in the JPEG format and Guetzli’s own psychovisual model, which approximates color perception and visual masking in a more thorough and detailed way than is achievable with simpler color transforms and the discrete cosine transform. However, while Guetzli creates smaller image files, the tradeoff is that its search takes significantly longer to compress an image than currently available methods.
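
The quality/size tradeoff at the heart of that search is easy to observe with any baseline JPEG encoder. Below is a minimal sketch using Pillow (not Guetzli itself; the input filename is hypothetical): lowering the quality setting shrinks the file at the cost of visual fidelity, and Guetzli's contribution is choosing where that loss goes so it hurts least.

import io
from PIL import Image

# Sketch of the quality/size tradeoff, using Pillow's baseline JPEG
# encoder rather than Guetzli; 'input.png' is a hypothetical file.
img = Image.open('input.png').convert('RGB')
for quality in (95, 84, 60):
    buf = io.BytesIO()
    img.save(buf, format='JPEG', quality=quality)
    print('quality=%d -> %d bytes' % (quality, len(buf.getvalue())))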

Figure 1. A 16x16 pixel synthetic example of a phone line hanging against a blue sky — traditionally a case where JPEG compression algorithms suffer from artifacts. The uncompressed original is on the left. Guetzli (right) shows fewer ringing artifacts than libjpeg (middle) and has a smaller file size.

While Guetzli produces smaller image files without sacrificing quality, we also found in experiments where compressed file sizes were kept constant that human raters consistently preferred the images Guetzli produced over libjpeg images, even when the libjpeg files were the same size or slightly larger. We think this makes the slower compression a worthy tradeoff.
Figure 2. 20x24 pixel zoomed areas from a picture of a cat’s eye. The uncompressed original is on the left. Guetzli (right) shows fewer ringing artifacts than libjpeg (middle) without requiring a larger file size.

It is our hope that webmasters and graphic designers will find Guetzli useful and apply it to their photographic content, making users’ experience smoother on image-heavy websites in addition to reducing load times and bandwidth costs for mobile users. Lastly, we hope that the new explicitly psychovisual approach in Guetzli will inspire further image and video compression research.
By Robert Obryk and Jyrki Alakuijala, Software Engineers, Google Research Europe
Categories: Open Source

Announcing Google Radio PHY Test, aka “Graphyte”, as part of the Chromium Project

Fri, 03/17/2017 - 18:00
With many different Chromebook models for sale from several different OEMs, the Chrome OS Factory team interfaces with many different Contract Manufacturers (CMs), Original Device Manufacturers (ODMs), and factory teams. The Google Chromium OS Factory Software Platform, a suite of factory tools provided to Chrome OS partners, allows any factory team to quickly bring up a Chrome OS manufacturing test line.

Today, we are announcing that this platform has been extended to remove the friction of bringing up wireless verification test systems with a component called Google Radio PHY Test or “Graphyte.” Graphyte is an open source software framework that can be used and extended by the wireless ecosystem of chipset companies, test solution providers, and wireless device manufacturers, as opposed to the traditional approach of vendor-specific solutions. It is developed in Python and capable of running on Linux and Chrome OS test stations with an initial focus on Wi-Fi, Bluetooth, and 802.15.4 physical layer verification.

Verifying that a wireless device is working properly requires chipset- and instrument-specific software which coordinate transmitting and measuring power and signal quality across channels, bandwidths, and data rates. Graphyte provides high-level API abstractions for controlling wireless chipsets and test instruments, allowing anyone to develop a “plugin” for a given chipset or instrument.
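
To give a feel for the plugin model, here is a purely hypothetical sketch; the class and method names below are ours for illustration and are not Graphyte's actual plugin API (see the Graphyte User Manual for the real interface). A chipset plugin is essentially an adapter that maps high-level calls onto a vendor's control interface.

# Hypothetical sketch of a device plugin, for illustration only;
# these names are not Graphyte's actual plugin API.
class ExampleWifiPlugin(object):
    def initialize(self):
        """Open a control connection to the device under test."""

    def tx_start(self, channel, bandwidth_mhz, data_rate, power_dbm):
        """Start transmitting a test signal with the given parameters."""

    def tx_stop(self):
        """Stop transmitting."""

    def terminate(self):
        """Release the device."""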

Graphyte architecture.
We’ve worked closely with industry leaders like Intel and LitePoint to ensure the Graphyte APIs have the right level of abstraction, and it is already being used in production on multiple manufacturing lines and several different products.

To get started, use Git to clone our repository. You can learn more by reading the Graphyte User Manual and checking out the example of how to use Graphyte in a real test. You can get involved by joining our mailing list. If you’d like to contribute, please follow the Chromium OS Developer Guide.

To get started with the LitePoint Graphyte plugin, please contact LitePoint directly. To get started with the Intel Graphyte plugin, please contact Intel directly.

Happy testing!

By Kurt Williams, Technical Program Manager
Categories: Open Source

Getting ready for Google Summer of Code 2017

Fri, 03/10/2017 - 17:48
Spring is just around the corner here in the Northern Hemisphere and Google Summer of Code is fast approaching. If you are a student interested in participating this year, now is the time to prepare -- read on for tips on how to get ready.

This year we’ve accepted 201 open source organizations into the program, nearly 40 of which are new to the program. The organizations cover a wide range of topics including (but certainly not limited to!):

  • Operating systems
  • Web application frameworks
  • Healthcare and bioinformatics
  • Music and graphic design
  • Machine learning
  • Robotics
  • Security




How should you prepare for Google Summer of Code?

While student applications don’t open until March 20th at 16:00 UTC, you need to decide which projects you’re interested in and what you’ll propose. You should also communicate with those projects to learn more before you apply.

Start by looking at the list of participating projects and organizations. You can explore by searching for specific names or technologies, or filtering by topics you are interested in. Follow the “Learn More” link through to each organization’s page for additional information.

Once you’ve identified the organizations that you’re interested in, take a look at their ideas list to get a sense of the specific projects you could work on. Typically, you will choose a project from that list and write a proposal based on that idea, but you could also propose something that’s not on that list.

You should reach out to the organizations after you’ve decided what you want to work on. Doing this can make the difference between a good application and a great application.

Whatever you do, don’t wait until March 20th to begin preparing for Google Summer of Code! History has shown that students who reach out to organizations before the start of the application period have a higher chance of being accepted into the program, as they have had more time to talk to the organizations and understand what they are looking for in a project.

If you have any questions along the way, take a look at the Student Manual, FAQ and Timeline. If you can’t find the answer to your question, try taking your question to the mailing list.

By Josh Simmons, Open Source Programs Office
Categories: Open Source

Another option for file sharing

Wed, 03/08/2017 - 17:59
Originally posted on the Google Security Blog

Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new open source project Upspin, we aim to improve the situation by providing a global name space to name all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.

Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.

File names begin with the user's email address followed by a slash-separated Unix-like path name:

ann@example.com/dir/file.
Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.

If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,

read: joe@here.com, mae@there.com
allows Joe and Mae to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access is enforced end-to-end cryptographically, so cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.
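
An Access file can grant rights beyond read. As a sketch, following the right-per-line format shown above (the rights names here are taken from the Upspin documentation), a directory shared for reading, writing and listing might contain:

read: joe@here.com, mae@there.com
write: mae@there.com
list: joe@here.com, mae@there.com

Each line names a right and the users granted it; rights such as write, list, create, and delete follow the same pattern as read.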

Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing.

This is an open source contribution, not a Google product. We have not yet integrated with the Key Transparency server, though we expect to eventually, and for now use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.

It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the GitHub repository at Upspin.

By Andrew Gerrand, Eric Grosse, Rob Pike, Eduardo Pinheiro and Dave Presotto, Google Software Engineers
Categories: Open Source

By maintainers, for maintainers: Wontfix_Cabal

Mon, 03/06/2017 - 19:00
The Google Open Source Programs Office likes to highlight events we support, organize, or speak at. In this case, Google’s own Jess Frazelle was responsible for running a unique event for open source maintainers.

This year I helped organize the inaugural Wontfix_Cabal. The conference was organized by open source software maintainers, for open source software maintainers. Our initial concept was an unconference where attendees could discuss topics candidly with their peers from other open source communities.

The idea for the event stemmed from the response to a blog post I published about closing pull requests. The response was overwhelming, with many maintainers commiserating and sharing lessons they had learned. It seemed like we could all learn a lot from our peers in other projects -- if we had the space to do so -- and it was clear that people needed a place to vent.

Major thanks to Katrina Owen and Brandon Keepers from GitHub who jumped right in and provided the venue we needed to make this happen. Without their support this would’ve never become a reality!

It was an excellent first event and the topics discussed were wide ranging, including:
  • How to deal with unmaintained projects
  • Collecting metrics to judge project health
  • Helping newcomers
  • Dealing with backlogs
  • Coping with, and minimizing, toxic behavior in our communities

"Never have I seen so many open source maintainers in one place. Thanks @wontfix_, this is amazing pic.twitter.com/GdYXUjZds3" — Gregor (@gr2m), February 15, 2017
The discussion around helping newcomers focused on creating communities with welcoming and productive cultures right from the start. I was fascinated to learn that some projects pre-fill issues before going public so as to set the tone for the future of the project. Another good practice is clearly defining how one becomes a maintainer or gets commit access. There should be clear rules in place so people know what they have to do to succeed.

Another discussion I really liked focused on “saying no.” Close fast and close early was a key takeaway. There’s no sense in letting a contribution sit waiting when you know it will never be accepted. Multiple projects found that having a bot give the hard news was always better than having the maintainer do it. This way it is not personal, just a regular part of the process.

One theme seen in multiple sessions: “Being kind is not the same as being nice.” The distinction here is that being nice comes from a place of fear and leads people to bend over backwards just to please. Being kind comes from a place of strength, from doing the right thing.

Summaries of many of the discussions have been added to the GitHub repo if you would like to read more.

After the event concluded many maintainers got right to work, putting what they had learned into practice. For instance, Rust got help from the Google open source fuzzing team.

"Flurry of internal emails following up on ideas from @wontfix_: all sent! Now it's time to start on some PRs." — Rainer Sigwald (@Tashkant), February 22, 2017
Our goal was to put together a community of maintainers that could support and learn from each other. When I saw Linux kernel maintainers talking to people who work on Node and JavaScript, I knew we had achieved that goal. Laura Abbott, one of those kernel developers, wrote a blog post about the experience.

Not only was the event useful, it was also a lot of fun. Meeting maintainers, people who care a great deal about open source software, from such a diverse group of projects was great. Overall, I think our initial run was a success! Follow us on Twitter to find out about future events.

By Jess Frazelle, Software Engineer
Categories: Open Source

Introducing Python Fire, a library for automatically generating command line interfaces

Thu, 03/02/2017 - 19:00
Today we are pleased to announce the open-sourcing of Python Fire. Python Fire generates command line interfaces (CLIs) from any Python code. Simply call the Fire function in any Python program to automatically turn that program into a CLI. The library is available from pypi via `pip install fire`, and the source is available on GitHub.

Python Fire will automatically turn your code into a CLI without you needing to do any additional work. You don't have to define arguments, set up help information, or write a main function that defines how your code is run. Instead, you simply call the `Fire` function from your main module, and Python Fire takes care of the rest. It uses inspection to turn whatever Python object you give it -- whether it's a class, an object, a dictionary, a function, or even a whole module -- into a command line interface, complete with tab completion and documentation, and the CLI will stay up-to-date even as the code changes.

To illustrate this, let's look at a simple example.

#!/usr/bin/env python
import fire

class Example(object):
  def hello(self, name='world'):
    """Says hello to the specified name."""
    return 'Hello {name}!'.format(name=name)

def main():
  fire.Fire(Example)

if __name__ == '__main__':
  main()

When the Fire function is run, our command will be executed. Just by calling Fire, we can now use the Example class as if it were a command line utility.

$ ./example.py hello
Hello world!
$ ./example.py hello David
Hello David!
$ ./example.py hello --name=Google
Hello Google!

Of course, you can continue to use this module like an ordinary Python library, enabling you to use the exact same code both from Bash and Python. If you're writing a Python library, then you no longer need to update your main method or client when experimenting with it; instead you can simply run the piece of your library that you're experimenting with from the command line. Even as the library changes, the command line tool stays up to date.
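
For example (assuming the module above is saved as example.py, as the shell examples suggest), the same class remains usable as a plain Python library:

# Using the Example class from Python rather than the command line;
# assumes the module above was saved as example.py.
from example import Example

print(Example().hello('David'))  # prints: Hello David!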

At Google, engineers use Python Fire to generate command line tools from Python libraries. We have an image manipulation tool built by using Fire with the Python Imaging Library, PIL. In Google Brain, we use an experiment management tool built with Fire, allowing us to manage experiments equally well from Python or from Bash.

Every Fire CLI comes with an interactive mode. Run the CLI with the `--interactive` flag to launch an IPython REPL with the result of your command, as well as other useful variables already defined and ready to use. Be sure to check out Python Fire's documentation for more on this and the other useful features Fire provides.

Between Python Fire's simplicity, generality, and power, we hope you find it a useful library for your own projects.

By David Bieber, Software Engineer on Google Brain
Categories: Open Source

Operation Rosehub

Thu, 03/02/2017 - 18:19
Twelve months ago, a team of 50 Google employees used GitHub to patch the “Apache Commons Collections Deserialization Vulnerability” (or the “Mad Gadget vulnerability” as we call it) in thousands of open source projects. We recently learned why our efforts were so important.

The San Francisco Municipal Transportation Agency had their software systems encrypted and shut down by an avaricious hacker. The hacker used that very same vulnerability, according to reports of the incident. He demanded a Bitcoin ransom from the government. He threatened to leak the private data he stole from San Francisco’s citizens if his ransom wasn’t paid. This was an attack on our most critical public infrastructure; infrastructure which underpins the economy of a major US city.

Mad Gadget is one of the most pernicious vulnerabilities we’ve seen. By merely existing on the Java classpath, seven “gadget” classes in Apache Commons Collections (versions 3.0, 3.1, 3.2, 3.2.1, and 4.0) make object deserialization for the entire JVM process Turing complete with an exec function. Since many business applications use object deserialization to send messages across the network, it would be like hiring a bank teller who was trained to hand over all the money in the vault if asked to do so politely, and then entrusting that teller with the key. The only thing that would keep a bank safe in such a circumstance is that most people wouldn’t consider asking such a question.

The announcement of Mad Gadget triggered a Cambrian explosion of enterprise security disclosures. Oracle, Cisco, Red Hat, Jenkins, VMware, IBM, Intel, Adobe, HP and SolarWinds all formally disclosed that they had been impacted by this issue.

But unlike big businesses, open source projects don’t have people on staff to read security advisories all day and instead rely on volunteers to keep them informed. It wasn’t until five months later that a Google employee noticed several prominent open source libraries had not yet heard the bad news. Those projects were still depending on vulnerable versions of Collections. So back in March 2016, she started sending pull requests to those projects updating their code. This was easy to do and usually only required a single line change. With the help of GitHub’s GUI, any individual can make such changes to anyone’s codebase in under a minute. Given how relatively easy the changes seemed, she recruited more colleagues at Google to help the cause. As more work was completed, it was apparent that the problem was bigger than we had initially realized.

For instance, when patching projects like the Spring Framework, it was clear we weren’t just patching Spring but also patching every project that depended on Spring. We were furthermore patching all the projects that depended on those projects, and so forth. But even once those users upgraded, they could still be impacted by other dependencies introducing the vulnerable version of Collections. To make matters worse, build systems like Maven cannot be relied upon to evict old versions.

This was when we realized the particularly viral nature of Mad Gadget. We came to the conclusion that, in order to improve the health of the global software ecosystem, the old version of Collections should be removed from as many codebases as possible.

We used BigQuery to assess the damage. It allowed us to write a SQL query with regular expressions that searched all the public code on GitHub in a couple of minutes.


#standardSQL
SELECT pop, repo_name, path
FROM (
  SELECT id, repo_name, path
  FROM `bigquery-public-data.github_repos.files` AS files
  WHERE path LIKE '%pom.xml' AND
    EXISTS (
      SELECT 1
      FROM `bigquery-public-data.github_repos.contents`
      WHERE NOT binary AND
        content LIKE '%commons-collections<%' AND
        content LIKE '%>3.2.1<%' AND
        id = files.id
    )
)
JOIN (
  SELECT
    difference.new_sha1 AS id,
    ARRAY_LENGTH(repo_name) AS pop
  FROM `bigquery-public-data.github_repos.commits`
  CROSS JOIN UNNEST(difference) AS difference
)
USING (id)
ORDER BY pop DESC;
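
As a sketch (our addition, not from the original effort, which used the BigQuery console), a query like this can also be run programmatically with the google-cloud-bigquery Python client:

# Hedged sketch: run the search above with the google-cloud-bigquery
# client; requires Google Cloud credentials and
# `pip install google-cloud-bigquery`.
from google.cloud import bigquery

QUERY = open('mad_gadget.sql').read()  # the SQL shown above, saved to a file

client = bigquery.Client()
for row in client.query(QUERY).result():  # waits for the query to finish
    print(row['pop'], row['repo_name'], row['path'])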


We were alarmed when we discovered 2,600 unique open source projects that still directly referenced insecure versions of Collections. Internally at Google, we have a tool called Rosie that allows developers to make large scale changes to codebases owned by hundreds of different teams. But no such tool existed for GitHub. So we recruited even more engineers from around Google to patch the world’s code the hard way.

Ultimately, security rests within the hands of each developer. However we felt that the severity of the vulnerability and its presence in thousands of open source projects were extenuating circumstances. We recognized that the industry best practices had failed. Action was needed to keep the open source community safe. So rather than simply posting a security advisory asking everyone to address the vulnerability, we formed a task force to update their code for them. That initiative was called Operation Rosehub.

Operation Rosehub was organized from the bottom-up on company-wide mailing lists. Employees volunteered and patches were sent out in a matter of weeks. There was no mandate from management to do this—yet management was supportive. They were happy to see employees spontaneously self-organizing to put their 20% time to good use. Some of those managers even participated themselves.

Patches were sent to many projects, avoiding threats to public security for years to come. However, we were only able to patch open source projects on GitHub that directly referenced vulnerable versions of Collections. Perhaps if the SF Muni software systems had been open source, we would have been able to bring Mad Gadget to their attention too.

Going forward, we believe the best thing to do is to build awareness. We want to draw attention to the fact that the tools now exist for fixing software on a massive scale, and that it works best when that software is open.

In this case, the open source dataset on BigQuery allowed us to identify projects that still needed to be patched. When a vulnerability is discovered, any motivated team or individual who wants to help improve the security of our infrastructure can use these tools to do just that.

By Justine Tunney, Software Engineer on TensorFlow

We’d like to recognize the following people for their contributions to Operation Rosehub: Laetitia Baudoin, Chris Blume, Sven Blumenstein, James Bogosian, Phil Bordelon, Andrew Brampton, Joshua Bruning, Sergio Campamá, Kasey Carrothers, Martin Cochran, Ian Flanigan, Frank Fort, Joshua French, Christian Gils, Christian Gruber, Erik Haugen, Andrew Heiderscheit, David Kernan, Glenn Lewis, Roberto Lublinerman, Stefano Maggiolo, Remigiusz Modrzejewski, Kristian Monsen, Will Morrison, Bharadwaj Parthasarathy, Shawn Pearce, Sebastian Porst, Rodrigo Queiro, Parth Shukla, Max Sills, Josh Simmons, Stephan Somogyi, Benjamin Specht, Ben Stewart, Pascal Terjan, Justine Tunney, Daniel Van Derveer, Shannon VanWagner, and Jennifer Winer.
Categories: Open Source

Introducing the Google Summer of Code 2017 Mentor Organizations

Mon, 02/27/2017 - 18:19
Today’s the day! We are excited to announce the mentor organizations accepted for this year’s Google Summer of Code (GSoC). Every year we receive more applications than we can accept and 2017 was no exception. After carefully reviewing almost 400 applications, we have chosen 201 open source projects and organizations, 18% of which are new to the program. Please see the program website for a complete list of the accepted organizations.

Interested in participating as a student? We will begin accepting student applications on Monday, March 20, 2017 at 16:00 UTC and the deadline is Monday, April 3, 2017 at 16:00 UTC.

Over the next three weeks, students who’d like to participate in Google Summer of Code should research the organizations and their Ideas Lists to explore which organizations are a good fit for their interests and skills and learn how they might contribute. Some of the most successful proposals have been completely new ideas submitted by students, so if you don’t see a project that appeals to you, don’t hesitate to suggest a new idea to the organization! There are contacts listed for each organization on their Ideas List — students should contact the organization directly to discuss their ideas. We also strongly encourage all interested students to reach out to and become familiar with the organization before applying.

You can find more information on our website, including a full timeline of important dates and program milestones. We also highly recommend all interested students read the Student Manual, FAQ and the Program Rules.

Congratulations to all of our mentor organizations! We look forward to working with all of you during Google Summer of Code 2017.

By Josh Simmons, Open Source Programs Office

Categories: Open Source

Google Code-in 2016: even more young developers

Thu, 02/23/2017 - 20:04
Google Code-in (GCI), our contest introducing 13-17 year olds to open source software development, wrapped up last month with our largest contest to date: 1,340 students from 62 countries completed an impressive 6,379 tasks! Working with 17 open source organizations, students wrote code, created and edited documentation, designed UI elements and logos, conducted research, developed screencasts and videos teaching others about open source software, and helped find (and fix!) hundreds of bugs.
General statistics
  • 56.4% of students completed three or more tasks (earning themselves a fun Google Code-in 2016 t-shirt)
  • 21% of students were female
  • 30% of the participants from the USA were female
  • This was the first Google Code-in for 1,143 students (85.3%)
Student age

Participating schools

Students from 550 schools competed in this year’s contest. While Google Code-in is a program for individuals, every year some schools emerge as hot spots of participation. This year, these five schools had the most students taking part:

  • Dunman High School (Singapore) - 185 participants
  • Sacred Heart Convent Senior Secondary School (India) - 29 participants
  • Jayshree Periwal International School (India) - 26 participants
  • Colegiul National Aurel Vlaicu (Romania) - 23 participants
  • Ly Tu Trong Specialized High Schools (Vietnam) - 14 participants
Countries

We are pleased to have a new country participating in GCI this year: Mauritius! The chart below displays the ten countries with the most students completing at least one task.




In June we will welcome all 34 grand prize winners (along with a mentor from each participating organization) for a fun-filled trip to the Bay Area. The trip will include meeting with Google engineers to hear about new and exciting projects, tours of the Google campuses and a fun day exploring San Francisco.

Keep an eye on the Google Open Source Blog in coming weeks for more stats on Google Code-in 2016, plus posts from the mentoring organizations describing some of their experiences with the contests and the work done by “their” students.

We are thrilled that Google Code-in was so popular this year. We hope to continue to grow and expand this contest in the future to introduce even more teenagers to the world of open source software development.

By Stephanie Taylor, Google Code-in Program Manager
Categories: Open Source

Announcing TensorFlow 1.0

Thu, 02/16/2017 - 18:34
Originally posted on the Google Developer Blog

In just its first year, TensorFlow has helped researchers, engineers, artists, students, and many others make progress with everything from language translation to early detection of skin cancer and preventing blindness in diabetics. We're excited to see people using TensorFlow in over 6000 open source repositories online.

Today, as part of the first annual TensorFlow Developer Summit, hosted in Mountain View and livestreamed around the world, we're announcing TensorFlow 1.0:

It's faster: TensorFlow 1.0 is incredibly fast! XLA lays the groundwork for even more performance improvements in the future, and tensorflow.org now includes tips & tricks for tuning your models to achieve maximum speed. We'll soon publish updated implementations of several popular models to show how to take full advantage of TensorFlow 1.0 - including a 7.3x speedup on 8 GPUs for Inception v3 and 58x speedup for distributed Inception v3 training on 64 GPUs!

It's more flexible: TensorFlow 1.0 introduces a high-level API for TensorFlow, with tf.layers, tf.metrics, and tf.losses modules. We've also announced the inclusion of a new tf.keras module that provides full compatibility with Keras, another popular high-level neural networks library.
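
As a minimal sketch of the new high-level API (our example, assuming the TensorFlow 1.x Python API), a small classifier can be expressed directly with tf.layers:

import tensorflow as tf  # TensorFlow 1.x API assumed

# Sketch of the tf.layers high-level API: a two-layer classifier.
images = tf.placeholder(tf.float32, shape=[None, 784])
hidden = tf.layers.dense(images, units=128, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, units=10)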

It's more production-ready than ever: TensorFlow 1.0 promises Python API stability (details here), making it easier to pick up new features without worrying about breaking your existing code.

Other highlights from TensorFlow 1.0:
  • Python APIs have been changed to resemble NumPy more closely. For this and other backwards-incompatible changes made to support API stability going forward, please use our handy migration guide and conversion script.
  • Experimental APIs for Java and Go
  • Higher-level API modules tf.layers, tf.metrics, and tf.losses - brought over from tf.contrib.learn after incorporating skflow and TF Slim
  • Experimental release of XLA, a domain-specific compiler for TensorFlow graphs, that targets CPUs and GPUs. XLA is rapidly evolving - expect to see more progress in upcoming releases.
  • Introduction of the TensorFlow Debugger (tfdbg), a command-line interface and API for debugging live TensorFlow programs.
  • New Android demos for object detection and localization, and camera-based image stylization.
  • Installation improvements: Python 3 docker images have been added, and TensorFlow's pip packages are now PyPI compliant. This means TensorFlow can now be installed with a simple invocation of pip install tensorflow.
We're thrilled to see the pace of development in the TensorFlow community around the world. To hear more about TensorFlow 1.0 and how it's being used, you can watch the TensorFlow Developer Summit talks on YouTube, covering recent updates from higher-level APIs to TensorFlow on mobile to our new XLA compiler, as well as the exciting ways that TensorFlow is being used:

Click here for a link to the livestream and video playlist (individual talks will be posted online later in the day).

The TensorFlow ecosystem continues to grow with new techniques like Fold for dynamic batching and tools like the Embedding Projector, along with updates to our existing tools like TensorFlow Serving. We're incredibly grateful to the community of contributors, educators, and researchers who have made advances in deep learning available to everyone. We look forward to working with you on forums like GitHub issues, Stack Overflow, @TensorFlow, the discuss@tensorflow.org group, and at future events.
By Amy McDonald Sandjideh, Technical Program Manager, TensorFlow

Categories: Open Source

Open-sourcing Google Earth Enterprise

Fri, 02/10/2017 - 21:09
(originally posted on the Geo Developers blog)

We are excited to announce that we are open-sourcing Google Earth Enterprise (GEE), the enterprise product that allows developers to build and host their own private maps and 3D globes. With this release, GEE Fusion, GEE Server, and GEE Portable Server source code (all 470,000+ lines!) will be published on GitHub under the Apache2 license in March.
Originally launched in 2006, Google Earth Enterprise provides customers the ability to build and host private, on-premise versions of Google Earth and Google Maps. In March 2015, we announced the deprecation of the product and the end of all sales. To provide ample time for customers to transition, we have provided a two year maintenance period ending on March 22, 2017. During this maintenance period, product updates have been regularly shipped and technical support has been available to licensed customers.

Feedback is important to us and we’ve heard from our customers that GEE remains in-use in mission-critical applications. Many customers have not transitioned to other technologies. Open-sourcing GEE allows our customer community to continue to improve and evolve the project in perpetuity. Note that the implementations for Google Earth Enterprise Client, Google Maps JavaScript® API V3 and Google Earth API will not be open sourced. The Enterprise Client will continue to be made available and updated. However, since GEE Fusion and GEE Server are being open-sourced, the imagery and terrain quadtree implementations used in these products will allow third-party developers to build viewers that can consume GEE Server Databases.

We’re thankful for the help of our GEE partners in preparing the codebase to be migrated to GitHub. It’s a lot of work and we cannot do it without them. It is our hope that their passion for GEE and GEE customers will serve to lead the project into its next chapter.

Looking forward, GEE customers can use Google Cloud Platform (GCP) instead of legacy on-premises enterprise servers to run their GEE instances. For many customers, GCP provides a scalable and affordable infrastructure as a service where they can securely run GEE. Other GEE customers will be able to continue to operate the software in disconnected environments. However, we believe that the advantages of incorporating even some of the workloads on GCP will become apparent (such as processing large imagery or terrain assets on GCP that can be downloaded and brought to internal networks, or standing up user-facing Portable Globe Factories).

Moreover, GCP is increasingly used as a source for geospatial data. Google’s Earth Engine has made available over a petabyte of raster datasets which are readily accessible and available to the public on Google Cloud Storage. Additionally, Google uses Cloud Storage to provide data to customers who purchase Google Imagery today. Having access to massive amounts of geospatial data, on the same platform as your flexible compute and storage, makes generating high quality Google Earth Enterprise Databases and Portables easier and faster than ever.

We will be sharing a series of white papers and other technical resources to make it as frictionless as possible to get open source GEE up and running on Google Cloud Platform. We are excited about the possibilities that open-sourcing enables, and we trust this is good news for our community. We will be sharing more information when we launch the code in March on GitHub. For general product information, visit the Google Earth Enterprise Help Center. Review the essential and advanced training for how to use Google Earth Enterprise, or learn more about the benefits of Google Cloud Platform.

Posted by Avnish Bhatnagar, Senior Technical Solutions Engineer, Google Cloud
Categories: Open Source

Google Summer of Code 2016 wrap-up: LabLua

Wed, 02/08/2017 - 18:00
This is the final guest post from the students, mentors and organization administrators that participated in Google Summer of Code (GSoC) 2016. We’ve seen recaps of student work and lessons learned; you can check out the rest of the series as we gear up for this year’s program.


LabLua is a lab at PUC-Rio dedicated to research on programming languages, with emphasis on the Lua language. Lua is a powerful, fast, lightweight, embeddable scripting language that is used in many industrial applications, and on many embedded systems and games.
We were very happy to participate in Google Summer of Code (GSoC) for the third time, and to mentor eight fine students that all completed their projects successfully. We thank them, and Google, for this extraordinary contribution to our research and development work.
Here is a brief summary of this year's projects:
Next Generation of the LuaRocks test suite - Robert Karasek

LuaRocks is the package manager for Lua modules. Its test suite was implemented as a big shell script that performed only black-box testing and ran only on Linux. The goal for this project was to port the test suite to Lua, improving its portability and allowing more types of tests so we could improve test coverage.
Robert ported the test suite to Lua using Busted. His new test suite, now merged into LuaRocks, runs on Linux and Mac OS X, accessible via Travis CI, as well as Windows, accessible via AppVeyor. 
This was a welcome addition, bringing greater confidence to developers. Robert improved the checks in existing tests and wrote many new ones, including a new mock-server for testing a client API for uploading packages to the repository.
Typed Lua Typechecker - Tomasz Dyczek 
Typed Lua provides static type checking for the Lua language. Typed Lua extends the syntax of Lua 5.3 to introduce type annotations, and performs local type inference for more precise detection of unannotated expressions.
Tomasz implemented the core of Typed Lua in Haskell. Tomasz's implementation parses code written in a syntax close to the abstract syntax of Typed Lua, then type checks the generated AST. Besides providing support for testing and reasoning about new features, Tomasz's typechecker can also be used to validate tests to be included in Typed Lua's test suite.
Classes and Generics for Typed Lua - Kevin Clancy
Kevin worked on the implementation of a class system for Typed Lua. He also added parametric polymorphism (generics) for classes and existing Typed Lua types, such as functions and tables.
Kevin's work currently lives in its own branch, but will be merged into the main branch soon. Meanwhile, Kevin has written a detailed post explaining all the features he implemented.
Improving Error Reporting in PEG Parsers - Matthew Allen Go 
LPegLabel is an extension of LPeg, a pattern matching tool for Lua, based on Parsing Expression Grammars (PEGs). LPegLabel supports labeled failures, a facility that improves error reporting and recovery for PEG-based parsers.
The goal of this project was to use LPegLabel to write parsers with good error reporting. These parsers could then be used by the Lua community and also serve as a guide for LPegLabel users. Because LPegLabel is a young tool, another important contribution was to improve the tool's usability.
Matthew achieved both goals. He developed a parser for Lua 5.3, which has been incorporated into the new release of lua-parser (1.0.0), and improved LPegLabel’s usability with work on its API and documentation.
Improving elasticsearch-lua tests and build - Dhaval Kapil
Elasticsearch is a distributed and scalable search engine written in Java that offers a REST API accessed through JSON. During GSoC 2015, Dhaval implemented elasticsearch-lua, a client for the Lua language following a model similar to clients written in Python and PHP.
During GSoC 2016, Dhaval worked on improving elasticsearch-lua. He added a test suite, documented the entire codebase, and updated the current client to work with the newest version of Elasticsearch.
Dhaval went above and beyond, creating a new library called luaver. This work was motivated by having to frequently switch between different versions of Lua while developing the test suite. A full blog post about his project can be found here.
Admin Center and Elasticsearch integration for Sailor - Nikhil Ramesh 
Sailor is a web framework with a model-view-controller (MVC) architecture. Like other web frameworks, such as Ruby on Rails and Django, it is designed to make development faster by making some assumptions and conventions and encouraging principles like Don’t Repeat Yourself (DRY).
Nikhil focused on extending Sailor. The first feature he worked on was an Admin Center, which is a web interface for configuring an application. He also integrated Sailor and elasticsearch-lua, allowing Elasticsearch indexes to be stored as Sailor Models. His work is currently pending as a pull request and will soon be merged.
Extending the online tutorial of Céu with Emscripten and SDL - Margarit Vicentiu
Céu is a language for developing reactive applications such as video games and embedded systems. Its compiler generates output in plain C to integrate easily with the underlying platform (e.g. Arduino, SDL). For this project, we wanted to integrate Céu with Emscripten in order to run applications in a web browser.
Vicentiu started with Céu’s online tutorial, which is a server-side application: the user types code in a text area and hits the send button; the server receives the code, executes it, and sends the output back to the user. During the summer, Vicentiu made most of the examples compile with Emscripten and run in real-time on the user’s screen.
Our next goal is to make the graphical examples with user interactions also work in the browser, and Vicentiu plans to continue contributing to the project to achieve this goal.
An automatic generator of WSDL documents for LuaSOAP - Victor Dias
LuaSOAP is a library for working with the Simple Object Access Protocol (SOAP). WSDL is an XML format for describing network services; it is used to describe operations, messages and types offered by Web Services.
This summer Victor extended LuaSOAP's WSDL support by building a software layer for the automatic generation of WSDL documents. This new layer eases the description of most WSDL "bureaucracy" -- types, operations, ports, messages -- which have no counterparts in Lua. He also improved the test suite and the documentation. Victor's work will be integrated into the next version of LuaSOAP.
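To illustrate the idea only (this is not LuaSOAP's actual API), such a layer might let a developer describe a service as a plain Lua table and derive the WSDL sections from it:

-- Hypothetical service description; LuaSOAP's real interface may differ.
local service = {
  name       = "TemperatureService",
  namespace  = "http://example.com/temperature",
  operations = {
    getTemperature = {
      input  = { city = "string" },      -- message parts and XSD types
      output = { celsius = "double" },
    },
  },
}

-- A generator in the spirit of Victor's layer would walk this table and
-- emit the corresponding <types>, <message>, <portType> and <binding>
-- sections, so none of that boilerplate is written by hand.
-- local wsdl = generate_wsdl(service)   -- generate_wsdl is hypothetical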
By Ana Lúcia de Moura, Organization Administrator for LabLua
Categories: Open Source

Google Summer of Code 2016 wrap-up: CloudCV

Mon, 02/06/2017 - 18:00
This guest post is part of our ongoing series of posts from the students, mentors and organization administrators who participated in Google Summer of Code (GSoC), a program which gets university students contributing to open source software.

Google Summer of Code 2016 was a memorable one for CloudCV. Despite being a relatively “young” mentor organization (2016 was just our second year), we saw many excellent applicants who put a tremendous amount of effort into their proposals and ramp-up tasks. It was difficult to choose!

CloudCV began in the summer of 2014 as a research project within the Machine Learning and Perception Lab at Virginia Tech, with the ambitious goal of democratizing computer vision and machine learning. We’re run exclusively by students and are working to enable developers, researchers, and fellow students to leverage artificial intelligence technology as a service and to share state-of-the-art algorithms with the research community.

In line with this goal, we decided to build two tools that cater to computer vision researchers and hobbyists alike: CloudCV-fy your code and CloudCV-IDE. Though building two new platforms from the ground up was going to be challenging, our students’ motivation was overwhelming and their performance surpassed all expectations. We even demonstrated their work at CVPR 2016, a top-tier computer vision conference!

CloudCV-fy

A recurring use case for computer vision researchers, and many others, is to build a web-based demo and REST API to demonstrate the capabilities of their creations to the world. But web development involves writing hundreds of lines of additional code across multiple languages (HTML, CSS, JavaScript, etc.), which takes time away from research.


Our first student, Ashish Chaudhary, took on this problem by building CloudCV-fy. Over many iterations of design and development, Ashish delivered a tool that allows a user to simply write lightweight wrappers around their machine learning model/library and be done. CloudCV-fy automatically builds web-based interactive demos for them -- no need to tinker with HTML, CSS or JavaScript. Code to demo. Done.

The demo can be hosted on our servers, the user’s own server or any third party cloud service. As a result of this, researchers can focus on what they do best: designing and training models. CloudCV handles the rest. You can learn more in the write-up Ashish did on his blog.

CloudCV-IDE

There has been an explosion in the number of deep learning frameworks and it is difficult for researchers to keep up with all the latest tools. CloudCV-IDE, built by student Gaurav Gupta, addresses this by allowing a user to build a deep learning network with a drag-and-drop interface, then export to the deep learning framework of their choice (Caffe, TensorFlow, etc).

Gaurav also added support for importing model configuration files so that any existing architecture can be visualized, one of the first attempts at such a tool.



By the end of the summer, Gaurav delivered a great UI to visualize models with robust support for Caffe and TensorFlow back-ends. This was a successful start that we plan to build on by supporting more frameworks and facilitating collaborative building of deep learning models.

Overall, this was a highly productive GSoC for CloudCV. Our tools are under active development and we welcome contributions and ideas for new features.

We will definitely apply for GSoC 2017. If you are a student interested in participating, we encourage you to get involved early! Feel free to reach out to us on our Gitter channel or on our mailing list.

By Viraj Prabhu, Organization Administrator for CloudCV
Categories: Open Source

Googlers on the road: FOSDEM 2017

Wed, 02/01/2017 - 19:22
The new year is off to an excellent start as we wrap up the 7th year of Google Code-in, ramp up for the 13th year of Google Summer of Code, and return from connecting with our compatriots in the open source community down under at Linux.conf.au. Next up? We’re headed to FOSDEM, Europe’s famed non-commercial and volunteer-organized open source conference.

FOSDEM logo licensed under CC BY.
FOSDEM is hosted in Brussels on the Université libre de Bruxelles campus and runs the weekend of February 4-5. It’s a unique event in the spirit of free and open source software and is free to the public. This year they are expecting 8,000+ attendees.

We’re looking forward to talking face-to-face with some of the thousands of former students, mentors and organization administrators who have participated in our student programs. A few of them will even be giving talks about their recent Google Summer of Code experience.

If you’d like to say hello or chat about our programs, you’ll be sure to find a Googler or two at our table. You’ll also find a number of Googlers in the program schedule:

Saturday, February 4th

2:00pm    Bazel: How to build at Google scale by Klaus Aehlig
3:25pm    Copyleft in Commerce: How GPLv3 keeps Samba relevant in the marketplace by Jeremy Allison

Sunday, February 5th

10:40am  gRPC 101: Building fast and efficient microservices by Ray Tsang
10:50am  Is the GPL a copyright license or a contract under U.S. law? by Max Sills
12:45pm  The state of Go: What to expect in Go 1.8 by Francesc Campoy
1:00pm    Analyze terabytes of OS code with one query by Felipe Hoffa (more info)
2:50pm    Like the ants: Turn individuals into a large contributing community by Dan Franc

See you there!

By Josh Simmons, Open Source Programs Office

Categories: Open Source

Introducing Draco: compression for 3D graphics

Mon, 01/30/2017 - 23:00
3D graphics are a fundamental part of many applications, including gaming, design and data visualization. As graphics processors and creation tools continue to improve, larger and more complex 3D models will become commonplace and help fuel new applications in immersive virtual reality (VR) and augmented reality (AR). This increased model complexity means that storage and bandwidth requirements must keep pace with the explosion of 3D data.

The Chrome Media team has created Draco, an open source compression library to improve the storage and transmission of 3D graphics. Draco can be used to compress meshes and point-cloud data. It also supports compressing points, connectivity information, texture coordinates, color information, normals and any other generic attributes associated with geometry.

With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users this means apps can now be downloaded faster, 3D graphics in the browser can load quicker, and VR and AR scenes can now be transmitted with a fraction of the bandwidth, rendered quickly and look fantastic.


Sample Draco compression ratios and encode/decode performance*
Transmitting 3D graphics for web-based applications is significantly faster using Draco’s JavaScript decoder, which can be tied to a 3D web viewer. The following video shows how efficient transmitting and decoding 3D objects in the browser can be - even over poor network connections.


Public domain Discobolus model from SMK National Gallery of Denmark.

Video and audio compression have shaped the internet over the past 10 years with streaming video and music on demand. With the emergence of VR and AR on the web and on mobile (and the increasing proliferation of sensors like LIDAR), we will soon be swimming in a sea of geometric data. Compression technologies, like Draco, will play a critical role in ensuring these experiences are fast and accessible to anyone with an internet connection. More exciting developments are in store for Draco, including support for creating multiple levels of detail from a single model to further improve the speed of loading meshes.

We look forward to seeing what people do with Draco now that it's open source. Check out the code on GitHub and let us know what you think. Also available is a JavaScript decoder with examples on how to incorporate Draco into the three.js 3D viewer.

By Jamieson Brettle and Frank Galligan, Chrome Media Team

* Specifications: Tests were run with textures and positions quantized at 14-bit precision and normal vectors at 7-bit precision, on a single core of a 2013 MacBook Pro. JavaScript decoding used Chrome 54 on Mac OS X.
Categories: Open Source

Google Summer of Code 2016 wrap-up: GitHub

Mon, 01/30/2017 - 21:45
Every year open source organizations, mentors, and university students come together to build and improve open source software through Google Summer of Code (GSoC). This guest post is part of a series of blog posts from people who participated in GSoC 2016.

Open source maintainers at GitHub mentored 5 students in Google Summer of Code last year. The students did great work that we’d like to highlight and congratulate them on:
Updates to GitHub Classroom

GitHub Classroom helps teachers automate their work and interact with students in issues and pull requests. Last summer two students took on projects to help teachers work more efficiently and with greater insight into their classrooms.
Classroom Project #1

Cheng-Yu Hsu is a student who worked to implement new features suggested by teachers using GitHub Classroom, including due dates for assignment submissions and visualizations of classroom activities. In reflecting on the project, Cheng-Yu said:

"Having a great community is one of the most important factors of a successful open source project, so participating [in] the community is also a huge part of this project. It is great to have chances responding to user feedback, helping people resolve issues and brainstorming new features with them."
Classroom Project #2

Shawn Ding worked on student identifiers and team management for GitHub Classroom. This means that teachers using GitHub Classroom can use things such as student emails to identify their assignments. Teachers can also now manage their students and teams of students via drag and drop in the settings page, which then updates the data on GitHub.
Front-end controls for Jekyll

Jekyll Admin is a Jekyll plugin that provides users with a traditional CMS-like graphical interface for authoring content and administering Jekyll sites from the comfort of their browser. GSoC student Mert Kahyaoğlu used Facebook’s React framework to create the front end, which lets you write a new post, edit existing pages or add new files. And it all works with GitHub Pages.

Best of all, Mert's plugin allows people to author content and administer Jekyll sites without knowledge of the command line or an external text editor like Atom. Once the plugin is installed, Jekyll users start their site as they normally would and simply append “/admin” to the site's URL to launch the WordPress-like administrative interface. Jekyll Admin's initial release is ready for use on your own site.
Octokit.net

Alexander Efremov added support to Octokit.net for interacting with the GitHub API using a repository ID, alongside the existing support for providing the owner and repository name. This means integrators do not have to update their systems when a repository changes ownership. The changes to support these APIs were rolled out incrementally over a number of pull requests, and the 0.21 release of Octokit.net made these new APIs available to the public.

We had a great time mentoring these students on their projects last year!

By Carol Smith, John Britton and Brandon Keepers, Organization Administrators for GitHub

Categories: Open Source