Google Open Source Blog
News about Google's Open Source projects and programs.

Open sourcing the Firebase SDKs

Wed, 05/17/2017 - 22:00
Today, at Google I/O 2017, we are pleased to announce that we are taking our first steps towards open sourcing our client libraries. By making our SDKs open, we’re aiming to show our commitment to greater transparency and to building a stronger developer community. To help further that goal, we’ll be using GitHub as a core part of our own toolchain to enable all of you to contribute as well. As you find issues in our code, from inconsistent style to bugs, you can file issues through the standard GitHub issue tracker. You can also find our project in the Google Open Source directory. We’re really looking forward to your pull requests!
What’s open?
We’re starting by open sourcing several products in our iOS, JavaScript, Java, Node.js and Python SDKs. We'll be looking at open sourcing our Android SDK as well. The SDKs are being licensed under Apache 2.0, the same flexible license as existing Firebase open source projects like FirebaseUI.

Let's take a look at each repo:
Firebase iOS SDK 4.0
https://github.com/firebase/firebase-ios-sdk

With the launch of the Firebase iOS 4.0 SDKs we have made several improvements to the developer experience, such as more idiomatic API names for our Swift users. By open sourcing our iOS SDKs we hope to provide an additional avenue for you to give us feedback on such features. For this first release we are open sourcing our Realtime Database, Auth, Cloud Storage and Cloud Messaging (FCM) SDKs, but going forward we intend to release more.

Because we aren't yet able to open source some of the Firebase components, the full product build process isn't available. While you can use this repo to build a FirebaseDev pod, our libraries distributed through CocoaPods will continue to be static frameworks for the time being. We are continually looking for ways to improve the experience for developers, however you integrate.

Our GitHub README provides more details on how you build, test and contribute to our iOS SDKs.
Firebase JavaScript SDK 4.0
https://github.com/firebase/firebase-js-sdk

We are excited to announce that we are open sourcing our Realtime Database, Cloud Storage and Cloud Messaging (FCM) SDKs for JavaScript. We’ll have a couple of improvements hot on the heels of this initial release, including open sourcing Firebase Authentication. We are also in the process of releasing the source maps for our components, which we expect will significantly improve the debuggability of your app.

Our GitHub repo includes instructions on how you can build, test and contribute.
Firebase Admin SDKs
Node.js: https://github.com/firebase/firebase-admin-node
Java: https://github.com/firebase/firebase-admin-java
Python: https://github.com/firebase/firebase-admin-python

We are happy to announce that all three of our Admin SDKs for accessing Firebase on privileged environments are now fully open source, including our recently-launched Python SDK. While we continue to explore supporting more languages, we encourage you to use our source as inspiration to enable Firebase for your environment (and if you do, we'd love to hear about it!)

We're really excited to see what you do with the updated SDKs. As always, reach out to us with feedback or questions in the Firebase-Talk Google Group, on Stack Overflow, via the Firebase Support team, and now on GitHub for SDK issues and pull requests! To read about the other improvements to Firebase that launched at Google I/O, head over to the Firebase blog.

By Salman Qadri, Firebase Product Manager
Categories: Open Source

Open Source at Google I/O 2017

Tue, 05/16/2017 - 18:00
One of the best parts of Google I/O every year is the chance to meet with the developers and community organizers from all over the world. It's a unique opportunity to have candid one-on-one conversations about the products and technologies we all love.

This year, I/O features a Community Lounge where attendees can relax, hang out, and play with neat experiments and games. It also features several mini-meetups during which you can chat with Googlers on a variety of topics.

Chris DiBona and Will Norris from the Google Open Source Programs Office will be around Thursday and Friday to talk about anything and everything open source, including our student outreach programs and the new Google Open Source website. If you're at Google I/O this year, make sure to drop by and say hello. Find dates, times, and other details in the Community Lounge schedule.

By Josh Simmons, Google Open Source
Categories: Open Source

OSS-Fuzz: Five months later, and rewarding projects

Mon, 05/08/2017 - 17:00
Five months ago, we announced OSS-Fuzz, Google’s effort to help make open source software more secure and stable. Since then, our robot army has been working hard at fuzzing, processing 10 trillion test inputs a day. Thanks to the efforts of the open source community who have integrated a total of 47 projects, we’ve found over 1,000 bugs (264 of which are potential security vulnerabilities).

Breakdown of the types of bugs we're finding.
Notable results
OSS-Fuzz has found numerous security vulnerabilities in several critical open source projects: 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, 8 in SQLite 3, 10 in GnuTLS, 25 in PCRE2, 9 in gRPC, and 7 in Wireshark, among others. We’ve also had at least one bug collision with another independent security researcher (CVE-2017-2801). (Some of the bugs are still view-restricted, so links may show smaller numbers.)

Once a project is integrated, OSS-Fuzz's continuous and automated fuzzing often catches issues just hours after a regression lands in the upstream repository, before any users are affected.

Fuzzing finds not only memory-safety bugs; it can also find correctness or logic bugs. One example is a carry-propagating bug in OpenSSL (CVE-2017-3732).

Finally, OSS-Fuzz has reported over 300 timeout and out-of-memory failures (~75% of which have been fixed). Not every project treats these as bugs, but fixing them enables OSS-Fuzz to find more interesting bugs.
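In practice, OSS-Fuzz drives sanitizer-instrumented fuzz targets (libFuzzer-style entry points that accept a byte buffer) with mutated inputs. The pure-Python toy below is only an illustration of that shape, with a deliberately planted off-by-one bug of the kind that fuzzing excels at finding; all names and the parser format are invented for this sketch:

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy target: a 1-byte length prefix, the payload, then a checksum byte."""
    if not data:
        raise ValueError("empty input")        # anticipated error: cleanly rejected
    n = data[0]
    if len(data) < 1 + n:
        raise ValueError("truncated payload")  # anticipated error: cleanly rejected
    payload = data[1:1 + n]
    checksum = data[1 + n]                     # BUG: IndexError when the checksum byte is absent
    return payload

def fuzz_one_input(data: bytes) -> bool:
    """libFuzzer-style entry point; returns True if the input crashed the target."""
    try:
        parse_record(data)
    except ValueError:
        return False                           # handled rejection, not a bug
    except Exception:
        return True                            # unexpected exception: a real bug
    return False

def run_fuzzer(iterations: int = 500, seed: int = 0) -> int:
    """Throw small random inputs at the target and count the crashes found."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        data = bytes(rng.randrange(8) for _ in range(rng.randrange(6)))
        if fuzz_one_input(data):
            crashes += 1
    return crashes
```

Real fuzz targets implement `LLVMFuzzerTestOneInput` in C/C++ and rely on coverage feedback and sanitizers rather than blind random inputs, but the contract is the same: consume arbitrary bytes, treat anticipated parse errors as normal, and let anything else surface as a crash.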
Announcing rewards for open source projects
We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process. To this end, we’d like to encourage more projects to participate and adopt the ideal integration guidelines that we’ve established.

Combined with fixing all the issues that are found, this is often a significant amount of work for developers who may be working on an open source project in their spare time. To support these projects, we are expanding our existing Patch Rewards program to include rewards for the integration of fuzz targets into OSS-Fuzz.

To qualify for these rewards, a project needs to have a large user base and/or be critical to global IT infrastructure. Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration (the final amount is at our discretion). You have the option of donating these rewards to charity instead, and Google will double the amount.

To qualify for the ideal integration reward, projects must show that:
  • Fuzz targets are checked into their upstream repository and integrated in the build system with sanitizer support (up to $5,000).
  • Fuzz targets are efficient and provide good code coverage (>80%) (up to $5,000). 
  • Fuzz targets are part of the official upstream development and regression testing process, i.e. they are maintained, run against old known crashers, and run on periodically updated corpora (up to $5,000).
  • The last $5,000 is a “l33t” bonus that we may reward at our discretion for projects that we feel have gone the extra mile or done something really awesome.
We’ve already started to contact the first round of projects that are eligible for the initial reward. If you are the maintainer or point of contact for one of these projects, you may also reach out to us in order to apply for our ideal integration rewards.
The future
We’d like to thank the existing contributors who integrated their projects and fixed countless bugs. We hope to see more projects integrated into OSS-Fuzz, and greater adoption of fuzzing as standard practice when developing software.

By Oliver Chang, Abhishek Arya (Security Engineers, Chrome Security), Kostya Serebryany (Software Engineer, Dynamic Tools), and Josh Armour (Security Program Manager)
Categories: Open Source

Students, Start Your Engineerings!

Thu, 05/04/2017 - 17:19

It’s that time again! Our 201 mentoring organizations have selected the 1,318 students they look forward to working with during the 13th Google Summer of Code (GSoC). Congratulations to our 2017 students and a big thank you to everyone who applied!

The next step for participating students is the Community Bonding period which runs from May 4th through May 30th. During this time, students will get up to speed on the culture and toolset of their new community. They’ll also get acquainted with their mentor and learn more about the languages or tools they will need to complete their projects. Coding commences May 30th.

To the more than 4,200 students who were not chosen this year: don’t be discouraged! Many students apply to GSoC more than once before being accepted. You can improve your odds for next time by contributing directly to the open source project of your choice; organizations are always eager for new contributors! Look around GitHub and elsewhere on the internet for a project that interests you and get started.

Happy coding, everyone!

By Cat Allman, Google Open Source
Categories: Open Source

Saddle up and meet us in Texas for OSCON 2017

Tue, 05/02/2017 - 21:05
The Google Open Source team is getting ready to hit the road and join the open source panoply that is Open Source Convention (OSCON). This year the event runs May 8-11 in Austin, Texas and is preceded on May 6-7 by the free-to-attend Community Leadership Summit (CLS).
Program chairs at OSCON 2016, left to right:
Kelsey Hightower, Scott Hanselman, Rachel Roumeliotis.
Photo used with permission from O'Reilly Media.
You’ll find our team and many other Googlers throughout the week on the program schedule and in the expo hall at booth #401. We’ve got a full rundown of our schedule below, but you can swing by the expo hall anytime to discuss Google Cloud Platform, our open source outreach programs, the projects we’ve open-sourced including Kubernetes, TensorFlow, gRPC, and even our recently released open source documentation.

Of course, you’ll also find our very own Kelsey Hightower everywhere since he is serving as one of three OSCON program chairs for the second year in a row.

Are you a student, educator, project maintainer, community leader, past or present participant in Google Summer of Code or Google Code-in? Join us for lunch at the Google Summer of Code table in the conference lunch area on Wednesday afternoon. We’ll discuss our outreach programs which help open source communities grow while providing students with real world software development experience. We’ll be updating this blog post and tweeting with details closer to the date.
Without further ado, here’s our schedule of events:
Monday, May 8th (Tutorials)
9:00am   Site reliability engineering by Jean Joswig
1:30pm   Kubernetes hands-on by Kelsey Hightower

Tuesday, May 9th (Tutorials)
1:30pm   Building amazing cross-platform command-line apps in Go by Steve Francia, co-presented with Ashley McNamara
1:30pm   "Measure all the things" and other memes you haven’t implemented yet by Kelsey Hightower

Wednesday, May 10th (Sessions)
11:00am  TensorFlow Community Keynote by Yufeng Guo and Amy Unruh
11:50am  The life of a large-scale open source project by Jessica Frazelle
11:50am  What is machine learning by Amy Unruh
12:30pm  Google Summer of Code and Google Code-in lunch
4:15pm   From WebSockets to WISH (web in strict HTTP) by Wenbo Zhu
5:05pm   gRPC 101 for Java developers: Building small and efficient microservices by Ray Tsang
5:05pm   Go deep, go wide, go everywhere: Hands-on machine learning with TensorFlow by Yufeng Guo

Thursday, May 11th (Sessions)
9:30am   Half my life spent in open source by Brad Fitzpatrick
4:15pm   Multilayered testing by Alex Martelli
We look forward to seeing you deep in the heart of Texas at OSCON 2017!
By Josh Simmons, Google Open Source
Categories: Open Source

The latest round of Google Open Source Peer Bonus winners

Mon, 04/17/2017 - 17:48
Google relies on open source software throughout our systems, much of it written by non-Googlers. We’re always looking for ways to say “thank you!” so 5 years ago we started asking Googlers to nominate open source contributors outside of the company who have made significant contributions to codebases we use or think are important. We’ve recognized more than 500 developers from 30+ countries who have contributed their time and talent to over 400 open source projects since the program’s inception in 2011.

Today we are pleased to announce the latest round of awardees, 52 individuals we’d like to recognize for their dedication to open source communities. The following is a list of everyone who gave us permission to thank them publicly:

Philipp Hancke - Adapter.js
Fernando Perez - Jupyter & IPython
Geoff Greer - Ag
Michelle Noorali - Kubernetes & Helm
Dzmitry Shylovich - Angular
Prosper Otemuyiwa - Laravel Hackathon Starter
David Kalnischkies - Apt
Keith Busch - Linux kernel
Peter Mounce - Bazel
Thomas Caswell - matplotlib
Yuki Yugui Sonoda - Bazel
Tatsuhiro Tsujikawa - nghttp2
Eric Fiselier - benchmark
Anna Henningsen - Node.js
Rob Stradling - Certificate Transparency
Charles Harris - NumPy
Ke He - Chromium
Jeff Reback - pandas
Daniel Micay - CopperheadOS
Ludovic Rousseau - PCSC-Lite, CCID
Nico Huber - coreboot
Matti Picus - PyPy
Kyösti Mälkki - coreboot
Salvatore Sanfilippo - Redis
Jana Moudrá - Dart
Ralf Gommers - SciPy
John Wiegley - Emacs
Kevin O'Connor - SeaBIOS
Alex Saveau - FirebaseUI-Android
Sam Aaron - Sonic Pi
Toke Hoiland-Jorgensen - Flent
Michael Tyson - The Amazing Audio Engine
Hanno Böck - Fuzzing Project
Rob Landley - Toybox
Luca Milanesio - Gerrit
Bin Meng - U-Boot
Daniel Theophanes - Go programming language
Ben Noordhuis - V8
Josh Snyder - Go programming language
Fatih Arslan - vim-go
Brendan Tracey - Go programming language
Adam Treat - WebKit
Elias Naur - Go on Mobile
Chris Dumez - WebKit
Anthonios Partheniou - Google Cloud Datalab
Sean Larkin - Webpack
Marcus Meissner - gPhoto2
Tobias Koppers - Webpack
Matt Butcher - Helm
Alexis La Goutte - Wireshark dissector for QUIC
Congratulations to all of the awardees, past and present! Thank you for your contributions.

By Helen Hu, Open Source Programs Office
Categories: Open Source

Introducing tf-seq2seq: An Open Source Sequence-to-Sequence Framework in TensorFlow

Tue, 04/11/2017 - 21:16
Crossposted on the Google Research Blog

Last year, we announced Google Neural Machine Translation (GNMT), a sequence-to-sequence (“seq2seq”) model which is now used in Google Translate production systems. While GNMT achieved huge improvements in translation quality, its impact was limited by the fact that the framework for training these models was unavailable to external researchers.

Today, we are excited to introduce tf-seq2seq, an open source seq2seq framework in TensorFlow that makes it easy to experiment with seq2seq models and achieve state-of-the-art results. To that end, we made the tf-seq2seq codebase clean and modular, maintaining full test coverage and documenting all of its functionality.

Our framework supports various configurations of the standard seq2seq model, such as depth of the encoder/decoder, attention mechanism, RNN cell type, or beam size. This versatility allowed us to discover optimal hyperparameters and outperform other frameworks, as described in our paper, “Massive Exploration of Neural Machine Translation Architectures.”
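The attention mechanism named among those configuration options is, at its core, a softmax-weighted combination of the encoder's outputs recomputed at each decoder step. A minimal numpy sketch of that step, using dot-product scoring for brevity (tf-seq2seq's actual implementation is more elaborate, and all names here are illustrative):

```python
import numpy as np

def attention_context(decoder_state, encoder_outputs):
    """Compute an attention context vector from one decoder step.

    decoder_state:   shape (d,),   the current decoder hidden state
    encoder_outputs: shape (T, d), one output vector per source time step
    Returns (context, weights): the weighted combination of encoder
    outputs and the attention weights that produced it.
    """
    scores = encoder_outputs @ decoder_state      # (T,) alignment scores
    weights = np.exp(scores - scores.max())       # stable softmax...
    weights /= weights.sum()                      # ...normalized to sum to 1
    return weights @ encoder_outputs, weights     # (d,) context, (T,) weights

rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))   # 5 source time steps, hidden size 8
dec = rng.normal(size=8)        # current decoder state
context, weights = attention_context(dec, enc)
```

In a full model the context vector is concatenated with the decoder input to predict the next output token; the weights themselves are what the blue connections in the figure below depict.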

A seq2seq model translating from Mandarin to English. At each time step, the encoder takes in one Chinese character and its own previous state (black arrow), and produces an output vector (blue arrow). The decoder then generates an English translation word-by-word, at each time step taking in the last word, the previous state, and a weighted combination of all the outputs of the encoder (aka attention [3], depicted in blue), then producing the next English word. Please note that in our implementation we use wordpieces [4] to handle rare words.

In addition to machine translation, tf-seq2seq can also be applied to any other sequence-to-sequence task (i.e. learning to produce an output sequence given an input sequence), including machine summarization, image captioning, speech recognition, and conversational modeling. We carefully designed our framework to maintain this level of generality and provide tutorials, preprocessed data, and other utilities for machine translation.

We hope that you will use tf-seq2seq to accelerate (or kick off) your own deep learning research. We also welcome your contributions to our GitHub repository, where we have a variety of open issues that we would love to have your help with!

Acknowledgments:
We’d like to thank Eugene Brevdo, Melody Guan, Lukasz Kaiser, Quoc V. Le, Thang Luong, and Chris Olah for all their help. For a deeper dive into how seq2seq models work, please see the resources below.

References:
[1] Massive Exploration of Neural Machine Translation Architectures, Denny Britz, Anna Goldie, Minh-Thang Luong, Quoc Le
[2] Sequence to Sequence Learning with Neural Networks, Ilya Sutskever, Oriol Vinyals, Quoc V. Le. NIPS, 2014
[3] Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio. ICLR, 2015
[4] Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean. Technical Report, 2016
[5] Attention and Augmented Recurrent Neural Networks, Chris Olah, Shan Carter. Distill, 2016
[6] Neural Machine Translation and Sequence-to-sequence Models: A Tutorial, Graham Neubig
[7] Sequence-to-Sequence Models, TensorFlow.org

By Anna Goldie and Denny Britz, Research Software Engineer and Google Brain Resident, Google Brain Team
Categories: Open Source

Join the first POSSE Workshop in Europe

Mon, 04/10/2017 - 18:19
We are excited to announce that the Professors’ Open Source Software Experience (POSSE) is expanding to Europe! POSSE is an event that brings together educators interested in providing students with experience in real-world projects through participation in humanitarian free and open source software (HFOSS) projects.

Over 100 faculty members have attended past workshops and there is a growing community of instructors teaching students through contributions to HFOSS. This three-stage faculty workshop will prepare you to support student participation in open source projects. During the workshop, you will:

  • Learn how to support student learning within real-world project environments
  • Motivate students and cultivate their appreciation of computing for social good
  • Collaborate with instructors who have similar interests and goals
  • Join a community of educators passionate about HFOSS

Workshop Format
Stage 1: Starts May 8, 2017 with online activities. Activities will take 2-3 hours per week and include interaction with workshop instructors and participants.
Stage 2: The face-to-face workshop will be held in Bologna, Italy, July 1-2, 2017 and is a pre-event for the ACM ITiCSE conference. Workshop participants include the workshop organizers, POSSE alumni, and members of the open source community.
Stage 3: Online activities and interactions in small groups immediately following the face-to-face workshop. Participants will have support while involving students in an HFOSS project in the classroom.

How to Apply
If you’re a full-time instructor at an academic institution outside of the United States, you can join the workshop being held in Bologna, Italy, July 1-2, 2017. Please complete and submit the application by May 1, 2017. Prior work with FOSS projects is not required. English is the official language of the workshop. The POSSE workshop committee will send an email notifying you of the status of your application by May 5, 2017.

Participant Support
The POSSE workshop in Europe is supported by Google. Attendees will be provided with funding for two nights lodging ($225 USD per night) and meals during the workshop. Travel costs will also be covered up to $450 USD. Participants are responsible for any charges above these limits. At this time, we can only support instructors at institutions of higher education outside of the U.S. For faculty at U.S. institutions, the next POSSE will be held in fall 2017 on the east coast of the U.S.

We look forward to seeing you at the POSSE workshop in Italy!

By Helen Hu, Open Source Programs Office
Categories: Open Source

Noto Serif CJK is here!

Thu, 04/06/2017 - 17:45
Crossposted from the Google Developers Blog

Today, in collaboration with Adobe, we are responding to the call for Serif! We are pleased to announce Noto Serif CJK, the long-awaited companion to Noto Sans CJK released in 2014. Like Noto Sans CJK, Noto Serif CJK supports Simplified Chinese, Traditional Chinese, Japanese, and Korean, all in one font.

A serif-style CJK font goes by many names: Song (宋体) in Mainland China, Ming (明體) in Hong Kong, Macao and Taiwan, Minchō (明朝) in Japan, and Myeongjo (명조) or Batang (바탕) in Korea. The names and writing styles originated during the Song and Ming dynasties in China, when China's wood-block printing technique became popular. Characters were carved along the grain of the wood block. Horizontal strokes were easy to carve and vertical strokes were difficult; this resulted in thinner horizontal strokes and wider vertical ones. In addition, subtle triangular ornaments were added to the end of horizontal strokes to simulate Chinese Kai (楷体) calligraphy. This style continues today and has become a popular typeface style.

Serif fonts, which are considered more traditional with calligraphic aesthetics, are often used for long passages of text, such as the body text of web pages or ebooks. Sans-serif fonts are often used for website and app user interfaces and headings because of their simplicity and modern feel.

Design of '永' ('eternity') in Noto Serif and Sans CJK. This ideograph is famous for having the most important elements of calligraphic strokes. It is often used to evaluate calligraphy or typeface design.
The Noto Serif CJK package offers the same features as Noto Sans CJK:

  • It has comprehensive character coverage for the four languages. This includes full coverage of CJK Ideographs with variation support for four regions, Kangxi radicals, Japanese Kana, Korean Hangul, and other CJK symbols and letters in Unicode’s Basic Multilingual Plane. It also provides limited coverage of CJK Ideographs in Plane 2 of Unicode, as necessary to support standards from China and Japan.


  • Simplified Chinese: Supports GB 18030 and China’s latest standard, the Table of General Chinese Characters (通用规范汉字表) published in 2013.
  • Traditional Chinese: Supports BIG5; Traditional Chinese glyphs are compliant with the glyph standard of Taiwan’s Ministry of Education (教育部國字標準字體).
  • Japanese: Supports all of the kanji in JIS X 0208, JIS X 0213, and JIS X 0212, including all kanji in Adobe-Japan1-6.
  • Korean: The best font for typesetting classic Korean documents in Hangul and Hanja, such as the Hunminjeongeum manuscript, a UNESCO World Heritage item. Supports over 1.5 million archaic Hangul syllables and 11,172 modern syllables, as well as all CJK ideographs in KS X 1001 and KS X 1002.
Noto Serif CJK’s support of character and glyph set standards for the four languages
  • It respects diversity of regional writing conventions for the same character. The example below shows the four glyphs of '述' (describe) in four languages that have subtle differences.
From left to right are glyphs of '述' in S. Chinese, T. Chinese, Japanese and Korean. This character means "describe".
  • It is offered in seven weights: ExtraLight, Light, Regular, Medium, SemiBold, Bold, and Black. Noto Serif CJK supports 43,027 encoded characters and includes 65,535 glyphs (the maximum number of glyphs that can be included in a single font). The seven weights, when put together, have almost half a million glyphs. The weights are compatible with Google's Material Design standard fonts: Roboto, Noto Sans, and Noto Serif (the Latin-Greek-Cyrillic fonts in the Noto family).
Seven weights of Noto Serif CJK
  • It supports vertical text layout and is compliant with the Unicode vertical text layout standard. The shape, orientation, and position of particular characters (e.g., brackets and kana letters) are changed when the writing direction of the text is vertical.
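The coverage notes above distinguish Unicode's Basic Multilingual Plane from Plane 2; which plane a character needs is simple arithmetic on its code point (each plane spans 0x10000 code points). A small stdlib sketch, purely illustrative and not part of any Noto tooling:

```python
def unicode_plane(ch: str) -> int:
    """Return the Unicode plane (0-16) containing the character's code point."""
    return ord(ch) >> 16  # each plane spans 0x10000 code points

def in_bmp(ch: str) -> bool:
    """True if the character lies in the Basic Multilingual Plane (U+0000-U+FFFF)."""
    return unicode_plane(ch) == 0

# '永' (U+6C38) sits in the BMP; U+20000 begins the CJK Unified
# Ideographs Extension B block in Plane 2.
```

A font covering "the BMP plus limited Plane 2 coverage" thus covers every plane-0 code point it needs, and only selected code points at U+20000 and above.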



The sheer size of this project also required regional expertise! Glyph design would not have been possible without the leading East Asian type foundries Changzhou SinoType Technology, Iwata Corporation, and Sandoll Communications.

Noto Serif CJK is open source under the SIL Open Font License, Version 1.1. We invite individual users to install and use these fonts in their favorite authoring apps, developers to bundle these fonts with their apps, and OEMs to embed them into their devices. The fonts are free for everyone to use!

Noto Serif CJK font download: https://www.google.com/get/noto
Noto Serif CJK on GitHub: https://github.com/googlei18n/noto-cjk
Adobe's landing page for this release: http://adobe.ly/SourceHanSerif
Source Han Serif on GitHub: https://github.com/adobe-fonts/source-han-serif/tree/release/

By Xiangye Xiao and Jungshik Shin, Internationalization Engineering team
Categories: Open Source

Google hosts the Apache HBase community at HBaseCon West 2017

Fri, 03/31/2017 - 18:00
We’re excited to announce that Google will host and organize HBaseCon West 2017, the official conference for the Apache HBase community, on June 12. Registration for the event in Mountain View, CA is free and the call for papers (CFP) is open through April 24. Seats are limited and the CFP closes soon, so act fast.

Apache HBase is the original open source implementation of the design concepts behind Bigtable, a critical piece of internal Google data infrastructure first described in a 2006 research paper, which earned a SIGOPS Hall of Fame award last year. Since the founding of HBase, its community has made impressive advances supporting massive scale with enterprise users including Alibaba, Apple, Facebook, and Visa. The community is fostering a rich and still-growing ecosystem including Apache Phoenix, OpenTSDB, Apache Trafodion, Apache Kylin and many others.

Now that Bigtable is available to Google Cloud users through Google Cloud Bigtable, developers have the benefit of platform choices for apps that rely on high-volume and low-latency reads and writes. Without the ability to build portable applications on open APIs, however, even that freedom of choice can lead to a dead end, something Google addresses through its investment in open standards like Apache Beam, Kubernetes and TensorFlow.

To that end, Google’s Bigtable team has been actively participating in the HBase community. We helped co-author the HBase 1.0 API and have standardized on that API in Cloud Bigtable. This design choice means developers with HBase experience don’t need to learn a new API for building cloud-native applications, ensures Cloud Bigtable users have access to the large Apache Hadoop ecosystem, and alleviates concerns about long-term lock-in.

We hope you’ll join us and the HBase community at HBaseCon West 2017. We recommend registering early, as there is no on-site registration. As usual, sessions are selected by the HBase community from a pool reflecting some of the world’s largest and most advanced production deployments.

Register soon or submit a paper for HBaseCon; remember, the CFP closes on April 24! We look forward to seeing you at the conference.

By Carter Page and Michael Stack, Apache HBase Project Management Committee members
Categories: Open Source

An Upgrade to SyntaxNet, New Models and a Parsing Competition

Thu, 03/30/2017 - 19:04
Crossposted from the Google Research Blog

At Google, we continuously improve the language understanding capabilities used in applications ranging from generation of email responses to translation. Last summer, we open-sourced SyntaxNet, a neural-network framework for analyzing and understanding the grammatical structure of sentences. Included in our release was Parsey McParseface, a state-of-the-art model that we had trained for analyzing English, followed quickly by a collection of pre-trained models for 40 additional languages, which we dubbed Parsey's Cousins. While we were excited to share our research and to provide these resources to the broader community, building machine learning systems that work well for languages other than English remains an ongoing challenge. We are excited to announce a few new research resources, available now, that address this problem.

SyntaxNet Upgrade
We are releasing a major upgrade to SyntaxNet. This upgrade incorporates nearly a year’s worth of our research on multilingual language understanding, and is available to anyone interested in building systems for processing and understanding text. At the core of the upgrade is a new technology that enables learning of richly layered representations of input sentences. More specifically, the upgrade extends TensorFlow to allow joint modeling of multiple levels of linguistic structure, and to allow neural-network architectures to be created dynamically during processing of a sentence or document.

Our upgrade makes it easy, for example, to build character-based models that learn to compose individual characters into words (e.g. ‘c-a-t’ spells ‘cat’). By doing so, the models can learn that words can be related to each other because they share common parts (e.g. ‘cats’ is the plural of ‘cat’ and shares the same stem; ‘wildcat’ is a type of ‘cat’). Parsey and Parsey’s Cousins, on the other hand, operated over sequences of words. As a result, they were forced to memorize words seen during training and relied mostly on the context to determine the grammatical function of previously unseen words.

As an example, consider the following (meaningless but grammatically correct) sentence: “The gostak distims the doshes.”
This sentence was originally coined by Andrew Ingraham, who explained: “You do not know what this means; nor do I. But if we assume that it is English, we know that the doshes are distimmed by the gostak. We know too that one distimmer of doshes is a gostak.” Systematic patterns in morphology and syntax allow us to guess the grammatical function of words even when they are completely novel: we understand that ‘doshes’ is the plural of the noun ‘dosh’ (similar to the ‘cats’ example above) and that ‘distim’ is the third-person singular of the verb ‘distim’. Based on this analysis, we can then derive the overall structure of this sentence even though we have never seen the words before.
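The intuition that shared character substructure relates 'cat', 'cats', and 'wildcat' can be illustrated very loosely with character n-gram overlap. The real models learn such representations with neural networks rather than fixed n-grams; this is just a stdlib sketch with invented helper names:

```python
def char_ngrams(word: str, n: int = 3) -> set:
    """Character n-grams with boundary markers, e.g. {'^ca', 'cat', 'at$'} for 'cat'."""
    padded = f"^{word}$"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of character n-gram sets: a crude proxy for shared stems."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb)
```

Under this crude measure, 'cat' overlaps with 'cats' and 'wildcat' but not with 'dog'; a character-aware model exploits the same kind of shared substructure, which is what lets it guess that a never-before-seen 'doshes' behaves like a plural noun.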

    ParseySaurus
    To showcase the new capabilities provided by our upgrade to SyntaxNet, we are releasing a set of new pretrained models called ParseySaurus. These models use the character-based input representation mentioned above and are thus much better at predicting the meaning of new words based both on their spelling and how they are used in context. The ParseySaurus models are far more accurate than Parsey’s Cousins (reducing errors by as much as 25%), particularly for morphologically-rich languages like Russian, or agglutinative languages like Turkish and Hungarian. In those languages there can be dozens of forms for each word and many of these forms might never be observed during training - even in a very large corpus.

    Consider the following fictitious Russian sentence, where again the stems are meaningless, but the suffixes define an unambiguous interpretation of the sentence structure:
    Even though our Russian ParseySaurus model has never seen these words, it can correctly analyze the sentence by inspecting the character sequences which constitute each word. In doing so, the system can determine many properties of the words (notice how many more morphological features there are here than in the English example). To see the sentence as ParseySaurus does, here is a visualization of how the model analyzes this sentence:
    Each square represents one node in the neural network graph, and lines show the connections between them. The left-side “tail” of the graph shows the model consuming the input as one long string of characters. These are intermittently passed to the right side, where the rich web of connections shows the model composing words into phrases and producing a syntactic parse. Check out the full-size rendering here.

    A Competition
    You might be wondering whether character-based modeling is all we need or whether there are other techniques that might be important. SyntaxNet has lots more to offer, like beam search and different training objectives, but there are of course also many other possibilities. To find out what works well in practice we are helping co-organize, together with Charles University and other colleagues, a multilingual parsing competition at this year’s Conference on Computational Natural Language Learning (CoNLL) with the goal of building syntactic parsing systems that work well in real-world settings and for 45 different languages.

    The competition is made possible by the Universal Dependencies (UD) initiative, whose goal is to develop cross-linguistically consistent treebanks. Because machine learned models can only be as good as the data that they have access to, we have been contributing data to UD since 2013. For the competition, we partnered with UD and DFKI to build a new multilingual evaluation set consisting of 1000 sentences that have been translated into 20+ different languages and annotated by linguists with parse trees. This evaluation set is the first of its kind (in the past, each language had its own independent evaluation set) and will enable more consistent cross-lingual comparisons. Because the sentences have the same meaning and have been annotated according to the same guidelines, we will be able to get closer to answering the question of which languages might be harder to parse.

    We hope that the upgraded SyntaxNet framework and our pre-trained ParseySaurus models will inspire researchers to participate in the competition. We have additionally created a tutorial showing how to load a Docker image and train models on the Google Cloud Platform, to facilitate participation by smaller teams with limited resources. So, if you have an idea for making your own models with the SyntaxNet framework, sign up to compete! We believe that the configurations that we are releasing are a good place to start, but we look forward to seeing how participants will be able to extend and improve these models or perhaps create better ones!

    Thanks to everyone involved who made this competition happen, including our collaborators at UD-Pipe, who provide another baseline implementation to make it easy to enter the competition. Happy parsing from the main developers, Chris Alberti, Daniel Andor, Ivan Bogatyy, Mark Omernick, Zora Tung and Ji Ma!

    By David Weiss and Slav Petrov, Research Scientists
    Categories: Open Source

    A New Home for Google Open Source

    Wed, 03/29/2017 - 00:57
    Free and open source software has been part of our technical and organizational foundation since Google’s early beginnings. From servers running the Linux kernel to an internal culture of being able to patch any other team's code, open source is part of everything we do. In return, we've released millions of lines of open source code, run programs like Google Summer of Code and Google Code-in, and sponsor open source projects and communities through organizations like Software Freedom Conservancy, the Apache Software Foundation, and many others.

    Today, we’re launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.

    This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we've released. But it also contains something unexpected: a look under the hood at how we "do" open source.

    Helping you find interesting open source

    One of the tenets of our philosophy towards releasing open source code is that "more is better." We don't know which projects will find an audience, so we help teams release code whenever possible. As a result, we have released thousands of projects under open source licenses ranging from larger products like TensorFlow, Go, and Kubernetes to smaller projects such as Light My Piano, Neuroglancer and Periph.io. Some are fully supported while others are experimental or just for fun. With so many projects spread across 100 GitHub organizations and our self-hosted Git service, it can be difficult to see the scope and scale of our open source footprint.

    To provide a more complete picture, we are launching a directory of our open source projects which we will expand over time. For many of these projects we are also adding information about how they are used inside Google. In the future, we hope to add more information about project lifecycle and maturity.

    How we do open source

    Open source is about more than just code; it's also about community and process. Participating in open source projects and communities as a large corporation comes with its own unique set of challenges. In 2014, we helped form the TODO Group, which provides a forum to collaborate and share best practices among companies that are deeply committed to open source. Inspired by many discussions we've had over the years, today we are publishing our internal documentation for how we do open source at Google.

    These docs explain the process we follow for releasing new open source projects, submitting patches to others' projects, and how we manage the open source code that we bring into the company and use ourselves. But in addition to the how, it outlines why we do things the way we do, such as why we only use code under certain licenses or why we require contributor license agreements for all patches we receive.

    Our policies and procedures are informed by many years of experience and lessons we've learned along the way. We know that our particular approach to open source might not be right for everyone—there's more than one way to do open source—and so these docs should not be read as a "how-to" guide. Similar to how it can be valuable to read another engineer's source code to see how they solved a problem, we hope that others find value in seeing how we approach and think about open source at Google.

    To hear a little more about the backstory of the new Google Open Source site, we invite you to listen to the latest episode from our friends at The Changelog. We hope you enjoy exploring the new site!

    By Will Norris, Open Source Programs Office
    Categories: Open Source

    Google Summer of Code 2017 student applications are open!

    Tue, 03/28/2017 - 17:24
    Are you a university student looking to learn more about open source software development? Consider applying to Google Summer of Code (GSoC) for a chance to spend your break coding on an open source project.

    For the 13th straight year GSoC will give students from around the world the opportunity to learn the ins and outs of open source software development while working from their home. Students will receive a stipend for their successful contributions to allow them to focus on their coding during the program.

    Mentors are paired with the students to help address technical questions and to monitor their progress throughout the program. Former GSoC participants have told us that the real-world experience they’ve gained during the program has not only sharpened their technical skills, but has also boosted their confidence, broadened their professional network and enhanced their resumes.

    Interested students can submit proposals on the program site now through Monday, April 3 at 16:00 UTC. The first step is to search through the 201 open source organizations and review the “Project ideas” for the organizations that appeal to you. Next, reach out to the organizations to introduce yourself and determine if your skills and interests are a good match with their organization.

    Since spots are limited, we recommend writing a strong project proposal and submitting a draft early to receive feedback from the organization, which will help increase your chances of selection. Our Student Manual, written by former students and mentors, provides excellent advice to get you started with choosing an organization and crafting a great proposal.

    For information throughout the application period and beyond, visit the Google Open Source Blog, join our Google Summer of Code discussion lists or join us on Internet Relay Chat (IRC) at #gsoc on Freenode. Be sure to read the Program Rules, Timeline and FAQ, all available on the program site, for more information about Google Summer of Code.

    Good luck to all the open source coders who apply, and remember to submit your proposals early — you only have until Monday, April 3 at 16:00 UTC!

    By Stephanie Taylor, Google Summer of Code Program Manager
    Categories: Open Source

    Dispatches from the latest Mercurial sprints

    Fri, 03/24/2017 - 17:51
    On March 10th-12th, the Mercurial project held one of its twice-a-year sprints in the Google Mountain View office. Mercurial is a distributed version control system, used by Google, W3C, OpenJDK and Mozilla among others. We had 40 developers in attendance, some from companies with large Mercurial deployments and some individual contributors who volunteer in their spare time.

    One of the major themes we discussed was user-friendliness. Mercurial developers work hard to keep the command-line interface backwards compatible, but at the same time, we would like to make progress by smoothing out some rough edges. We discussed how we can provide a better user interface for users to opt-in to without breaking the backwards compatibility constraint. We also talked about how to make Mercurial’s Changeset Evolution feature easier to use.

    We considered moving Mercurial past SHA1 for revision identification, to enhance security and integrity of Mercurial repositories in light of recent SHA1 exploits. A rough consensus on a plan started to emerge, and design docs should start to circulate in the next month or so.
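    To make the identifier change concrete, here is a quick stdlib illustration (Python's hashlib on made-up content, not Mercurial's internals, which derive a node ID from a revision's parents and content): switching hash functions changes the length and format of every revision identifier, which is part of why the migration needs careful design work.

```python
import hashlib

# Illustrative content only; a real Mercurial node hashes the revision's
# parent node IDs together with the revision data.
content = b'file contents plus revision metadata'

sha1_id = hashlib.sha1(content).hexdigest()      # 40 hex chars: Mercurial today
sha256_id = hashlib.sha256(content).hexdigest()  # 64 hex chars: a candidate successor
print(len(sha1_id), len(sha256_id))
```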

    We also talked about performance, such as new storage layers that would scale more effectively and work better with clones that only contain a partial repository history, a key requirement for Mercurial adoption in enterprise environments with large repositories, like Google.

    If you are interested in finding out more about Mercurial (or perhaps you’d like to contribute!) you can find our mailing list information here.

    By Martin von Zweigbergk and Augie Fackler, Software Engineers
    Categories: Open Source

    Announcing Guetzli: A New Open Source JPEG Encoder

    Fri, 03/17/2017 - 21:59
    Crossposted on the Google Research Blog

    At Google, we care about giving users the best possible online experience, both through our own services and products and by contributing new tools and industry standards for use by the online community. That’s why we’re excited to announce Guetzli, a new open source algorithm that creates high quality JPEG images with file sizes 35% smaller than currently available methods, enabling webmasters to create webpages that can load faster and use even less data.

    Guetzli [guɛtsli] — cookie in Swiss German — is a JPEG encoder for digital images and web graphics that can enable faster online experiences by producing smaller JPEG files while still maintaining compatibility with existing browsers, image processing applications and the JPEG standard. From a practical viewpoint this is very similar to our Zopfli algorithm, which produces smaller PNG and gzip files without needing to introduce a new format; and different from the techniques used in RNN-based image compression, RAISR, and WebP, which all need client changes for compression gains at internet scale.

    The visual quality of JPEG images is directly correlated to its multi-stage compression process: color space transform, discrete cosine transform, and quantization. Guetzli specifically targets the quantization stage in which the more visual quality loss is introduced, the smaller the resulting file. Guetzli strikes a balance between minimal loss and file size by employing a search algorithm that tries to overcome the difference between the psychovisual modeling of JPEG's format, and Guetzli’s psychovisual model, which approximates color perception and visual masking in a more thorough and detailed way than what is achievable by simpler color transforms and the discrete cosine transform. However, while Guetzli creates smaller image file sizes, the tradeoff is that these search algorithms take significantly longer to create compressed images than currently available methods.
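    The tradeoff at the quantization stage can be seen with a back-of-the-envelope sketch (plain Python on made-up coefficients, nothing Guetzli-specific): a coarser quantization step zeroes out more coefficients, and long runs of zeros entropy-code into a smaller file, at the price of larger reconstruction error.

```python
# Illustrative only: quantize a row of made-up DCT coefficients at two step
# sizes and compare zeros produced (compressibility) vs. reconstruction error.
coeffs = [120.0, -31.0, 14.5, -6.2, 3.1, -1.4, 0.8, -0.3]

def quantize(values, step):
    return [round(v / step) for v in values]

def dequantize(values, step):
    return [v * step for v in values]

def error(original, reconstructed):
    """Sum of squared differences, a crude stand-in for visual loss."""
    return sum((x - y) ** 2 for x, y in zip(original, reconstructed))

for step in (2, 16):
    q = quantize(coeffs, step)
    zeros = q.count(0)                        # more zeros -> smaller file
    err = error(coeffs, dequantize(q, step))  # but more quality loss
    print(step, zeros, round(err, 2))
```

    Guetzli's contribution is choosing where to spend those quantization bits using a detailed psychovisual model, rather than a fixed table.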

    Figure 1. 16x16 pixel synthetic example of a phone line hanging against a blue sky — traditionally a case where JPEG compression algorithms suffer from artifacts. Uncompressed original is on the left. Guetzli (on the right) shows fewer ringing artifacts than libjpeg (middle) and has a smaller file size.

    And while Guetzli produces smaller image file sizes without sacrificing quality, we additionally found that, in experiments where compressed image file sizes were kept constant, human raters consistently preferred the images Guetzli produced over libjpeg images, even when the libjpeg files were the same size or slightly larger. We think this makes the slower compression a worthy tradeoff.
    Figure 2. 20x24 pixel zoomed areas from a picture of a cat’s eye. Uncompressed original on the left. Guetzli (on the right) shows fewer ringing artifacts than libjpeg (middle) without requiring a larger file size.

    It is our hope that webmasters and graphic designers will find Guetzli useful and apply it to their photographic content, making users’ experience smoother on image-heavy websites in addition to reducing load times and bandwidth costs for mobile users. Lastly, we hope that the new explicitly psychovisual approach in Guetzli will inspire further image and video compression research.
    By Robert Obryk and Jyrki Alakuijala, Software Engineers, Google Research Europe
    Categories: Open Source

    Announcing Google Radio PHY Test, aka “Graphyte”, as part of the Chromium Project

    Fri, 03/17/2017 - 18:00
    With many different Chromebook models for sale from several different OEMs, the Chrome OS Factory team interfaces with many different Contract Manufacturers (CMs), Original Device Manufacturers (ODMs), and factory teams. The Google Chromium OS Factory Software Platform, a suite of factory tools provided to Chrome OS partners, allows any factory team to quickly bring up a Chrome OS manufacturing test line.

    Today, we are announcing that this platform has been extended to remove the friction of bringing up wireless verification test systems with a component called Google Radio PHY Test or “Graphyte.” Graphyte is an open source software framework that can be used and extended by the wireless ecosystem of chipset companies, test solution providers, and wireless device manufacturers, as opposed to the traditional approach of vendor-specific solutions. It is developed in Python and capable of running on Linux and Chrome OS test stations with an initial focus on Wi-Fi, Bluetooth, and 802.15.4 physical layer verification.

    Verifying that a wireless device is working properly requires chipset- and instrument-specific software which coordinate transmitting and measuring power and signal quality across channels, bandwidths, and data rates. Graphyte provides high-level API abstractions for controlling wireless chipsets and test instruments, allowing anyone to develop a “plugin” for a given chipset or instrument.
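    To make the plugin idea concrete, here is a hypothetical sketch (the class and method names below are illustrative, not Graphyte's actual API; see the Graphyte User Manual for the real interfaces): the framework codes against small abstract interfaces, and each chipset or instrument vendor supplies an implementation.

```python
# Hypothetical plugin shape: a device-under-test (DUT) driver and an
# instrument driver implement small abstract interfaces, and the framework
# coordinates them without knowing vendor specifics.
from abc import ABC, abstractmethod

class DutPlugin(ABC):
    """Driver for the wireless chipset under test (names are illustrative)."""
    @abstractmethod
    def tx_start(self, channel, power_dbm): ...
    @abstractmethod
    def tx_stop(self): ...

class InstrumentPlugin(ABC):
    """Driver for the measurement instrument."""
    @abstractmethod
    def measure_power(self, channel): ...

def run_tx_test(dut, inst, channel=6, power_dbm=10.0, tolerance_db=2.0):
    """Framework-side logic: coordinate DUT and instrument, judge the result."""
    dut.tx_start(channel, power_dbm)
    measured = inst.measure_power(channel)
    dut.tx_stop()
    return abs(measured - power_dbm) <= tolerance_db
```

    A vendor supporting a new chipset then only implements the DUT interface; every existing instrument plugin keeps working unchanged.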

    Graphyte architecture.
    We’ve worked closely with industry leaders like Intel and LitePoint to ensure the Graphyte APIs have the right level of abstraction, and it is already being used in production on multiple manufacturing lines and several different products.

    To get started, use Git to clone our repository. You can learn more by reading the Graphyte User Manual and checking out the example of how to use Graphyte in a real test. You can get involved by joining our mailing list. If you’d like to contribute, please follow the Chromium OS Developer Guide.

    To get started with the LitePoint Graphyte plugin, please contact LitePoint directly. To get started with the Intel Graphyte plugin, please contact Intel directly.

    Happy testing!

    By Kurt Williams, Technical Program Manager
    Categories: Open Source

    Getting ready for Google Summer of Code 2017

    Fri, 03/10/2017 - 17:48
    Spring is just around the corner here in the Northern Hemisphere and Google Summer of Code is fast approaching. If you are a student interested in participating this year, now is the time to prepare -- read on for tips on how to get ready.

    This year we’ve accepted 201 open source organizations into the program, nearly 40 of which are new to the program. The organizations cover a wide range of topics including (but certainly not limited to!):

    • Operating systems
    • Web application frameworks
    • Healthcare and bioinformatics
    • Music and graphic design
    • Machine learning
    • Robotics
    • Security

    How should you prepare for Google Summer of Code?

    While student applications don’t open until March 20th at 16:00 UTC, you need to decide which projects you’re interested in and what you’ll propose. You should also communicate with those projects to learn more before you apply.

    Start by looking at the list of participating projects and organizations. You can explore by searching for specific names or technologies, or filtering by topics you are interested in. Follow the “Learn More” link through to each organization’s page for additional information.

    Once you’ve identified the organizations that you’re interested in, take a look at their ideas list to get a sense of the specific projects you could work on. Typically, you will choose a project from that list and write a proposal based on that idea, but you could also propose something that’s not on that list.

    You should reach out to the organizations after you’ve decided what you want to work on. Doing this can make the difference between a good application and a great application.

    Whatever you do, don’t wait until March 20th to begin preparing for Google Summer of Code! History has shown that students who reach out to organizations before the start of the application period have a higher chance of being accepted into the program, as they have had more time to talk to the organizations and understand what they are looking for with the project.

    If you have any questions along the way, take a look at the Student Manual, FAQ and Timeline. If you can’t find the answer to your question, try taking your question to the mailing list.

    By Josh Simmons, Open Source Programs Office
    Categories: Open Source

    Another option for file sharing

    Wed, 03/08/2017 - 17:59
    Originally posted on the Google Security Blog

    Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new open source project Upspin, we aim to improve the situation by providing a global name space to name all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.

    Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.

    File names begin with the user's email address followed by a slash-separated Unix-like path name:

    ann@example.com/dir/file.
    Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.

    If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,

    read: joe@here.com, mae@there.com
    allows Joe and Mae to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access is enforced end-to-end cryptographically, so cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.

    Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing. This is an open source contribution, not a Google product. We have not yet integrated with the Key Transparency server, though we expect to eventually, and for now use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.

    It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the GitHub repository at Upspin.

    By Andrew Gerrand, Eric Grosse, Rob Pike, Eduardo Pinheiro and Dave Presotto, Google Software Engineers
    Categories: Open Source

    By maintainers, for maintainers: Wontfix_Cabal

    Mon, 03/06/2017 - 19:00
    The Google Open Source Programs Office likes to highlight events we support, organize, or speak at. In this case, Google’s own Jess Frazelle was responsible for running a unique event for open source maintainers.

    This year I helped organize the inaugural Wontfix_Cabal. The conference was organized by open source software maintainers for open source software maintainers. Our initial concept was an unconference where attendees could discuss topics candidly with their peers from other open source communities.

    The idea for the event stemmed from the response to a blog post I published about closing pull requests. The response was overwhelming, with many maintainers commiserating and sharing lessons they had learned. It seemed like we could all learn a lot from our peers in other projects -- if we had the space to do so -- and it was clear that people needed a place to vent.

    Major thanks to Katrina Owen and Brandon Keepers from GitHub who jumped right in and provided the venue we needed to make this happen. Without their support this would’ve never become a reality!

    It was an excellent first event and the topics discussed were wide ranging, including:
    • How to deal with unmaintained projects
    • Collecting metrics to judge project health
    • Helping newcomers
    • Dealing with backlogs
    • Coping with, and minimizing, toxic behavior in our communities

    “Never have I seen so many open source maintainers in one place. Thanks @wontfix_, this is amazing.” — Gregor (@gr2m), February 15, 2017
    The discussion around helping newcomers focused on creating communities with welcoming and productive cultures right from the start. I was fascinated to learn that some projects pre-fill issues before going public so as to set the tone for the future of the project. Another good practice is clearly defining how one becomes a maintainer or gets commit access. There should be clear rules in place so people know what they have to do to succeed.

    Another discussion I really liked focused on “saying no.” Close fast and close early was a key takeaway. There’s no sense in letting a contribution sit waiting when you know it will never be accepted. Multiple projects found that having a bot give the hard news was always better than having the maintainer do it. This way it is not personal, just a regular part of the process.

    One theme seen in multiple sessions: “Being kind is not the same as being nice.” The distinction here is that being nice comes from a place of fear and leads people to bend over backwards just to please. Being kind comes from a place of strength, from doing the right thing.

    Summaries of many of the discussions have been added to the GitHub repo if you would like to read more.

    After the event concluded many maintainers got right to work, putting what they had learned into practice. For instance, Rust got help from the Google open source fuzzing team.

    “Flurry of internal emails following up on ideas from @wontfix_: all sent! Now it's time to start on some PRs.” — Rainer Sigwald (@Tashkant), February 22, 2017
    Our goal was to put together a community of maintainers that could support and learn from each other. When I saw Linux kernel maintainers talking to people who work on Node and JavaScript, I knew we had achieved that goal. Laura Abbott, one of those kernel developers, wrote a blog post about the experience.

    Not only was the event useful, it was also a lot of fun. Meeting maintainers, people who care a great deal about open source software, from such a diverse group of projects was great. Overall, I think our initial run was a success! Follow us on Twitter to find out about future events.

    By Jess Frazelle, Software Engineer
    Categories: Open Source

    Introducing Python Fire, a library for automatically generating command line interfaces

    Thu, 03/02/2017 - 19:00
    Today we are pleased to announce the open-sourcing of Python Fire. Python Fire generates command line interfaces (CLIs) from any Python code. Simply call the Fire function in any Python program to automatically turn that program into a CLI. The library is available from pypi via `pip install fire`, and the source is available on GitHub.

    Python Fire will automatically turn your code into a CLI without you needing to do any additional work. You don't have to define arguments, set up help information, or write a main function that defines how your code is run. Instead, you simply call the `Fire` function from your main module, and Python Fire takes care of the rest. It uses inspection to turn whatever Python object you give it -- whether it's a class, an object, a dictionary, a function, or even a whole module -- into a command line interface, complete with tab completion and documentation, and the CLI will stay up-to-date even as the code changes.
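    A toy sketch of that inspection idea (this is not Fire's implementation; the real library also handles type coercion, nested objects, flags, help text, and tab completion): given an object and an argument list, look the command up as an attribute and call it.

```python
# Toy dispatch: treat argv[0] as a method name on the object and the
# remaining arguments as its positional parameters.
class Greeter(object):
    def hello(self, name='world'):
        return 'Hello {name}!'.format(name=name)

def dispatch(obj, argv):
    """Resolve argv against obj via introspection and invoke the result."""
    command, args = argv[0], argv[1:]
    return getattr(obj, command)(*args)

print(dispatch(Greeter(), ['hello']))           # Hello world!
print(dispatch(Greeter(), ['hello', 'David']))  # Hello David!
```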

    To illustrate this, let's look at a simple example.

    #!/usr/bin/env python
    import fire

    class Example(object):
      def hello(self, name='world'):
        """Says hello to the specified name."""
        return 'Hello {name}!'.format(name=name)

    def main():
      fire.Fire(Example)

    if __name__ == '__main__':
      main()

    When the Fire function is run, our command will be executed. Just by calling Fire, we can now use the Example class as if it were a command line utility.

    $ ./example.py hello
    Hello world!
    $ ./example.py hello David
    Hello David!
    $ ./example.py hello --name=Google
    Hello Google!

    Of course, you can continue to use this module like an ordinary Python library, enabling you to use the exact same code both from Bash and Python. If you're writing a Python library, then you no longer need to update your main method or client when experimenting with it; instead you can simply run the piece of your library that you're experimenting with from the command line. Even as the library changes, the command line tool stays up to date.

    At Google, engineers use Python Fire to generate command line tools from Python libraries. We have an image manipulation tool built by using Fire with the Python Imaging Library, PIL. In Google Brain, we use an experiment management tool built with Fire, allowing us to manage experiments equally well from Python or from Bash.

    Every Fire CLI comes with an interactive mode. Run the CLI with the `--interactive` flag to launch an IPython REPL with the result of your command, as well as other useful variables already defined and ready to use. Be sure to check out Python Fire's documentation for more on this and the other useful features Fire provides.

    Between Python Fire's simplicity, generality, and power, we hope you find it a useful library for your own projects.

    By David Bieber, Software Engineer on Google Brain
    Categories: Open Source