Opinion

AI at Google: our principles


By Sundar Pichai, CEO, Google

At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

Beyond our products, we’re using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.

We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

· Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use

· Nature and uniqueness: whether we are making available technology that is unique or more generally available

· Scale: whether the use of this technology will have significant impact

· Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

AI for the long term

While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.

We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders’ Letter back in 2004. There we made clear our intention to take a long-term perspective, even if it means making short-term trade-offs. We said it then, and we believe it now.


Opinion

Business benefits of deploying a Hadoop-based data lake


By Deepak Jha

Organizations weighing an investment in new-generation data processing solutions and technologies are often caught in a trade-off between the benefits and the challenges of managing and analysing data. The success stories of Hadoop-based big data implementations by industry giants across the globe, however, come to their rescue and open up avenues for organizations that strive to reach the summit of their respective industries through data-backed decision making.

A Hadoop-based big data setup, often termed a data lake, offers the capability to process petabyte-scale volumes of disparate datasets at the right speed and within the stipulated timeframe. The data lake has emerged as a powerful data architecture that is gaining momentum across organizations today. It leverages the economics of big data, with phenomenal data storage, processing, and analytics capabilities that help companies address their business challenges. By definition, a data lake is a centralized repository that allows an organization to store structured as well as unstructured data at any scale in its native format, and that supports on-the-fly processing of that data for exploration, analytics, and operations.
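
To make the "store in native format, process on the fly" idea concrete, here is a minimal PySpark sketch; the HDFS paths and column names are illustrative assumptions rather than a prescribed layout.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-demo").getOrCreate()

# Raw events were landed on HDFS in their native JSON form; no schema was
# imposed at write time (schema-on-read), so the structure is discovered
# only when the data is read for analysis.
raw_events = spark.read.json("hdfs:///datalake/raw/web_events/")  # hypothetical path

raw_events.printSchema()

# On-the-fly exploration: aggregate without any prior modelling step
# (assumes an event_date field exists in the raw JSON).
raw_events.groupBy("event_date").count().show()
```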

The main reason data lakes are being adopted by organizations at such scale is that they provide storage for both structured and unstructured data, which is critical for advanced analytics such as predictive and discovery-oriented analytics. A data lake also empowers self-service data practices and facilitates the sharing of data and analytics best practices across all lines of business in the organization.

Business benefits

Scale As Per Business Demands

A data lake can scale out or in, growing to any extent by adding nodes to match compute and storage capacity to the growth in data. The Hadoop Distributed File System (HDFS) can accommodate arbitrarily large volumes of data, supports simultaneous reads and writes across multiple data stores, and allows new data stores to be added and integrated invisibly to end users.
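
As a small illustration of that transparency, the sketch below (assuming the standard `hdfs` command-line client is installed and configured for the cluster) simply reads the cluster report; the "Configured Capacity" and "Live datanodes" figures grow as nodes are commissioned, while application code and HDFS paths stay unchanged.

```python
import subprocess

# Run the standard HDFS admin report; assumes the hdfs CLI is on the PATH
# and points at the target cluster.
report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

# Surface only the cluster-wide summary lines that reflect scale-out.
for line in report.splitlines():
    if line.strip().startswith(("Configured Capacity", "DFS Remaining", "Live datanodes")):
        print(line.strip())
```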

Ingest Data of Any Size and Format

A data lake platform can process any kind of data from disparate sources, in any form and size. It helps organizations explore and connect the dots across different data sets for better and more accurate insights. It can process multilingual data and can address a well-defined problem statement by identifying and processing the relevant industry-specific datasets.
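
A hedged sketch of that multi-format ingestion with PySpark; the file paths, formats, and join keys are assumptions chosen for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("multi-format-ingest").getOrCreate()

# The same lake holds CSV exports, JSON event streams and Parquet extracts
# side by side; each is read with the appropriate reader.
orders   = spark.read.csv("hdfs:///datalake/raw/orders.csv",
                          header=True, inferSchema=True)
clicks   = spark.read.json("hdfs:///datalake/raw/clickstream/")
products = spark.read.parquet("hdfs:///datalake/raw/products/")

# "Connecting the dots": join transactional and behavioural data that
# originated in completely different source systems.
enriched = (orders
            .join(products, "product_id")
            .join(clicks, "customer_id", "left"))

enriched.write.mode("overwrite").parquet("hdfs:///datalake/curated/enriched_orders/")
```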

Support Schema-less Storage of Data

Unlike traditional databases, a Hadoop-powered data lake can embrace all forms of data, be it schema-based data from a traditional database or schema-less data stored in a NoSQL store. It helps organizations run different kinds of analytics and eliminates the need for a specific schema up front.

Enable Advanced Analytics

Using advanced modelling techniques such as machine learning and predictive analytics, a data lake helps extract actionable intelligence that drives better business decisions, helping organizations develop new products and services while improving current business processes. With advanced analytics, organizations can segment and structure customer data based on behaviour, preferences, needs, and sentiment, and discover historical and emerging patterns.
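
As one illustration, the following PySpark MLlib sketch clusters customers into behavioural segments; the curated table and feature columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("customer-segmentation").getOrCreate()

# Hypothetical curated table of per-customer behavioural features.
customers = spark.read.parquet("hdfs:///datalake/curated/customer_features/")

# Combine numeric behaviour and preference columns into one feature vector.
assembler = VectorAssembler(
    inputCols=["visits_per_month", "avg_basket_value", "days_since_last_order"],
    outputCol="features",
)
features = assembler.transform(customers)

# Cluster customers into four behavioural segments.
model = KMeans(k=4, featuresCol="features", predictionCol="segment").fit(features)
segments = model.transform(features)
segments.groupBy("segment").count().show()
```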

Better Quality of Data

The task of ingesting and processing petabytes of seemingly random and disconnected data is matched by a data lake’s capability to build and maintain the quality of the ingested data, ensuring that data elements are represented in the same way across different data stores and to different user groups.
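
A minimal sketch of the kind of consistency checks that support this, assuming a hypothetical customer dataset with a customer_id business key.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

customers = spark.read.parquet("hdfs:///datalake/raw/customers/")  # illustrative path

# Null counts per column: a quick signal that a source system has started
# sending records with missing or renamed fields.
null_counts = customers.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in customers.columns]
)
null_counts.show()

# Duplicate business keys across ingests should be caught before the data
# is promoted to a curated zone.
duplicate_keys = customers.groupBy("customer_id").count().filter("count > 1")
print("duplicate customer_id values:", duplicate_keys.count())
```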

Empower Data Democratization

Removing data silos and providing easy access to data is essential. A Hadoop-based data lake creates a democratic data structure in which every authorized user has access to the data sets essential for their needs. It fosters a collaborative environment in which data can be retrieved instantly for reporting and insight generation, significantly reducing turnaround time and creating more opportunities for analysis.

Data Zoning

Data zoning gives data scientists as well as front-end applications defined, quick access to the relevant datasets. Multiple zones can be created in a data lake, such as Landing, Staging, Sandbox, and Curated, with each zone holding data at a particular stage of ingestion, transformation, or exploration. By implementing these zones properly, an organization can ensure data quality while retaining the ability to ingest new sources of data quickly and easily, and can offer datasets tailored to the requirements of different user groups.
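
A sketch of promoting data between two of those zones with PySpark, using assumed paths and field names that follow the Landing/Curated convention described above.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("zone-promotion").getOrCreate()

LANDING = "hdfs:///datalake/landing/sensor_readings/"   # data exactly as ingested
CURATED = "hdfs:///datalake/curated/sensor_readings/"   # analytics-ready copy

raw = spark.read.json(LANDING)

# Staging-style transformation: standardise types, drop incomplete and
# duplicated readings before promotion.
clean = (raw
         .withColumn("reading_ts", F.to_timestamp("reading_ts"))
         .dropna(subset=["sensor_id", "reading_ts"])
         .dropDuplicates(["sensor_id", "reading_ts"]))

# Curated zone: partitioned for quick access by data scientists and
# front-end applications.
clean.write.mode("overwrite").partitionBy("sensor_id").parquet(CURATED)
```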

A Hadoop-based data lake also helps in…

Modernizing Data Warehouses

A Hadoop-based data lake can work in a hybrid environment with a data warehouse, thereby prolonging the life and potential of a data warehouse.

By augmenting a data warehouse with a data lake, organizations can use whichever technology is the best fit for analysing their ever-increasing datasets. The data warehouse can be used for reporting on dimensional data, while the data lake handles unstructured and streaming data for predictive and real-time analytics, which yields substantial cost savings. Additionally, it gives management the right set of insights to make informed decisions.
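
A hedged sketch of that division of labour: the raw clickstream stays in the lake, and only an aggregated, report-friendly summary is pushed to the warehouse over JDBC. The connection details, table names, and columns are illustrative, and the appropriate JDBC driver is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-to-warehouse").getOrCreate()

# Detailed, high-volume clickstream remains in the data lake.
clicks = spark.read.parquet("hdfs:///datalake/curated/clickstream/")

# Only the dimensional summary needed for BI reporting goes to the warehouse.
daily_page_views = (clicks
                    .groupBy("event_date", "page")
                    .agg(F.count(F.lit(1)).alias("page_views")))

(daily_page_views.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://warehouse.example.com:5432/analytics")
    .option("dbtable", "daily_page_views")
    .option("user", "etl_user")
    .option("password", "***")          # use a secrets manager in practice
    .mode("append")
    .save())
```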

Setting the Path Right

A Hadoop-based data lake brings a plethora of benefits. It is formidable competition for traditional data processing technologies, yet it also offers the option of integrating with them and enhancing their capabilities at marginal cost. A smart combination of a data lake and existing data processing technology gives an organization the opportunity to exploit insights in new and potentially game-changing ways.

(The writer is Deputy General Manager, AIPF (Artificial Intelligence Platform), NEC Technologies India.)

 


Opinion

Jio’s tsunami terrorizes big-time telcos


By Kshitiz Verma

The telecom sector has seen tremendous ups and downs in the last two years following the entry of Mukesh Ambani-led Reliance Jio into the market, reducing the number of service providers from ten to just four.

The industry has undergone a transformation of sorts, with smaller players falling by the wayside amid narrowing margins as Jio has gone from zero to more than 200 million subscribers, all of them on a nationwide 4G network, since 2016. While this user growth has come at the expense of smaller rivals who’ve merged or quit the market, Jio’s thrust into the country’s No. 3 spot for wireless carriers has also shrunk Bharti Airtel Ltd. and Idea Cellular Ltd.’s profit share.

With profits falling and the market shrinking thanks to Jio’s tariff plans, Idea Cellular decided to merge with Vodafone Group Plc’s India unit in August 2018, creating India’s largest telco by number of subscribers (422 million) and overtaking Bharti Airtel (343 million subscribers) and Reliance Jio (252.3 million subscribers).

Reliance Jio Infocomm Limited, a wholly owned subsidiary of Reliance Industries, operates a huge Long-Term Evolution (LTE) mobile network. It is the only VoLTE operator in the country that provides a 4G network without any 2G or 3G network support.

Jio’s services were commercially launched on 5 September 2016. After acquiring 16 million subscribers within its first month, Jio has grown immensely, both in subscribers and in new services launched, offered its services completely free for a considerable period, and hastened the exit of laggards such as RCOM, Aircel, Telenor and Tata Teleservices.

Jio, with investments totalling more than Rs 2.5 lakh crore, seems better placed financially, as it carries debt of Rs 80,000 crore, while Vodafone Idea has a debt of Rs 1.20 lakh crore on its books. Bharti Airtel has a debt of over Rs 1.13 lakh crore.

Recently, Jio reported a 65 per cent increase in its standalone net profit for the October–December 2018 period.

Its standalone net profit stood at Rs 831 crore in the third quarter of the financial year 2018–2019, against Rs 504 crore reported in October–December 2017–2018, the company said.

The company’s operating revenue during the period under review stood at Rs 10,383 crore, 50.9 per cent higher than Rs 6,879 crore earned during the corresponding period of the last financial year.

Its subscriber base as of 31 December 2018 was 280.01 million. Earnings per subscriber moderated marginally to Rs 130 per month from Rs 131.7 previously.

‘Jio has sustained its pace of underlying subscriber additions with net addition during the quarter of 27.9 million (as against previous four-quarter average of 28.4 million)’, said the statement.

With Jio coming into play, data costs and call costs have fallen drastically. This is one of the primary reasons it attracted millions of customers in just a few months.

After the introduction of Jio, other telecom companies also had to reduce tariffs and find ways to improve efficiency. This is a big plus for the telecom industry as well as for consumers.

As Jio services support 4G networks only, the demand for 4G-enabled smartphones has also increased. According to data from IDC and Morgan Stanley Research, 95 per cent of the smartphones sold in the country in the first quarter after Jio’s launch were 4G-capable.

The launch of the Jio Phone was the next big shock for the telecom industry. Its 4G VoLTE feature, along with free calls, web-surfing support and more, made it another success. According to the company, the phone received over 6 million pre-booking requests in one day.

Jio’s strategies to capture the Indian market

Strategies such as affordable smartphones and data services, and easy access to rich content and applications, have enabled Jio to build an integrated business strategy from the very start. Today, Jio is equipped to offer an exceptional mix of telecom, high-speed data, digital commerce, media and payment services.

Market Disruption: Jio captured the market by initially letting customers use as much data as they liked, later capping usage at 1 GB per day. This strategy pulled in 100 million subscribers within 170 days.

Pricing Disruption: Reliance Jio’s affordable pricing across all plans, with free voice calls, free roaming and 100 SMS per day, really clicked with customers. Before Jio’s launch, most service providers were charging approximately Rs 190 for 1 GB of data; Reliance Jio made that pricing affordable.

Pricing behaviour in the telecom sector has shifted from a typical monthly package to a variety of flexible plans tailored to the end customer. The introduction of Reliance Jio changed spending patterns and the frequency of recharges across the industry, and it has opened up a new data-driven market by changing customer perceptions and making data services a commodity.

Having achieved many successes in a short period, such as India’s highest average 4G download speed of about 20.3 Mbps, an average upload speed of about 4.4 Mbps and higher call ratings while travelling compared with other operators, it can reasonably be concluded that Jio’s strategies in marketing, pricing, operations, distribution and capacity management have succeeded in disrupting the Indian telecom market and achieving commercial success in a short period of time.

The author is Global CTO at Olialia World.

 


5G

5G takeaways for telecom operators from the Mobile Video Industry Council


By Indranil Chatterjee, Sr. Vice President, Openwave Mobility

Mobile video is growing at a phenomenal rate, but it is a double-edged sword. Openwave Mobility’s research on live operator networks found that video currently accounts for 58% of traffic by volume worldwide. Subscribers love it, but monetizing it is easier said than done, given the flood of HD content and the encryption that is adversely impacting Quality of Experience (QoE). That was the clear message from operators across the board who attended the MViC.

The Council in fact debated the conflicting components of video QoE, i.e. quality of delivery (reduced buffering) versus quality of picture (resolution), and their implications. Why does this matter? Because our research found that when subscribers experience poor quality while streaming video, they blame the operator, not the OTT provider, and it is only a matter of time before they churn. With more video traversing mobile networks than ever before, QoE is a major headache.

So, what were some of the key 5G takeaways for operators from the MViC?

1.  The how and why of mobile video

Interestingly, most operators experienced growth in mobile video during 4G – from 2010 to 2015 – and it came as a result of increased video watch times. Since 2015, however, growth in mobile video has come largely from a move to higher-bandwidth HD content rather than from greater watch time alone. That is evidence of the growing popularity of HD content from the likes of Netflix, YouTube and Amazon Video. As operators prepare for the dawn of 5G, there is one sure-fire certainty: HD content (including 4K and soon 8K content), and therefore mobile video, will soar.

2.  Skyrocketing mobile video

The MViC forecast on the day that 90% of traffic on 5G will be mobile video. Dimitris Mavrakis, Research Director at ABI Research, also highlighted some key 5G mobile video insights during his MViC presentation, “5G Vision and Deployments”. Some operators have yet to fully monetize 4G, and they are already looking at 5G as an enterprise vertical enabler. According to Mavrakis, 5G will initially be used to improve the consumer user experience – and, surprise surprise, mobile video will spearhead this strategy.

In 2016, mobile video represented 48% of traffic and ABI Research predicts that 5G’s mobile video growth will accelerate in 2022. By 2025, video will reach 78% – and here’s the punch line: 40% will be 4K video that sucks up bandwidth.

3.  Way more encryption

Remember the fanfare when 4G was launched? 4G was all about mobility and connectivity. It propelled companies like Waze, Uber and Spotify. Then Edward Snowden happened. The shockwaves that sent encouraged many OTTs to jump at the opportunity to encrypt their data. The likes of Google and Facebook coated their data with secure protocols that prevent operators from managing the very data that travels on their own networks. Encryption is here to stay – and with 5G, it will intensify. 5G technologies will usher in a new wave of mobile video data – much more diverse than 4G’s.

4.  Way, way more data intensity

5G brings far more data-intensive services, and OTTs are already lining up to take advantage of immersive services such as Augmented Reality (AR) and Virtual Reality (VR). Augmented Reality can be 33x more data intensive than equivalent 480p video, and with it comes more encryption. And if that were not enough, OTT services are expected to have more subscribers than pay-TV when 5G becomes widespread.

5.  4G/LTE Networks will remain for years to come

While all the focus on 5G is well deserved, our customers are quick to point out that 4G/LTE networks are not going away anytime soon, as they will be critical to ensuring nationwide coverage. From an investment perspective, the focus for 4G networks will shift from new build-outs to maximizing the capacity of existing networks. This means tools such as RAN congestion-based video optimization will be critical in helping operators preserve good QoE and enhance 4G RAN capacity while they invest in the 5G network build-out.

New opportunities with 5G

The growth in 4G democratized mobile video. Thanks to a smartphone and a decent data connection, people can watch cute cat videos almost anywhere. A number of operators treated video like any other service. They didn’t consider it important enough for preferential treatment. Of course encryption did not help the cause either for many operators. 5G can change all that.

Armed with new optimization technology, 5G provides the opportunity for more granular service prioritization and network appropriation. If operators get their ducks in a row, 5G can indeed provide the impetus for the creation of a new video ecosystem.
