CEO Blog Archives - OpenText Blogs
https://blogs.opentext.com/category/ceo-blog/

Business at the Speed of AI
https://blogs.opentext.com/business-at-the-speed-of-ai/ | Tue, 01 Apr 2025

[Image: a stack of books with one upright on top, its cover showing a pixelated hand]

The next 10 years are going to change everything.

Business. Healthcare. Climate science. Energy. Internet. Life expectancy. Learning.

We are on the verge of discoveries in every major area of human experience. Companies in all fields are bringing new ideas to market and creating new roles for employees that are fundamentally changing and challenging our status quo. We are also on the verge of creating a new workforce, a limitless digital labor force with AI agents.

The internet changed everything. With AI, everything will change again.

This is why today I am excited to launch my latest book: Business at the Speed of AI. It’s an exploration of how the new digital workforce, composed of AI agents, will radically transform every aspect of work and elevate human potential.

I am a techno-optimist, and I see a truly awesome future on the horizon, powered by AI. Human longevity will increase dramatically, and so will our IQs. New discoveries for infinite energy will revolutionize industries and take the training wheels off AI. Everything that moves will be autonomous, with embedded AI. And AI tools will help climate innovators work on new solutions.

Our civilization is built on technology. 

Technology is the glory of human ambition and achievement, the fuel to our advancement, and the realization of our potential.

Technology is the great frontier, the great liberator of human potential. With AI, we are each a Jules Verne of our time.

Business at the Speed of AI also dives into topics that are central to how OpenText is innovating with AI—from a complete reimagining of knowledge work and software development, to transformations in cybersecurity, ITOps, supply chains, and much more.

OpenText Summits in London and Munich

I’ll be digging into some of the most exciting ideas from the book during my keynote at our OpenText Summits in London and Munich this week—and I’ll be doing some amazing demos of our AI technology in action. Today also marks the launch of our next-generation platform, Cloud Editions 25.2 (the culmination of our exciting Titanium X roadmap), with incredible new capabilities across our Business AI, Business Clouds, and Business Technology to propel our customers into the future of information management. Human potential and productivity will soar!

Stay tuned to my blog—in the coming weeks I’ll be posting a series based on Business at the Speed of AI, along with videos from my OpenText Summit keynote to further illuminate the topics. You won’t want to miss it! 

We are in the middle of a great transformation. How we step into this new reality will shape our industries, and our world, for years to come. The future is awesome. And it’s everyone’s to build. It’s time to #GO.

To read more, download Business at the Speed of AI now.

The post Business at the Speed of AI appeared first on OpenText Blogs.

OpenText World 2024—Information Reimagined
https://blogs.opentext.com/opentext-world-2024-information-reimagined/ | Wed, 16 Oct 2024

[Image: OpenText characters, a duck and a yeti, in flight outfits in front of the OpenText "OT" logo, with the words "OpenText World 2024" in the top left]

Information is the heartbeat of every organization.  

It flows through every process, every workflow, every innovation. It touches all roles and enables success at all levels of the organization. Without it, nothing would get done. It is the connective fabric that makes businesses, industries, and economies run.

But islands of disconnected data are an impediment to progress. To overcome this, organizations need to unlock the expressive power of integrated and secure information management, at scale and in the cloud.

We need information, reimagined.

Join us for OpenText World 2024 on November 18-21 in Las Vegas (or virtually, for event highlights), where we will explore how organizations can unleash the power of their information to redefine business for a new era. OpenText is creating the future with our next-generation technologies. Our Business Clouds, Business AI, and Cybersecurity will be top focuses at the conference. 

With over 150 sessions and speakers, attendees will learn from business leaders and world-class experts about how they are using technologies like AI to redefine their relationship with data, how to stay secure in an age of increasing cyber-attacks, and what’s needed to leverage next-generation cloud to elevate your business.

OpenText World has never been more important. You’ll hear about reimagined Knowledge Management, Customer Experience, Business Networks and Supply Chains, Digital Operations, Cybersecurity, and the Developer—all working across multi-cloud. You will also learn about our next-generation platform, Titanium X (Cloud Editions 25.2), with exciting breakthrough capabilities and more AI agents every 90 days.

Meet Our Speakers

We are bringing together industry experts and leaders from the world’s top companies to share how they are applying security, AI, and other breakthrough strategies to drive growth and accelerate innovation.

Venus Williams, legendary tennis champion, New York Times bestselling author, and serial entrepreneur will share her insights with us about everything from what drives her success, to the lessons she learned on and off the court, to how we can integrate our values into our business practices. I can’t wait for this conversation—it is going to be incredible!

In my keynote, I will discuss how OpenText, as a pioneer in information management, is building an exciting future for our customers with our next-generation Business Clouds, Business AI, and autonomous cloud platform. Our unique approach includes multi-cloud integration and unlocking the value of business data sets with GenAI. I will reveal the latest capabilities of our next-generation platform, Titanium X, and discuss cybersecurity. Most cyber-attacks start with human error, so it is time to let machines do the work to keep data safe.

I’ll also speak with Alok Daga, CIO Commercial & Corporate Banking at Bank of Montreal (BMO), about the bank’s incredible 18-year partnership with OpenText, and how BMO’s innovative approach to content management connects and streamlines diverse aspects of its business.

In his keynote, OpenText EVP & Chief Product Officer Muhi S. Majzoub will share the latest innovations in our Cloud Editions (CE) product launch, as well as the roadmap for how we are building AI-driven, secure cloud services to help organizations reimagine information. He will be joined by Shannon Bell, EVP, Chief Digital Officer at OpenText, who will give an inside look at how OpenText is using its own technology to drive innovation, and by Todd Cione, President, Worldwide Sales at OpenText, who will interview some of our amazing customers, including Bosch.

Let the Machines Do the Work

With information management reimagined, we can let the machines do the work and elevate human potential.

We have a once-in-a-generation chance to create the future—a future that is powered by information, cloud, and AI. We are going to see some truly amazing things in our lifetime.

I invite you to register now for what’s going to be a fantastic event that you won’t want to miss!

I look forward to reimagining information and building the future with you at OpenText World 2024!

The post OpenText World 2024—Information Reimagined appeared first on OpenText Blogs.

Welcome to Fiscal 2025 and the Launch of OpenText 3.0
https://blogs.opentext.com/welcome-to-fiscal-2025-and-the-launch-of-opentext-3-0/ | Wed, 03 Jul 2024

Dear OpenText Stakeholders:

Welcome to Fiscal 2025! It is an exciting time to be a technology company as information automation and AI continue to drive the future of business. 

At OpenText, we are acutely focused on creating the future through Information Management that elevates every individual and organization to be their best.

Over the next decade, by 2035, we could see inventions that fundamentally change the way we live and work:

  • Average life spans increase to 100
  • Brain power is boosted with AI and physical technology
  • Energy is solved with fusion
  • Desalination is commonplace
  • Information moves from our fingertips to intelligent AI that understands us
  • Anything that moves is autonomous

As part of that, we are designing and building the future of business to support three big industry trends: NextGen Autonomous Cloud, End-to-End Security, and AI for Humans.    

When we started this company, the first decade (OpenText 1.0) centered around content management with on-prem software. In the next decade (OpenText 2.0) we transitioned to information management in the hybrid cloud. 

Today, we are excited to announce the launch of OpenText 3.0 – Information Reimagined through the power of Cloud, Security, and AI. 

OpenText 3.0 is our three-year strategic plan.   

  • Our Vision: To be the best information management company in the world.

  • Our Belief: Information elevates every individual and organization to be their best.

  • Our Common Purpose: We sit at the center of connected ecosystems, the internet of clouds, and we play a critical role as our customers adopt cloud, security, and AI.

  • Our Differentiators: Putting customers first, expertise in information management, scaled go-to-market, and we care about each other.

  • Our Business Priorities: Customer success, market leadership, accelerated growth, expanded margins, and all being powered by our data.

  • All supported by our Core Values: Create the Future, Be Deserving of Trust, We NOT I, Raise the Bar and Own the Outcome.

To achieve the ambitions outlined in OpenText 3.0, our quarterly rhythm of innovations will focus on three primary areas:

In further support of OpenText 3.0, today we announced a Business Optimization Plan focused on: (1) placing the right talent in the right locations of our business, (2) funding growth and innovations, and (3) completing these objectives with higher productivity, lower cost, and expanded margin.

Through our Business Optimization Plan, we expect to reduce approximately 1,200 roles and reinvest in 800 new roles in Sales, Professional Services, and Engineering to support our growth and innovation plans. Combined, this is expected to reduce our annual expense by $150M, with the cost of the reduction approximately $60M.

In addition, earlier this year, we announced a $250M share repurchase program. As we disclosed in our monthly SEDI filings in May and June, we completed $150M of the program, and purchased and retired 5M shares. In August, we will provide an update on our share repurchase program for Fiscal 2025.

We are very excited about opportunities going forward to continue our growth and increase our market share by helping our customers transform. Along with our plans to pursue large margin expansion opportunities and execute on strong capital allocation, we are confident we will deliver significant long-term value for all our stakeholders.

We look forward to providing further updates on our business when we report our quarterly financial results in August.

Mark

The post Welcome to Fiscal 2025 and the Launch of OpenText 3.0 appeared first on OpenText Blogs.

OpenText Committed to Climate Innovation
https://blogs.opentext.com/opentext-committed-to-climate-innovation/ | Mon, 22 Apr 2024

As I return home from OpenText World Europe, I am feeling invigorated by the powerful conversations that occurred throughout the week. I am also feeling inspired by the incredible sights and experiences that come with traveling this beautiful world of ours and am once again reminded of the critical role we play in protecting it.

I read Before It’s Gone by Jonathan Vigliotti while traveling, and it is a story for every small town facing climate change—from fire, water, air, food, and earth.

Today, April 22, is Earth Day, an opportunity not only to celebrate just how extraordinary our world is, but also to reflect on the action we must all take to ensure a healthier planet and a brighter future. It is something we remain deeply committed to as an organization.

At OpenText, we believe that it is essential to understand the urgent environmental challenges and create a future that is sustainable and inclusive. Through the OpenText Zero-In Initiative, we have a Zero Footprint focus, working diligently to achieve our sustainability goals as a company, while helping our customers to do the same.

Essentially, how do you achieve maximum impact with the lightest touch to the environment?

At OpenText World Europe, I spoke extensively about the power of AI, the importance of adopting an AI mindset, and the new AI-powered innovations that we are proud to offer our customers. As I reflect today on Earth Day, I wholeheartedly believe that the revolutionary potential of AI can not only help us accelerate our Zero-In program, but can ultimately help reshape our world into a more sustainable one.

What we are building at OpenText impacts humanity and impacts the world. We believe that our products help to address environmental and societal challenges by bringing forth technologies that enable visibility and action. From the basics of digitization to what we can do with observability to anticipate the regulations to come, OpenText innovates with our customers’ sustainability needs in mind. We remain steadfast in our commitment to offering innovative climate solutions to help our customers unleash exponential innovation—through information, automation, and the cloud.

We are also partnering with our cloud partners to gain more energy efficiency and to seek more hydro- and wind-powered infrastructure.

The below illustrates some of the key sustainability wins stemming from OpenText products last year. I look forward to sharing more results like these in our upcoming Corporate Citizenship Report, which will be released in August.

By investing in innovative technologies that contribute to a net-zero future, we can help our customers move from pledge to progress. Companies like Method, Heineken, Sutter Health, and so many more are already reducing their footprint thanks to OpenText solutions—and their success is just the beginning as we continue to add new innovations to our portfolio.

Here are a few more of the products that are helping customers address environmental challenges while improving efficiency:

  • Our Cloud FinOps solution offers reporting for scope 2 and 3 emissions produced by both a customer’s cloud and owned data centers. This is the first step of our GreenOps solution to help customers reduce their IT carbon footprint.

  • OpenText Active Risk Monitor gives customers visibility into their supply chains, including a view into their suppliers’ ESG compliance details, which can support a shift towards more sustainable, ethical business practices.

  • OpenText Vertica runs on less hardware due to the optimization of products and data compression, resulting in a smaller carbon footprint.

  • With OpenText LoadRunner Cloud, each customer receives its own segregated tenant on a multi-tenant cloud platform, rather than running cycles on their own dedicated servers, resulting in less energy usage.

  • OpenText UFT Digital Lab allows developers to simulate in a software environment versus physical devices, which means less infrastructure, less power consumption, and ultimately, a smaller carbon footprint.

Our Path to Zero

As we continue our zero-in journey, it’s important that we recognize that the path to zero requires collective action—we all have an important role to play in understanding how our daily choices can have a lasting impact.

The great news is that OpenTexters are already doing tremendous work in helping us to zero in on our zero footprint goals, and I am pleased to share today that OpenText has recently been recognized as one of Canada’s Greenest Employers for the very first time. This achievement is a direct testament to the passion and dedication of our employees and comes on the heels of several impressive ESG-related accolades, including qualifying as a constituent on the Dow Jones Sustainability Index and receiving a “AAA” rating from MSCI.

To quote Bertrand Piccard during last week’s opening keynote: “The impossible does not exist in the reality, it exists only in the mindset of the people that believe that the future is going to be an extrapolation of the past—which, of course, is never the case. The future is unpredictable, uncertain, and it requires us to be creative, to be innovative, and to be pioneers.”

We need to keep challenging ourselves:

  1. How do we achieve maximum impact with the lightest touch to the environment?
  2. How can we build key and essential features for our customers to achieve the Path to Zero?
  3. And lastly, how can we lead as individuals, and take personal action for a healthier planet?

It is not what we leave behind, it is what we send forward. Happy Earth Day.

The post OpenText Committed to Climate Innovation appeared first on OpenText Blogs.

]]>

As I return home from OpenText World Europe, I am feeling invigorated by the powerful conversations that occurred throughout the week. I am also feeling inspired by the incredible sights and experiences that come with traveling this beautiful world of ours and am once again reminded of the critical role we play in protecting it.

I read Before It’s Gone by Jonathan Vigliotti while traveling, and it is a story for every small town facing climate change—from fire, water, air, food, and earth.

Today, April 22, is Earth Day, an opportunity to not only celebrate just how extraordinary our world is but reflect on the action we must all take to ensure a healthier planet, and a brighter future. Something that we remain deeply committed to as an organization.

At OpenText, we believe that it is essential to understand the urgent environmental challenges and create a future that is sustainable and inclusive. Through the OpenText Zero-In Initiative, we have a Zero Footprint focus, working diligently to achieve our sustainability goals as a company, while helping our customers to do the same.

Essentially, how do you achieve maximum impact with the lightest touch to the environment?

At OpenText World Europe, I spoke extensively about the power of AI, the importance of adopting an AI mindset, and the new AI-powered innovations that we are proud to offer our customers. As I reflect today on Earth Day, I whole-heartedly believe that the revolutionary potential of AI can not only help us to accelerate our Zero-In program but can ultimately help reshape our world into a more sustainable one.

What we are building at OpenText impacts humanity and impacts the world. We believe that our products help to address environmental and societal challenges by bringing forth technologies that enable visibility and action. From the basics of digitization to what we can do with observability to anticipate the regulations to come, OpenText innovates with our customers’ sustainability needs in mind. We remain steadfast in our commitment to offering innovative climate solutions to help our customers unleash exponential innovation—through information, automation, and the cloud.

We are also partnering with our cloud partners to gain more energy efficiency and to seek more hydro- and wind-powered infrastructure.

The below illustrates some of the key sustainability wins stemming from OpenText products last year. I look forward to sharing more results like these in our upcoming Corporate Citizenship Report, which will be released in August.

By investing in innovative technologies that contribute to a net-zero future, we can help our customers move from pledge to progress. Companies like Method, Heineken, Sutter Health, and so many more are already reducing their footprint thanks to OpenText solutions—and their success is just the beginning as we continue to add new innovations to our portfolio.

Here are a few more of the products that are helping customers address environmental challenges while improving efficiency:

  • Our Cloud FinOps solution offers reporting for scope 2 and 3 emissions produced by both a customer’s cloud and owned data centers. This is the first step of our GreenOps solution to help customers reduce their IT carbon footprint.
  • OpenText Active Risk Monitor gives customers visibility into their supply chains, including a view into their suppliers’ ESG compliance details, which can support a shift towards more sustainable, ethical business practices.
  • OpenText Vertica runs on less hardware thanks to product optimization and data compression, resulting in a smaller carbon footprint.
  • With OpenText LoadRunner Cloud, each customer receives its own segregated tenant on a multi-tenant cloud platform, rather than running cycles on their own dedicated servers, resulting in less energy usage.
  • OpenText UFT Digital Lab allows developers to simulate in a software environment versus physical devices, which means less infrastructure, less power consumption, and ultimately, a smaller carbon footprint.
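As an illustrative aside, the emissions reporting described in the first bullet comes down to multiplying each region's energy consumption by its grid carbon intensity. The sketch below is a minimal, hypothetical illustration of that arithmetic; the region names and intensity factors are placeholders, not OpenText or real grid data.

```python
# Illustrative sketch: estimating scope 2 emissions for cloud workloads as
# kWh consumed x regional grid carbon intensity. All factors below are
# hypothetical placeholders, not real OpenText or grid data.

REGION_KG_CO2E_PER_KWH = {  # hypothetical grid intensity factors (kg CO2e/kWh)
    "hydro-region": 0.02,   # low-carbon hydro-powered grid
    "wind-region": 0.03,    # low-carbon wind-powered grid
    "mixed-grid": 0.40,     # fossil-heavy mixed grid
}

def scope2_emissions_kg(kwh_by_region: dict[str, float]) -> float:
    """Sum kWh x grid intensity across regions to get total kg CO2e."""
    return sum(
        kwh * REGION_KG_CO2E_PER_KWH[region]
        for region, kwh in kwh_by_region.items()
    )

usage = {"hydro-region": 10_000, "mixed-grid": 2_500}
print(round(scope2_emissions_kg(usage), 1))  # ≈ 1200.0 kg CO2e
```

The toy numbers also illustrate why greener regions matter: the hydro-powered region handles four times the load of the mixed grid yet contributes a fraction of the emissions.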

Our Path to Zero

As we continue our Zero-In journey, it’s important to recognize that the path to zero requires collective action—we all have an important role to play in understanding how our daily choices can have a lasting impact.

The great news is that OpenTexters are already doing tremendous work in helping us to zero in on our zero footprint goals, and I am pleased to share today that OpenText has recently been recognized as one of Canada’s Greenest Employers for the very first time. This achievement is a direct testament to the passion and dedication of our employees and comes on the heels of several impressive ESG-related accolades, including qualifying as a constituent on the Dow Jones Sustainability Index and receiving an “AAA” rating from MSCI.

To quote Bertrand Piccard during last week’s opening keynote: “The impossible does not exist in the reality, it exists only in the mindset of the people that believe that the future is going to be an extrapolation of the past—which, of course, is never the case. The future is unpredictable, uncertain, and it requires us to be creative, to be innovative, and to be pioneers.”

We need to keep challenging ourselves:

  1. How do we achieve maximum impact with the lightest touch to the environment?
  2. How can we build key and essential features for our customers to achieve the Path to Zero?
  3. And lastly, how can we lead as individuals, and take personal action for a healthier planet?

It is not what we leave behind, it is what we send forward. Happy Earth Day.

The post OpenText Committed to Climate Innovation appeared first on OpenText Blogs.

The Future Needs You Today: A Conversation on AI & Decolonization with Karen Palmer https://blogs.opentext.com/the-future-needs-you-today-a-conversation-on-ai-amp-decolonization-with-karen-palmer/ Thu, 29 Feb 2024 23:59:11 +0000 https://blogs.opentext.com/?p=76794

AI is bringing us into a new epoch of human society—it is a force multiplier for human potential.

OpenText is about Information Management + Data + AI + Trust.

AI also reflects its creators. We are currently at a critical point with AI. This is our moment to build the future that we want to live in.

AI can carry implicit bias and perpetuate inequitable power structures. We are on a journey to hear from a wide variety of voices and learn new perspectives on how to build AI that is sustainable, ethical, and inclusive.

As part of our celebrations for Black History Month, I recently had an incredible conversation with Karen Palmer, Storyteller from the Future, Award-winning XR Artist, and TED Speaker who explores the implications of AI and technology on societal structures and inequality. Karen won the XR Experience Competition at South by Southwest (SXSW) 2023 with her most recent project, Consensus Gentium, designed to drive discussion about data privacy, unconscious biases, and the power of technology.

I am thankful to Karen for sharing her powerful and insightful ideas with OpenText, and exploring with us the idea of decolonizing AI. You can read highlights from our conversation below.

***

Mark: As humans, why do you think we need AI?

Karen: The most important aspect or characteristic of AI is efficiency and speed. So, not about accuracy off the bat. It's going to make things more efficient for you and it's going to make it quicker for you. That's a service they're providing for you.

It's all being driven around commerce, capitalism, and then the other side of it is surveillance. Like when Charles was crowned king, that was when they kicked in the most complicated and far-reaching AI system, facial recognition system, that's ever been in England.

You and I may think, “hey, do I need AI?” But we haven't got a choice in what's happening today, because it's being suggested to us.

My view on smart cities is that I really call them “surveillance cities,” because everything is sold to us as “it's going to be more efficient, speedy, and make our life more safe.” But what it does is that it brings in more measures of security.

So, for example, Robert Williams in Detroit was the first person arrested by the facial recognition system that got it wrong. That was a system called Project Greenlight, and it was presented to the city of Detroit and recommended because there was so much crime. That if they put this surveillance grid in there, it would be better for fighting crime, to keep them safer. And what happened is that it's now surveilling people and arresting people of color and they can't dismantle that system. It’s here now.

So we have to be very aware of what is being sold to us and how we would like to use it. And by “us,” I mean all people. It might impact people of color or black people or women or minorities first, but it's going to impact all of us eventually.

Mark: Thank you for sharing that. We’re here to challenge ourselves today. You used this expression, “chains of colonial algorithms,” and you also used a term, “decolonizing AI.” I’d love to hear your voice on what does that mean to you and what should we take away from that?

Karen: I've been looking at bias in AI since 2016-2017. But the deeper I go into it, the more I feel that maybe that term is a little bit of an understatement. That we really need to look at decolonizing AI and dismantling the colonial biases which are deeply embedded within these artificial intelligence systems—which are built on historical injustices and dominance and prejudices—and really enable different types of code to be brought to the forefront, such as Indigenous AI.

Let's create AI systems from an Indigenous perspective, from a different cultural lens, from an African lens, or Hawaiian lens, or a Māori lens. Not coming at it like, “okay, you’ve got to be diverse for the quota of diversity.” This will make systems better for everybody.

What about building solutions from us, the people? What would that look like? How would we actually go around decolonizing society? How would we go around decolonizing AI, and what would that look like? And that's my work that I'm embarking on.

Mark: So, to bring in wider data sets that express a full picture of society, is that another way to say it?

Karen: Yes! Holistic. Total. Authentic. Representative. Something which is reflective and authentic of the world in which we live.

Mark: OpenAI is in the news almost every day, and they announced recently their video generator called Sora. Some of the early imagery is phenomenal. Google recently announced that it is pausing image generation. Inaccurate historical context was coming out of Google.

I’d welcome your thoughts.

Karen: Let me just backtrack a little, with the writers’ strike in America. That happened because Hollywood and the studios were exploiting people's rights. Their data, their digital data, their digital identity.

Everybody is nervous about AI taking their jobs, wherever you are, whether you're a driver, whether you're an artist like myself. AI is reflective of society.

So with the writers’ strike, the studios tend to be quite exploitative of talent. What they were trying to implement through the contracts was also exploitative. So it’s very important for our society to reflect the best part of ourselves, because the Algorithm will automate whatever we train it.

Technology reinforces inequality. And when it does that, it's not a glitch. It's a signal that we need to redesign our systems to create a more equitable world. And so, as we’re moving forward in this ever-changing world to whatever role is being lost and whatever jobs are being discovered, it’s really important that it's a world which is accessible for all of us.

And in terms of Gemini AI, that was the other extreme of bias in AI, where it was too diverse. There were Nazis, where they generated images that were Asian women or Black men. So it wasn't historically accurate. So that's why they paused it.

There's got to be this middle ground—we've gone too far one way in terms of bias and too far one way regarding data sets in terms of diversity—to find something which is more representative.

And that again, is probably where the Indigenous AI and that decolonizing will create a bit more of an authentic representation.

Mark: Yeah, I don't know how one really regulates it or oversees it. Other than the market going, “good tool / bad tool.” Where is that authentic voice to say, “this whole market's moving in the right direction?”

Karen: That position of good or bad, that just comes down to perspective. That's why we're going to move into the age of perception and greater understanding. Because we're in a time now of real division, and we’ve got to understand that what you may deem good, someone else might deem bad.

And that's why, by democratizing more AI, more people can develop their own, more independent systems. So that people can have and code whatever they need to. They're not dependent on a body doing it. Like, say, Joe Biden, two weeks ago. They’ve announced this organization now that’s going to regulate AI. But we don't really know whose interest they're actually going to represent, because there's this history of governments and big business working together.

So that's why what's good for someone may not be good for you. It’s about us having a seat at the table of what's happening.

Mark: Look 5-10 years out in AI. Love to hear your view of how the next few years play out in the world of AI.

Karen Discusses Her View of What’s Next from the Perspective of a Time Traveler from the Future

***

Karen invites us to envision a future where we have already created the world we would like to live in, using technology. What does it look like? Now, work backwards. What steps do we need to take today to get there?

I was inspired by her words: “The future is not something that happens to us. It’s something which we build together.”

I believe AI will be a force multiplier for human potential. To realize this, AI must be combined with our capacity for compassion, justice, and ethical behavior—our humanity, in a nutshell. AI will herald a new era of prosperity if—and only if—we prioritize the humanist impact of technology. Let’s apply AI for the betterment of our world and use it to help us solve our greatest, most pressing challenges. Let’s use it to become more human, not less.

And never forget: the future needs you today.

Thank you, Karen Palmer.

The comments of Karen Palmer are her own and do not necessarily represent the views of Open Text Corporation or its employees.

The post The Future Needs You Today: A Conversation on AI & Decolonization with Karen Palmer appeared first on OpenText Blogs.


OpenText World 2023—Welcome to the AI Revolution https://blogs.opentext.com/opentext-world-2023-welcome-to-the-ai-revolution/ Thu, 19 Oct 2023 13:38:33 +0000 https://blogs.opentext.com/?p=74686

Welcome to the AI Revolution.

AI is not just a technology, it is a new ontology—for creativity, data, trust. No business or individual will be spared this new way of being.

At OpenText World 2023, we discussed our massively expanded mission around AI and information management. We showcased the incredible innovations available to our customers right now, the exciting capabilities coming soon, and how we are helping organizations pilot the AI journey ahead.

AI + Information Management

What makes great AI? Great information management!

Great AI needs great information management

The OpenText Cloud delivers information management + AI for powerful disruption. Our cloud is a data cloud and ingests vast amounts of information types—documents, video, voice, images, collaboration, records, archives, assets, cases, contracts and more. For organizations plotting their AI journey, OpenText helps you bring your data into one place, and layers in AI capabilities—such as decision support, risk management, automation and security.

We have been working in this AI arena for over a decade, with our foundational AI solutions in OpenText Magellan, IDOL, Vertica and more.

A decade+ of AI innovation

We aim to build on this expertise in profound ways. Let me be clear—generative AI is not our destination. It’s a waypoint. We believe the destination is artificial general intelligence (AGI). We intend to deliver key components leading to AGI—metadata vectorization, IoT, robotics, natural language processing, learning models, data trust and security.

This is a multi-year path. Computers and software have done calculations for us our entire lives and are now providing support for decision-making. But AGI will move beyond support, and actually make decisions for us, safely and ethically. This is a profound shift. We will experience a hundred years of progress in the next 10 years.

Introducing OpenText Aviator

The next leg in that journey starts now. Because great AI requires great information management, we are introducing OpenText Aviator, a full-stack suite of AI capabilities built into our business clouds.

Introducing OpenText Aviator

Here are just a few key components of Aviator that we were excited to reveal at OpenText World:

  • Aviator Platform delivers information digitalization, with deep support for multiple large language models, offering information visualization and automating decision support.
  • Aviator Thrust is a comprehensive portfolio of services, spanning data governance, information protection and security, risk management and compliance. It provides a composite API, ingesting and building information flows so organizations can unlock the power of AI on large data sets.
  • Aviator Search lets organizations, through natural language query, interact with cognitive engines and talk directly to their data, transforming action from days to minutes.
  • Aviator IoT enables organizations to manage assets with automated tagging, tracking and environment monitoring, for simple and instant access to information.
  • Aviator Information Orchestration builds AI into automation, managing information flows across applications, our business clouds, large language models, your operational and experience data, and the OpenText AI Platform. Just as we’ve elevated information orchestration, we aim to optimize AI orchestration.
  • Aviator Business Clouds deploy AI to reimagine work across every function, through IT Operations Aviator, DevOps Aviator, Experience Aviator, Content Aviator, Business Network Aviator and Cybersecurity Aviator.

To help our customers quickly achieve new AI capabilities, we are introducing OpenText Aviator Flight School. Give us up to one million documents, and we’ll upload them into our private cloud. We’ll apply Content Cloud with Content Aviator and Search Aviator, enable metadata, embeddings and vectors, and give you back a full language model, ready for your prompt tuning. We'll get you up and running in two weeks from start to finish!*
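The Flight School flow described above—documents in, metadata and embeddings computed, natural-language queries answered against the result—can be illustrated with a toy sketch. Everything below is hypothetical: a real pipeline would use learned embedding models and a vector store, and none of this reflects OpenText's actual implementation; simple term-frequency vectors and cosine similarity stand in for both.

```python
# Toy sketch of a "documents -> vectors -> natural-language search" flow.
# Bag-of-words Counters stand in for learned embeddings, and a plain dict
# stands in for a vector store; this is illustrative only.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Hypothetical stand-in for an embedding model: term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {  # hypothetical document corpus
    "invoice-policy": "supplier invoices are archived for seven years",
    "travel-policy": "employees book travel through the approved portal",
}
index = {doc_id: embed(text) for doc_id, text in docs.items()}  # "vector store"

# A natural-language query is embedded the same way, then matched by similarity.
query = embed("how long are invoices archived")
best = max(index, key=lambda doc_id: cosine(query, index[doc_id]))
print(best)  # invoice-policy
```

The design point is that query and documents live in the same vector space, so retrieval reduces to a nearest-neighbor search—which is also why the embedding step over the customer's documents is the heavy lift in a pipeline like the one described above.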

AI = Ethical AI

There is no difference between AI and ethical AI, there is only AI. In addition, history shows us that every technology has a dual use—we need to think about how we design tech as well as how we use it. One of our luminary speakers at OpenText World was Dr. Joy Buolamwini, AI Expert, Artist, Founder of the Algorithmic Justice League, and Author of Unmasking AI. Dr. Buolamwini urged businesses to proactively combat technological bias: “I truly do think that companies that invest in building fair-trade data and ethical AI pipelines are going to be the ones that win in the long term.”

I was also pleased to sit down with The Right Honourable Stephen J. Harper, 22nd Prime Minister of Canada. Mr. Harper sees a strong need for industry and government to work together to establish ethical frameworks for technology, and he underlined the importance of addressing online misinformation and democratizing knowledge.

My interview with Stephen Harper at OpenText World

At OpenText, we believe very deeply in ethical practices and outcomes as we write software—from the first click, the first prompt, the first line of code, values-based design must be at the center of the process. That has translated for us into our AI Bill of Obligations:

  • Transparency builds trust
  • There is no difference between AI and ethical AI
  • Your data is not our product
  • Respect intellectual property, images and likenesses
  • Security is essential
  • Dedicated to accurate, verifiable AI results
  • Promote the Common Good

OpenText is also proud to be the first signatory of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. This is our pledge to uphold equity, accountability, safety and other vital guiding principles.

Earn Your Wings

OpenText is dedicated to being your trusted partner on the AI journey. I’m reminded of this quote from C.S. Lewis, in Beyond Personality, written many decades ago:

“It’s not like teaching a horse to jump better and better but like turning a horse into a winged creature.”

This is what the AI Revolution can do for us. I’m excited to see how Aviator can help our customers transform their processes, their data and their strategies into winged creatures—so their businesses can truly soar.

To learn more about OpenText Aviator and our approach to Information Management + AI, check out opentext.ai.

*Subject to customer signing OpenText standard commercial terms and conditions.

The post OpenText World 2023—Welcome to the AI Revolution appeared first on OpenText Blogs.


Welcome to the AI Revolution.

AI is not just a technology, it is a new ontology—for creativity, data, trust. No business or individual will be spared this new way of being.

At OpenText World 2023, we discussed our massively expanded mission around AI and information management. We showcased the incredible innovations available to our customers right now, the exciting capabilities coming soon, and how we are helping organizations pilot the AI journey ahead.

AI + Information Management

What makes great AI? Great information management!

Great AI needs great information management

The OpenText Cloud delivers information management + AI for powerful disruption. Our cloud is a data cloud and ingests vast amounts of information types—documents, video, voice, images, collaboration, records, archives, assets, cases, contracts and more. For organizations plotting their AI journey, OpenText helps you bring your data into one place, and layers in AI capabilities—such as decision support, risk management, automation and security.

We have been working in this AI arena for over a decade, with our foundational AI solutions in OpenText Magellan, IDOL, Vertica and more.

A decade+ of AI innovation

We aim to build on this expertise in profound ways. Let me be clear—generative AI is not our destination. It’s a waypoint. We believe the destination is artificial general intelligence (AGI). We intend to deliver key components leading to AGI—metadata vectorization, IoT, robotics, natural language processing, learning models, data trust and security.

This is a multi-year path. Computers and software have done calculations for us our entire lives and are now providing support for decision-making. But AGI will move beyond support, and actually make decisions for us, safely and ethically. This is a profound shift. We will experience a hundred years of progress in the next 10 years.

Introducing OpenText Aviator

The next leg in that journey starts now. Because great AI requires great information management, we are introducing OpenText Aviator, a full stack suite of AI capabilities, built into our business clouds.

Introducing OpenText Aviator

Here are just a few key components of Aviator that we were excited to reveal at OpenText World:

  • Aviator Platform delivers information digitalization, with deep support for multiple large language models, offering information visualization and automating decision support.
  • Aviator Thrust is a comprehensive portfolio of services, spanning data governance, information protection and security, risk management and compliance. It provides a composite API, ingesting and building information flows so organizations can unlock the power of AI on large data sets.
  • Aviator Search lets organizations, through natural language query, interact with cognitive engines and talk directly to their data, transforming action from days to minutes.
  • Aviator IoT enables organizations to manage assets with automated tagging, tracking and environment monitoring, for simple and instant access to information.
  • Aviator Information Orchestration builds AI into automation, managing information flows across applications, our business clouds, large language models, your operational and experience data, and the OpenText AI Platform. Just as we’ve elevated information orchestration, we aim to optimize AI orchestration.
  • Aviator Business Clouds deploy AI to reimagine work across every function, through IT Operations Aviator, DevOps Aviator, Experience Aviator, Content Aviator, Business Network Aviator and Cybersecurity Aviator.

To help our customers quickly achieve new AI capabilities, we are introducing OpenText Aviator Flight School. Give us up to one million documents, and we’ll upload them into our private cloud. We’ll apply Content Cloud with Content Aviator and Search Aviator, enable metadata, embeddings and vectors, and give you back a full language model, ready for your prompt tuning. We'll get you up and running in two weeks from start to finish!*

AI = Ethical AI

There is no difference between AI and ethical AI, there is only AI. In addition, history shows us that every technology has a dual use—we need to think about how we design tech as well as how we use it. One of our luminary speakers at OpenText World was Dr. Joy Buolamwini, AI Expert, Artist, and Founder of the Algorithmic Justice League, and Author of Unmasking AI. Dr. Buolamwini urged businesses to proactively combat technological bias: “I truly do think that companies that invest in building fair-trade data and ethical AI pipelines are going to be the ones that win in the long term.”

I was also pleased to sit down with The Right Honourable Stephen J. Harper, 22nd Prime Minister of Canada. Mr. Harper sees a strong need for industry and government to work together to establish ethical frameworks for technology, and he underlined the importance of addressing online misinformation and democratizing knowledge.

My interview with Stephen Harper at OpenText World

At OpenText, we believe very deeply in ethical practices and outcomes as we write software. From the first click, the first prompt and the first line of code, values-based design must be at the center of the process. That has translated for us into our AI Bill of Obligations:

  • Transparency builds trust
  • There is no difference between AI and ethical AI
  • Your data is not our product
  • Respect intellectual property, images and likenesses
  • Security is essential
  • Dedicated to accurate, verifiable AI results
  • Promote the Common Good

OpenText is also proud to be the first signatory of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. This is our pledge to uphold equity, accountability, safety and other vital guiding principles.

Earn Your Wings

OpenText is dedicated to being your trusted partner on the AI journey. I’m reminded of this quote from C.S. Lewis, in Beyond Personality, written many decades ago:

“It’s not like teaching a horse to jump better and better but like turning a horse into a winged creature.”

This is what the AI Revolution can do for us. I’m excited to see how Aviator can help our customers transform their processes, their data and their strategies into winged creatures—so their businesses can truly soar.

To learn more about OpenText Aviator and our approach to Information Management + AI, check out opentext.ai.

*Subject to customer signing OpenText standard commercial terms and conditions.

The post OpenText World 2023—Welcome to the AI Revolution appeared first on OpenText Blogs.

OpenText World 2023—AI and Its Forces of Change https://blogs.opentext.com/opentext-world-2023-ai-and-its-forces-of-change/ Tue, 26 Sep 2023 16:48:08 +0000 https://blogs.opentext.com/?p=74022

AI will change everything.

We are in the midst of a massive shift from the cloud digital era to the AI cognitive era. Automation, data and learning models are coming together to work seamlessly. AI will require a whole new ontology, a new way of thinking, all centered on how we innovate and how we create.

The world is moving fast, and the organizations that will lead are making the AI pivot now.

Join us for OpenText World 2023 on October 11-12 in Las Vegas, where we will be focusing on the future of AI, and its forces of change on commerce, climate, biotech and the very nature of work itself.

With more than 150 sessions and 100 speakers, attendees will learn from world-leading experts and business leaders about how they are evolving for the future of AI, develop skills in hands-on labs, and network with partners and peers. Plus, you will have the opportunity to explore various use cases, demo OpenText solutions, and let us help you find the foundational tools and platforms you need to scale AI-led processes for your business.

You’ll learn about opentext.ai, our strategy for helping our customers unlock the potential of AI and large language models to solve their most complex problems. Customers will be able to explore our latest solutions for powering and protecting information, including our new product line, OpenText Aviator. OpenText Aviator is our extraordinary suite of AI innovations, which we’re embedding across all our major solutions and services.

You’ll also learn how OpenText is reimagining our own software development processes, and our commitment to sustainability, data privacy and trust.

Meet Our Luminary Speakers

Headshots of key speakers from OpenText World 2023 with their names and titles
A selection of OpenText World 2023 Luminary Speakers

We are bringing together industry experts to share their views on AI and the most important issues of our time.

In my keynote, I will delve into the opportunities offered by generative AI, automation, the cloud, the Internet of Things and the emergent technologies that are supercharging business. I will also showcase opentext.ai and reveal OpenText Aviator, our new suite of AI innovations. The world is evolving, and it’s time for every organization to begin their AI journey.

The keynote from OpenText EVP & Chief Product Officer Muhi S. Majzoub will showcase highlights of OpenText’s Titanium X product roadmap, designed to help customers take advantage of new opportunities with information management. From cloud acceleration to cybersecurity, Muhi will talk about OpenText’s upcoming innovations and advanced technologies that will help customers stay competitive in today’s digital landscape. He will also discuss OpenText’s unique approach to software co-innovation, building integrated solutions with SAP, Microsoft, Google, Salesforce, AWS and others.

Dr. Joy Buolamwini—AI expert, artist, activist and Founder of the Algorithmic Justice League—will speak about Design Justice and Values-Based Innovation in a software process world.

David Wallace-Wells, author of the instant New York Times best-seller, The Uninhabitable Earth, will explore the future of our climate and our impact on the planet.

Former UN Chief Economist & Assistant Secretary-General for Economic Development, Elliott Harris, joins us with an illuminating talk on the UN Sustainable Development Goals.

Vivek Wadhwa, futurist, author and emerging technologies expert, is going to go deep into the incredible value AI is creating and how to balance it with real-world risks.

And we have many more speakers, including Career and Workplace Expert, and New York Times best-selling author Lindsey Pollak; President of the Foreign Policy Research Institute, Carol Rollie Flynn; Group VP at IDC, Simon Ellis; and Founder, Chairman and Principal Analyst of Constellation Research, R “Ray” Wang.

The Future Is Human + AI

AI will shape our future and what it means to be human. It will elevate human capabilities, empowering us to reach heights we never thought possible.

OpenText will be a trusted partner on our customers’ AI journey. Now is the time to lean in and learn, so businesses can navigate the new tech era, soar beyond barriers and seize the opportunities on the horizon.

I invite you to register now for what’s going to be an absolutely incredible event—our best OpenText World ever!

I look forward to exploring the future of AI and its forces of change with you at OpenText World 2023!

The post OpenText World 2023—AI and Its Forces of Change appeared first on OpenText Blogs.

Everything Will Change: A Conversation on Ethical AI with Dr. Sasha Luccioni https://blogs.opentext.com/everything-will-change-a-conversation-on-ethical-ai-with-dr-sasha-luccioni/ Tue, 19 Sep 2023 14:05:51 +0000 https://blogs.opentext.com/?p=73740

We are in the midst of a massive platform shift—from digital to cognitive. I once wrote that the internet changed everything. But with AI, everything will change. Every role. Every organization. Every industry.

At OpenText, we are embracing this change by continuing to learn and raise the bar. As part of this journey, we invited Dr. Sasha Luccioni, Research Scientist and Climate Lead at Hugging Face, and one of MIT Technology Review’s “35 Innovators Under 35,” to speak with OpenTexters about one of the most crucial topics of our time—ethical AI.

You can read some of Dr. Luccioni’s extraordinary insights below, including how we can create more diversity in the field of machine learning, and why she chose not to sign the open letter calling for a pause on AI research. She also revealed how companies have an opportunity to take strong action at the crossroads of AI and climate change.

My warmest thanks to Dr. Luccioni for sharing her clear and powerful point of view.

* * *

Mark: What does it mean to be a computer scientist today, in the world of AI, automation and language models? I find that those who come into the field now are part philosopher, part physicist, part electrical engineer and part IT expert.

Sasha: I agree that it's becoming more complex. In the past, we would build some toy network on some toy problem, and then you would just publish a paper and move on. Nowadays, you have to consider the societal implications. You have to consider the dual-use problem. For example, if you create something that can potentially be used to create new antibiotics or new medicine, it can also be used to create new poison. You have to cultivate this new way of thinking, and it's really not something that gets taught in school.

Acting on the challenges at the intersection of climate change and AI

Mark: You’re a fourth-generation woman in science in your family. I’d love to hear your perspective on your pursuit of academic excellence and any learnings you can share to elevate OpenTexters.

Sasha: Especially in computer science, especially right now, it's really important to have all sorts of voices, not only women but diverse communities and backgrounds. Early on, I was looking around my computer science classes, and I realized there were only a couple of women. In my PhD, I was the only one in my cohort. That was a really strong signal for me that I needed to stay, in order to make sure that there was a woman in the room and a woman at the table.

I'm actually involved with an organization called Women in Machine Learning (WiML). Only 11% of people publishing at AI conferences are women, for example, which is very, very low. WiML was created as a way of cultivating networks, creating mentorships, organizing events. We really try to think through, how can we keep women in the ML community and make sure they’re not feeling alone? I had both my kids during my PhD, so I really want to make sure that others have support, have childcare at conferences and have the opportunity to meet other women, including senior women who have gone through this and can give advice and be mentors.

Mark: OpenText is on our AI Journey. Our view is that there’s going to be a lot of large language models. There could be thousands, all highly specialized. What’s your view? Is it going to rain large language models?

Sasha: I think it's already starting to drizzle! But I see this as a really important point in time, where it can go different ways. It could be that we’ll see everyone want to implement general purpose generative AI models into everything. That's not the best way forward, because having interacted with these systems, they're very good at answering questions, but it is a very central, average point of view. It's not nuanced. It doesn't represent other cultures, other languages. If we start going in that direction, we might have an echo chamber of the same opinions.

Whereas if we have multiple models—I really am a fan of specific models, because often they're more efficient, they're more lightweight and they are better suited for whatever task you're trying to do. I've worked in applied AI, and I've never encountered a situation when you could say, “let’s take a vanilla model and throw it at the issue, and it’s going to solve all the customer's problems.”

What you need is a model that you can fine-tune, that you can adapt, where you can take your data and continue training it, or train from scratch. You need a way to customize. And there's issues of privacy as well. There are so many subtle issues that don't get considered.

Mark: I completely agree. If you're trying to solve, for example, a liability contract problem across 10 million contracts, do you use a general learning model versus something specific? We're building our platform to be able to plug in open-source models, specialized language models, and of course to deploy work from Google, OpenAI and others as well.

Why Dr. Luccioni didn't sign the open letter calling for a pause on AI

Mark: We have 25,000 OpenTexters listening today. It's a community that’s very eager to learn. One of our values is “raise the bar and own the outcome.” So, Sasha I would love to hear your thoughts on AI and anything you want to leave OpenTexters with today.

Sasha: I think you all are doing great work, and I truly believe in the potential of AI. Something I like talking about is the fact that responsible AI is not a completely separate field of study. I'm not an ethical AI or responsible AI practitioner. Everyone is a responsible AI practitioner. Can you imagine if there were “cars” and “safe cars”? All cars are supposed to be safe! So, all AI is supposed to be responsible.

I invite everyone to think about that and how you can consider these ethical impacts. Even from a technical perspective, something like adding an extra layer of testing of your model before deploying it, to make sure that it represents different people, different languages, different communities in a way that's more or less equitable. Just these small additions or tweaks to your everyday practice can make a really big difference and can make systems more robust, and therefore more ethical.
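The "extra layer of testing" Dr. Luccioni describes can be as simple as disaggregating an evaluation metric by subgroup before deployment. Here is a minimal illustrative sketch — the data, group names and 5% threshold are invented for the example, not any real pipeline:

```python
def accuracy_by_group(records):
    """Accuracy computed separately for each subgroup.
    records: iterable of (group, predicted_label, true_label) tuples."""
    totals, correct = {}, {}
    for group, pred, true in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == true)
    return {g: correct[g] / totals[g] for g in totals}

def flag_gaps(per_group, max_gap=0.05):
    """Return (ok_to_deploy, gap) given per-group scores and a tolerated gap."""
    gap = max(per_group.values()) - min(per_group.values())
    return gap <= max_gap, gap

# Invented evaluation results: (group, model prediction, ground truth)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
per_group = accuracy_by_group(results)  # group_a: 1.0, group_b: 0.5
ok, gap = flag_gaps(per_group)
print(per_group, "| deploy:", ok)       # the 0.5 gap blocks deployment
```

An aggregate accuracy of 0.75 here would look acceptable; only the per-group breakdown reveals that the model fails half the time for one community.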

* * *

As Dr. Luccioni observed, right now it’s drizzling language models. But I believe we’ll soon see a downpour—and they will be lightweight, low-cost, openly available and specialized. Companies in every industry will be able to apply AI to solve their most complex problems.

In the midst of this shift, we need to keep innovating, but hold ourselves accountable and raise the bar for values-based design. We need to orchestrate AI, but ask vital questions about its impacts on our planet and society, and own the outcomes. The right technology, deployed effectively and ethically, can spur phenomenal growth, and change the world for the better.

To learn more about OpenText’s strategy on AI, read my position paper, opentext.ai.

The post Everything Will Change: A Conversation on Ethical AI with Dr. Sasha Luccioni appeared first on OpenText Blogs.

Building an Ustopia: A Conversation on Tech & Equity with Dr. Ruha Benjamin https://blogs.opentext.com/building-an-ustopia-a-conversation-on-tech-equity-with-dr-ruha-benjamin/ Tue, 28 Feb 2023 18:00:12 +0000 https://blogs.opentext.com/?p=69773

We are quickly approaching the point where the majority of the workforce is millennials and Gen Z. In the United States, post-millennials are the most ethnically and racially diverse generation ever. The conversations and requirements for creating empowered talent have changed over the years, and for the better. Employees want to work for a great company that offers rewarding work and total rewards, makes an impact in the community, protects the planet, is mission-led, and prioritizes sustainability, fairness, equity and inclusion. Mission, Trust, Sustainability, Equity and Impact are essential to unlocking the energy of a company.

At OpenText, we embrace these beliefs. We are setting the bar high, holding a mirror to ourselves, learning and advancing, and driving action to be more. We are committed to creating a more diverse industry and company. We will have challenging and even uncomfortable conversations, and take strong action to fight discrimination.

Recently, as part of our celebrations for Black History Month, Dr. Ruha Benjamin, Professor of African American Studies at Princeton University, spoke with OpenTexters about the complex dynamic between innovation and equity. She explored why it is so vital that we learn the history of technology alongside the history of race.

I am deeply grateful to Dr. Benjamin, who both challenged and inspired us to think about the world in new ways. Here are a few highlights from our conversation—which covered everything from Black Mirror and discriminatory design, to how we train future engineers and build better organizations to do good in the world.

I am very pleased to share our conversation.

***

Mark: You’re a professor at Princeton, with a deep love for education and learning. Can you tell us about the curriculum you put together at Princeton?

Dr. Benjamin: Absolutely. One of the classes I enjoy teaching is called “Black Mirror: Race, Technology and Justice.” It's a riff off the TV show Black Mirror, which I think is a really important cultural awareness in terms of having us think about how technology reflects existing social processes. What's interesting about the show, which we build on in the class, is that technology is not the problem. It is reflecting existing social problems—perhaps amplifying them, perhaps hiding them. So, in that way, we want to put technology in its place. Rather than blaming it for everything, we want to think about it reflecting us, so we need to look back at ourselves.

That class is one of the few probably that you could find anywhere in the country, in terms of trying to combine the humanities and social sciences in this way, with engineering and computer science.

Mark: You talked about technology and society, and the two stories—a dystopian view of the world and a utopian view of the world, whether we think technology is harmful or helpful. I’d like to hear more on your view of how we view that glass. Half empty, half full? Dystopian, utopian?

Dr. Benjamin: Part of it is to think about what other options there are. If it's not dystopia, if it's not utopia, it might be what the science fiction writer Ursula Le Guin calls ustopia. This goes back to the idea that whatever we’re dealing with is a reflection of us, of society, as it is. So that is one way to describe what that third frame is. We need to figure out who the “us” is that’s shaping the world that we have.

One of the things I want us to consider is that there's a very small sliver of all of humanity who currently monopolizes the power and the resources to shape the world for everyone else. And so, if we want to begin to undo this polarity, we have to broaden whose voices are heard and whose lives are considered as we build both our physical and our digital structures. In that ustopia, we need to broaden the “us” in terms of who's actually participating in the process.

Mark: You use the term discriminatory design, and have written that racism isn’t actually a form of ignorance, but rather a distorted form of knowledge.

Dr. Benjamin: Yes. Some years ago, there was a viral video that went around that showed a soap dispenser with two friends, Black and White, at a hotel restroom, and the soap wouldn't come out for the friend with darker skin. This video went viral under the label “racist soap dispenser,” which is in a way funny because we know that the soap dispenser doesn't have intentions to be discriminatory. Essentially, this infrared technology, because darker skin absorbs light, wasn't bouncing off and causing the soap to come out. So it could be that everyone who it was tested on had lighter skin, and because of who was behind the scenes, the glitch or the problem never came to light. This is an analog, simple example of the way that it doesn't require intention to be harmful or intention to discriminate. 

But we can think about it in more complex hiring algorithms that many companies now use to streamline the process of recruitment, where in many cases the training data is based on people who already work in the company. So, if you love your employees, they're doing a great job, that becomes training data to find more employees like that.

But if for the last 50 years or 10 years, you've hired mostly men or hired mostly people from North America or hired people only that spoke one language, then there are proxies for that that then get reproduced. You might think of this hiring algorithm as more neutral and unbiased than, say, a human resource officer, someone who's doing the interviewing, whose own bias and subjectivity may come into the screening process. But if you're using historical data—and all data is historical—and that history has had forms of explicit or implicit discrimination built into it, what you’re going to do is reproduce that under the guise of neutrality through this hiring algorithm.

So, intention is the wrong metric because this can happen without the intention to do harm. It can actually happen because you're not asking the right questions or thinking about the legacy you’re building into these systems.
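The mechanism Dr. Benjamin describes — historical skew reproduced through a seemingly neutral proxy — can be shown in a deliberately simplified simulation. All numbers here are invented for illustration; this is not any real hiring system:

```python
import random

random.seed(0)

# Invented toy data: past hires skew heavily toward group A, and a seemingly
# neutral proxy attribute (a hobby, a zip code) correlates with group membership.
def make_candidate(group):
    p = 0.9 if group == "A" else 0.1
    return {"group": group, "proxy": random.random() < p}

history = [make_candidate("A") for _ in range(90)] + \
          [make_candidate("B") for _ in range(10)]

# "Training" never sees the group label -- it only learns that the proxy
# trait was common among past hires.
proxy_rate_among_hires = sum(c["proxy"] for c in history) / len(history)
majority_trait = proxy_rate_among_hires > 0.5

def select(candidate):
    """Learned rule: prefer candidates who share the majority trait of past hires."""
    return candidate["proxy"] == majority_trait

# Applied to a balanced applicant pool, selection rates diverge by group even
# though the rule is nominally group-blind.
pool = [make_candidate("A") for _ in range(100)] + \
       [make_candidate("B") for _ in range(100)]

def rate(group):
    members = [c for c in pool if c["group"] == group]
    return sum(select(c) for c in members) / len(members)

print("selection rate A:", rate("A"), "| B:", rate("B"))
```

The group label is never an input, yet the selection rates split roughly along the 90/10 line of the historical data — which is exactly why auditing inputs and outcomes matters more than auditing intentions.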



"Cosmetic" diversity is not enough.

Mark: A few years ago, there was a term being used, and COVID definitely accentuated this—the digital divide. In your view is the digital divide widening, is it holding constant, is it increasing? And what can we do?

Dr. Benjamin: I think the consequences of the digital divide are becoming more severe. That is because more and more of our lives—not just our recreational lives or our leisure time—rely on these tools. Everyday basic needs are now mediated through the internet and through digital tools.

And certainly, as you mentioned, during COVID a lot of this came to light. In New York City, when we went remote, it was found that something like 200,000 school children were housing insecure or homeless, so they couldn't do remote learning. So, what had already existed as a problem, the consequences of it became more severe, because now they relied on this to be able to get their basic education.

Not having access to these technologies is really important and something we need to address, but some of these same communities are also hyper-surveilled. Technologies are deployed against them without their knowledge. They're exposed to technologies in a way that is disempowering. So, in any conversation about access, we need to remember both of those things. We need to understand when people need to be included, but also when people are exposed to technologies and surveillance that are harming them, that are extracting their data and weaponizing it against them.

Inclusion is not a straightforward good, because there are all kinds of predatory “inclusions.” Many of these same communities who don't have access to the internet and broadband and other digital platforms are experiencing technology, but not of their own design and not of their own choice.

Mark: It’s worthy of awareness and education, this predatory inclusion of technology. It wasn't something that I was thinking about, but will be reflecting deeply on.

25,000 OpenTexters. 10,000 programmers. What advice do you have for us—even if challenging, uncomfortable—to build a better company, one more aware, more inclusive, and to go do good in the world?

Dr. Benjamin: You know, when it comes to technology, if we're talking about ethics or we're talking about equity, I think we train our attention on the outcomes, how technology’s impacting X, Y and Z.

What I would love us to do is start much earlier and think about what are the inputs to the process, the starting values, the assumptions, the incentives, the forms of knowledge that we're inputting. Reevaluate the things we take most for granted in the process, and scrutinize them and ask ourselves, how is this either contributing to the common good, to more equity and inclusion and justice in the world, or how is it undermining that?

And put each thing under the microscope to really evaluate the things we take most for granted, rather than approaching this work with big buzzwords, big initiatives, flashy campaigns that get us attention, but often don't scrutinize and think about the nitty gritty, the small things that actually build up to larger harms.

***

A key element from my conversation with Dr. Benjamin was the importance of re-inventing “design” and ensuring the design has the right inputs and perspectives. When we improve the design, we improve the chance of positive change. I am a big believer in the idea that we need both long-range telescopes and high-power microscopes when it comes to designing and building software.

Thank you, Dr. Benjamin.

Keep visiting this space for more insights from “OpenTalk with Mark J. Barrenechea,” my conversations with some of the world’s greatest thinkers and leaders.

The post Building an Ustopia: A Conversation on Tech & Equity with Dr. Ruha Benjamin appeared first on OpenText Blogs.

]]>

We are quickly approaching the point where the majority of the workforce will be millennials and Gen Z. In the United States, post-millennials are the most ethnically and racially diverse generation ever. The conversations around, and requirements for, creating empowered talent have changed over the years, and for the better. Employees want to work for a great company; to have rewarding work, total rewards, and impact in the community; to protect the planet; and to be part of an organization that is mission led and prioritizes sustainability, fairness, equity, and inclusion. Mission, Trust, Sustainability, Equity and Impact are essential to unlocking the energy of a company.

At OpenText, we embrace these beliefs. We are setting the bar high, holding a mirror to ourselves, learning and advancing, and driving action to be more. We are committed to creating a more diverse industry and company. We will have challenging and even uncomfortable conversations, and take strong action to fight discrimination.

Recently, as part of our celebrations for Black History Month, Dr. Ruha Benjamin, Professor of African American Studies at Princeton University, spoke with OpenTexters about the complex dynamic between innovation and equity. She explored why it is so vital that we learn the history of technology alongside the history of race.

I am deeply grateful to Dr. Benjamin, who both challenged and inspired us to think about the world in new ways. Here are a few highlights from our conversation—which covered everything from Black Mirror and discriminatory design, to how we train future engineers and build better organizations to do good in the world.

I am very pleased to share our conversation.

***

Mark: You’re a professor at Princeton, with a deep love for education and learning. Can you tell us about the curriculum you put together at Princeton?

Dr. Benjamin: Absolutely. One of the classes I enjoy teaching is called “Black Mirror: Race, Technology and Justice.” It's a riff off the TV show Black Mirror, which I think is a really important cultural touchstone in terms of having us think about how technology reflects existing social processes. What's interesting about the show, which we build on in the class, is that technology is not the problem. It is reflecting existing social problems—perhaps amplifying them, perhaps hiding them. So, in that way, we want to put technology in its place. Rather than blaming it for everything, we want to think about it reflecting us, so we need to look back at ourselves.

That class is one of the few probably that you could find anywhere in the country, in terms of trying to combine the humanities and social sciences in this way, with engineering and computer science.

Mark: You talked about technology and society, and the two stories—a dystopian view of the world and a utopian view of the world, whether we think technology is harmful or helpful. I’d like to hear more on your view of how we view that glass. Half empty, half full? Dystopian, utopian?

Dr. Benjamin: Part of it is to think about what other options there are. If it's not dystopia, if it's not utopia, it might be what the writer Margaret Atwood calls ustopia. This goes back to the idea that whatever we’re dealing with is a reflection of us, of society, as it is. So that is one way to describe what that third frame is. We need to figure out who the “us” is that’s shaping the world that we have.

One of the things I want us to consider is that there's a very small sliver of all of humanity who currently monopolizes the power and the resources to shape the world for everyone else. And so, if we want to begin to undo this polarity, we have to broaden whose voices are heard and whose lives are considered as we build both our physical and our digital structures. In that ustopia, we need to broaden the “us” in terms of who's actually participating in the process.

Mark: You use the term discriminatory design, and have written that racism isn’t actually a form of ignorance, but rather a distorted form of knowledge.

Dr. Benjamin: Yes. Some years ago, a video went viral showing two friends, one Black and one White, at a hotel restroom, where the soap dispenser wouldn't dispense for the friend with darker skin. It circulated under the label “racist soap dispenser,” which is in a way funny because we know that the soap dispenser doesn't have intentions to be discriminatory. Essentially, the dispenser's sensor works by bouncing infrared light off a hand, and because darker skin absorbs more of that light, not enough bounced back to trigger the soap. So it could be that everyone it was tested on had lighter skin, and because of who was behind the scenes, the glitch or the problem never came to light. This is an analog, simple example of the way that harm doesn't require an intention to be harmful or an intention to discriminate.

But we can think about it in more complex hiring algorithms that many companies now use to streamline the process of recruitment, where in many cases the training data is based on people who already work in the company. So, if you love your employees, they're doing a great job, that becomes training data to find more employees like that.

But if for the last 50 years or 10 years, you've hired mostly men, or hired mostly people from North America, or hired only people who spoke one language, then there are proxies for that that then get reproduced. You might think of this hiring algorithm as more neutral and unbiased than, say, a human resources officer, someone who's doing the interviewing, whose own bias and subjectivity may come into the screening process. But if you're using historical data—and all data is historical—and that history has had forms of explicit or implicit discrimination built into it, what you’re going to do is reproduce that under the guise of neutrality through this hiring algorithm.

So, intention is the wrong metric because this can happen without the intention to do harm. It can actually happen because you're not asking the right questions or thinking about the legacy you’re building into these systems.
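The proxy mechanism Dr. Benjamin describes can be sketched in a few lines. This is a toy illustration with entirely made-up numbers (not any real hiring system): a model that scores candidates by similarity to past hires never sees a protected attribute, yet a correlated feature—here, region—reproduces the historical skew.

```python
# Hypothetical historical hires: (years_experience, region).
# "region" acts as a proxy feature; the skew is in who was hired before.
past_hires = [(5, "north_america")] * 45 + [(5, "elsewhere")] * 5

def learn_feature_rates(hires):
    """Fraction of past hires with each region value."""
    counts = {}
    for _, region in hires:
        counts[region] = counts.get(region, 0) + 1
    total = len(hires)
    return {region: n / total for region, n in counts.items()}

rates = learn_feature_rates(past_hires)

def score(candidate):
    """Naive 'fit' score: experience weighted by how common the
    candidate's region was among past hires."""
    years, region = candidate
    return years * rates.get(region, 0.0)

# Two equally qualified candidates:
print(score((5, "north_america")))  # ~4.5
print(score((5, "elsewhere")))      # ~0.5
```

Identical qualifications, very different scores—purely because of whom the company hired before, with no intention to discriminate anywhere in the code.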

"Cosmetic" diversity is not enough.

Mark: A few years ago, there was a term being used, and COVID definitely accentuated this—the digital divide. In your view, is the digital divide widening, holding constant, or narrowing? And what can we do?

Dr. Benjamin: I think the consequences of the digital divide are becoming more severe. That is because more and more of our lives—not just our recreational lives or our leisure time—rely on these tools. Everyday basic needs are now mediated through the internet and through digital tools.

And certainly, as you mentioned, during COVID a lot of this came to light. In New York City, when we went remote, it was found that something like 200,000 school children were housing insecure or homeless, so they couldn't do remote learning. The problem had already existed, but its consequences became more severe, because children now relied on these tools to get their basic education.

Not having access to these technologies is really important and something we need to address, but some of these same communities are also hyper-surveilled. Technologies are deployed against them without their knowledge. They're exposed to technologies in a way that is disempowering. So, in any conversation about access, we need to remember both of those things. We need to understand when people need to be included, but also when people are exposed to technologies and surveillance that are harming them, that are extracting their data and weaponizing it against them.

Inclusion is not a straightforward good, because there are all kinds of predatory “inclusions.” Many of these same communities who don't have access to the internet and broadband and other digital platforms are experiencing technology, but not of their own design and not of their own choice.

Mark: It’s worthy of awareness and education, this predatory inclusion of technology. It wasn't something that I was thinking about, but it is something I will be reflecting deeply on.

25,000 OpenTexters. 10,000 programmers. What advice do you have for us—even if challenging, uncomfortable—to build a better company, one more aware, more inclusive, and to go do good in the world?

Dr. Benjamin: You know, when it comes to technology, if we're talking about ethics or we're talking about equity, I think we train our attention on the outcomes, how technology’s impacting X, Y and Z.

What I would love us to do is start much earlier and think about what are the inputs to the process, the starting values, the assumptions, the incentives, the forms of knowledge that we're inputting. Reevaluate the things we take most for granted in the process, and scrutinize them and ask ourselves, how is this either contributing to the common good, to more equity and inclusion and justice in the world, or how is it undermining that?

And put each thing under the microscope to really evaluate the things we take most for granted, rather than approaching this work with big buzzwords, big initiatives, flashy campaigns that get us attention, but often don't scrutinize and think about the nitty gritty, the small things that actually build up to larger harms.

***

A key element from my conversation with Dr. Benjamin was the importance of re-inventing “design” and ensuring the design has the right inputs and perspectives. When we improve the design, we improve the chance of positive change. I am a big believer in the idea that we need both long-range telescopes and high-power microscopes when it comes to designing and building software.

Thank you, Dr. Benjamin.

Keep visiting this space for more insights from “OpenTalk with Mark J. Barrenechea,” my conversations with some of the world’s greatest thinkers and leaders.

The post Building an Ustopia: A Conversation on Tech & Equity with Dr. Ruha Benjamin appeared first on OpenText Blogs.

]]>
Preparing for Quantum. A Conversation with Scott Aaronson https://blogs.opentext.com/preparing-for-quantum-a-conversation-with-scott-aaronson/ Fri, 24 Feb 2023 15:29:35 +0000 https://blogs.opentext.com/?p=69708

We are on the cusp of a new tech global era.

It is no longer good enough to look around corners. We need to look around corners of corners. We need to see the potential before us, and be prepared—to take on new directions, new challenges and new unknowns. We set the stage for the next decade and beyond, with what we build today. 

Quantum is one key driving force.

The new tech global era will enable climate innovation, electrification, digital currency, voice/facial recognition and extended reality. Scalable quantum, in fact, could become a reality within our professional lifetimes. I believe it will.

One person who is deep in the tech frontier is Scott Aaronson, Founding Director of the Quantum Information Center at the University of Texas, Austin, and AI Safety Researcher at OpenAI. In a recent episode of our OpenTalk speaker series, Scott shared some startling insights about the future of quantum and AI. Here’s a glimpse at our conversation.

Scott is an amazing thinker and leading expert, and it was deeply insightful spending time together.

***

Mark: Would you say there are quantum machines out there today that are working?

Scott: Yes, there absolutely are. It’s just that they’re very small ones. In the 1990s there were skeptics who said, “This is just ridiculous. You will never actually build this, because in real life all quantum systems are subject to decoherence and noise.” Superposition states are very unstable. Any kind of interaction with the environment can collapse the superposition.

A huge discovery in the mid- to late-90s that really convinced people that this can actually be done was something called quantum error correction: you don't have to get the rate of leakage of your qubits into the environment all the way down to zero. You merely have to make it very, very, very small.

The goal has been to build qubits that you can act on accurately enough that then these error-correcting codes can get you the rest of the way. Then you want to be able to scale up to as many qubits as you like, maintaining their quantum state for as long as you need them to.
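The idea Scott describes—you don't need zero physical error, just an error rate small enough for the codes to suppress—can be illustrated with a deliberately simplified classical analogy: a repetition code with majority voting. (Real quantum codes, such as the surface code, are far more involved and must also handle phase errors, but the scaling intuition is similar.)

```python
from math import comb

def logical_error_rate(p, copies=3):
    """Probability that a majority vote over `copies` independent bits
    fails, given a per-bit flip probability p. The vote fails only if
    more than half the copies flip."""
    k_min = copies // 2 + 1  # number of flips needed to fool the vote
    return sum(comb(copies, k) * p**k * (1 - p)**(copies - k)
               for k in range(k_min, copies + 1))

p = 0.01  # a 1% physical error rate
print(logical_error_rate(p, 3))  # ~2.98e-04: roughly 34x lower than p
print(logical_error_rate(p, 5))  # ~9.85e-06: more redundancy, far lower still
```

Below a certain physical error rate, adding redundancy drives the logical error rate down exponentially—which is why "very, very, very small" is enough, and zero is not required.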

We are not there yet. All of the quantum computations that we can do, you could say are more or less impressive circus acts. Everything is going to fall apart after some number of steps, but you can try to make that number as large as you can. The state of the art today is systems with a few dozen qubits.

Mark: Humans are fallible, and we’re taught that computers are deterministic, at least in Von Neumann architecture. Do computers have to be deterministic? Can they be fallible?

Scott: Computers can certainly be fallible. Anyone who has tried ChatGPT over the last few months has seen this! We now have these incredible AIs. You can ask them to prove that there are only a finite number of prime numbers, and they will happily oblige you with a proof that looks superficially plausible, but of course has some freshman-level error in it, because the statement it’s trying to prove is false.

So, you can say, in a certain tautological sense, a computer always does what the laws of physics say that it would do. In that sense, it never makes an error. But in the humanly-relevant sense, of course they can be fallible, and we have daily experience with that.

And a quantum computer is no different. The one real difference with a quantum computer is they are inherently probabilistic. The whole point of the quantum computer is to create this superposition, this vector of amplitudes, and then make a measurement that converts those amplitudes into probabilities. So, given that everything is probabilistic, what do we even mean by a quantum computer succeeding?

This was one of the early questions that people like Umesh Vazirani had to think about when they invented the mathematical foundations of quantum computing about 30 years ago. And what they said was simply, “We will define a quantum algorithm for a problem to be a good one if we can make the probability of an error to be as small as we would like.”

Mark: You mentioned this earlier, and I still don’t understand what it means: a negative 30% chance of rain. Is that just a stronger zero, or does it mean something different?

Scott: A negative 30% chance doesn't mean anything. It is every bit as nonsensical as it sounds! The whole framework of probability theory only really works with numbers from 0 to 1. But that is why it is so surprising that in quantum mechanics we have to use these other numbers called amplitudes, which can be negative or even complex. Now the key point is the amplitudes are not probabilities. They’re sort of pre-probability, the fundamental numbers that nature keeps track of. And then they get converted into probabilities when we actually make a measurement at the end.
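That amplitude-to-probability step can be shown concretely. This is my own minimal single-qubit sketch (not an example Scott gave), assuming the standard Hadamard gate: measurement probabilities are the squared magnitudes of the amplitudes, and a negative amplitude lets one path cancel another—the interference that probabilities alone cannot express.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (a, b),
    where a and b are the amplitudes of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: each outcome's probability is |amplitude|^2."""
    return tuple(abs(amp) ** 2 for amp in state)

state = (1.0, 0.0)            # start in |0>
state = hadamard(state)
print(probabilities(state))   # ~(0.5, 0.5): a fair coin flip

state = hadamard(state)       # second Hadamard: the negative amplitude
print(probabilities(state))   # on |1> cancels out -> back to ~(1.0, 0.0)
```

Two "coin flips" in a row returning the qubit deterministically to |0> is exactly the behavior that negative amplitudes allow and ordinary probabilities forbid.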



Will ChatGPT pass the Turing Test?

Mark: It feels like during the pandemic of the last two to three years, a decade of progress has been made. Over the next 10 years, do you think we'll make a hundred years of progress, and in your world view, where do you see that progress happening over the next five, ten years?

Scott: Well, progress is tricky because sometimes it goes ridiculously fast or faster than people expected in certain areas, while also going slower than they expected in other areas. If you had asked someone in 1970, they might imagine that by now everyone would have flying cars, that we would have space elevators, that we would have all kinds of things that we don't have.

Mark: Quantum teleportation!

Scott: Right! But then they might be pretty amazed that we all carry these devices in our pockets that have instant access to the whole world’s information. And that might go even beyond what they fantasized about in their science fiction.

So, it’s really hard to predict in what areas the progress will be. But I think the next decade is going to be an utterly insane time for AI. I hope that it will be for quantum computing also. I hope that we’ll build a quantum computer before we just build an AI that can build the quantum computer and everything else for us.

Mark: Well, let’s be ready for Q2K when we get there!

***

I am an optimist about the transformative power of information and digitalization. Whether you’re a quantum skeptic or a quantum enthusiast, quantum is going to have a massive impact on Business 2030. This is especially true if, as Scott suggests, quantum computers work as predicted.

It is time to prepare for Quantum.

Keep watching this space for more insights from “OpenTalk with Mark J. Barrenechea,” my conversations with some of the world’s greatest thinkers and leaders.

The post Preparing for Quantum. A Conversation with Scott Aaronson appeared first on OpenText Blogs.

]]>
