From the very beginning, Google Workspace was built to allow you to collaborate in real time with other people. Now, you can collaborate with AI. We’ve been embedding the power of generative AI across all the Workspace apps. These features help you write, help you organize, help you visualize, help you accelerate workflows, have richer meetings and much more. This powerful new way of working is what we’re calling Duet AI for Google Workspace.
This builds on the vision for Workspace we shared in March and the launch of generative AI features in Gmail and Google Docs to trusted testers. We’ve been thrilled to see how people have started using these tools to help them get things done at work and in their personal lives — including client outreach emails, job applications, project plans, and so much more. Helping people write better is something we’ve been working on for years — our AI-powered grammar, spell check, Smart Compose, and Smart Reply features have helped people write over 180 billion times in the last year alone.
Helping you write, even on the go
Duet AI already works behind the scenes in Workspace to help you write — whether it’s refining existing work or helping you get started in both Gmail and Docs. Now we’re bringing this experience to Gmail mobile — imagine being on your phone and having the ability to draft complete responses with just a few words as a prompt. Our initial mobile launch will be quickly followed by contextual assistance — allowing you to create professional replies that automatically fill in names and other relevant information.
Creating original images from text, right within Google Slides
A picture is worth a thousand words, but until now, creating unique and compelling visuals for presentations has been a manual and time-consuming process. We’re embedding Duet AI into Slides so you can easily generate images with a few words — the real power of these image models is that they can visualize something that has never existed. Perhaps you’re a marketer doing early concept brainstorming with your creative agency on a light-hearted campaign to get Parisians to go on safari and want to provide clear visual input early in the process to prevent wasted work later on. Now, you can generate an original visual that conveys your unique artistic vision, all from a simple prompt.
Now, you can generate a bespoke image for a Slides presentation with the help of Duet AI.
Turning ideas into action and data into insights with Google Sheets
Duet AI is here to help you analyze and act on your data in Sheets faster than ever before with automated data classification and the creation of custom plans.
Classification tools understand the context of data in a cell, and can assign a label to it, helping eliminate the burden of manual data entry. Whether you're a product development team analyzing the sentiment of user feedback, or an HR recruiter summarizing input from interviews, this saves time and toil in developing a clear and visually engaging analysis.
Our new help me organize capability in Sheets automatically creates custom plans for tasks, projects, or any activity that you want to track or manage — simply describe what you’re trying to accomplish, and Sheets generates a plan that helps you get organized. Whether you’re an event team planning an annual sales conference or a manager coordinating a team offsite, Duet AI helps you create organized plans with tools that give you a running start.
You can create custom plans for tasks, projects, or any activity you want to track or manage.
Fostering more meaningful connections in Google Meet
We’re weaving Duet AI into Meet and introducing the ability to generate unique backgrounds for your video calls. This helps users express themselves and deepen connections during video calls while protecting the privacy of their surroundings. Imagine a manager celebrating the employee of the month with a visual mashup of their favorite things during a company town hall meeting, or a sales exec jumping on a call with a prospective customer and using a custom background to reflect the customer’s industry or market. It's a subtle, personal touch to show you care about the people you’re connecting with and what’s important to them. And you can change that visual with an equally stunning and original one — all in just a few clicks.
Keeping you in the flow of projects with AI building blocks in Docs
Users tell us they love smart canvas because a simple @ mention can save them hours of work by keeping the team focused and collaborating in the document where they’re already working. Now, we’re integrating these capabilities into the new assisted writing experience in Docs. If you're writing a job description, Duet AI will not only help you write the content, it will also include smart chips for information like location and status, and variables for details you’d want to customize like your company name — helping you go from concept to completion faster without ever leaving your document.
Simply enter a topic you’d like to write about, and a draft will instantly be generated for you in Docs, including smart chips for information like location and status.
Our new, upgraded neural models for grammar have helped people generate professional-grade writing — not just in English, but also in Spanish, French, Japanese, and more. Now we’re bringing even more powerful capabilities to help people with proofreading, tone, and style in their writing. You’ll see a new proofread suggestion pane that offers suggestions for writing concisely, avoiding repetition, and using a more formal tone or active voice. Plus, it puts you in total control over when you see proofread suggestions and how you act on them.
Helping users unleash creativity while increasing productivity
Since March, we’ve welcomed hundreds of thousands of trusted testers to Workspace Labs. These testers come from enterprise organizations and educational institutions as well as people who use Workspace in their personal lives. We've been encouraged — and inspired — by their feedback.
“Adore Me has built our organization in a way that encourages cross-functional, multidisciplinary projects and the ability to write often presents a roadblock, especially with a highly international team. The ability to quickly create production-worthy copy with generative AI features in Docs and Gmail has been accelerating projects and processes in ways that have even surprised us!”
— Romain Liot, Chief Operating Officer, Adore Me
"Instacart is always looking for opportunities to adopt the latest technological innovations, and by joining this program, we have access to the new features and can discover how generative AI will make an impact for our teams using Google Workspace.” — JJ Zhuang, Chief Architect, Instacart
"We’re excited to test out the new generative AI Workspace experiences at Lyft. Whether it be kicking off a plan for a new campaign or drafting an email update to our community of drivers, we’re enthusiastic about how these new tools can help our teams move faster and be more productive." — Lyft
Keeping customers in control
We know that AI is no replacement for the ingenuity, creativity, and smarts of real people. We’re designing our products in accordance with Google’s AI Principles that keep the user in control, letting AI make suggestions that you’re able to accept, edit, and change. We’ll also deliver the corresponding administrative controls so that IT can set the right policies for their organization.
Sign up for Workspace Labs
The best way to learn more about AI in Workspace is to use it, which is why we’re excited to open up Workspace Labs to the public and manage the waitlist as we scale to even more users and countries in the weeks ahead.
GM and Vice President, Google Workspace
Service accounts (SAs) are a special type of Google account that represents an application or workload rather than an end user, granting it permissions to GCP resources. Service accounts are primarily used to ensure safe, managed connections to APIs and Google Cloud services. Granting access to trusted connections and rejecting malicious ones is a must-have security feature for any Google Cloud project.
To understand the threats to SAs, companies must start with broad org-level IAM audits to gauge where these powerful IAM accounts currently stand. Overly permissive SAs at different levels of the GCP hierarchy are weaknesses that can wreak havoc in GCP and should be reviewed regularly to maintain a consistently secure IAM posture.
At the application level, a given SA should not have access to both production and non-production GCP folder environments. Cross-pollinating SA accounts across domain folders such as Production and Non-Production should never be recommended as a best practice; it weakens the least-privilege model at the heart of a Zero Trust architecture. The result? In a worst-case scenario, an SA granted full access to non-production could theoretically use those permissions in a production environment to destroy everything in its way. A total nightmare for enterprises! An even bigger nightmare is a basic role (Owner, Editor) bound to an SA at the GCP org level of the hierarchy. If that SA were compromised by a bad actor, it could roam freely throughout the entire GCP environment.
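As a concrete starting point for such an audit, here is a minimal Python sketch that flags service accounts bound to basic roles. The policy shown is hypothetical; its shape matches what `gcloud projects get-iam-policy PROJECT_ID --format=json` returns, and the same check applies to policies exported at the folder or org level.

```python
# Basic roles that should never be bound to a service account,
# especially at the folder or org level of the hierarchy.
BASIC_ROLES = {"roles/owner", "roles/editor", "roles/viewer"}

def find_risky_sa_bindings(policy: dict) -> list:
    """Return (member, role) pairs where a service account holds a basic role.

    `policy` has the shape produced by
    `gcloud projects get-iam-policy PROJECT_ID --format=json`.
    """
    risky = []
    for binding in policy.get("bindings", []):
        if binding["role"] not in BASIC_ROLES:
            continue
        for member in binding.get("members", []):
            if member.startswith("serviceAccount:"):
                risky.append((member, binding["role"]))
    return risky

# Hypothetical exported policy for illustration:
iam_policy = {
    "bindings": [
        {"role": "roles/editor",
         "members": ["serviceAccount:app@my-proj.iam.gserviceaccount.com",
                     "user:alice@example.com"]},
        {"role": "roles/storage.objectViewer",
         "members": ["serviceAccount:reader@my-proj.iam.gserviceaccount.com"]},
    ]
}
print(find_risky_sa_bindings(iam_policy))
# [('serviceAccount:app@my-proj.iam.gserviceaccount.com', 'roles/editor')]
```

Running this across every project, folder, and the org itself gives a quick inventory of exactly the "bigger nightmare" bindings described above.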
So what happens when an application fails or another emergency presents itself in a GCP environment? Where is the Bat-phone located? Who can reach the hammer to crack the glass enclosure? If the hammer is used, then what? These are the questions a standard Break Glass process must answer, and many IT organizations have struggled to identify and implement a rapid, reliable method of responding. In the GCP world, a Break Glass solution can utilize “Service Account Impersonation.” This is a sophisticated approach that allows users to temporarily assume an SA’s permission set while setting aside their own. Access is granted ahead of time to a small, trusted set of subject matter experts (SMEs) approved by management and security. When an agreed-upon Break Glass event occurs, internal processes and workflows kick into high gear, and the user impersonating the SA does what they do best: puts out fires!
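Impersonation is gated by the `roles/iam.serviceAccountTokenCreator` role on the break-glass SA (at the CLI, an approved SME can then act as it via gcloud's `--impersonate-service-account` flag). The sketch below, with a hypothetical SME list and policy, audits that only the approved SMEs hold that role; the policy shape matches `gcloud iam service-accounts get-iam-policy SA_EMAIL --format=json`.

```python
# Hypothetical approved break-glass SMEs; in practice this list would come
# from your access-review tooling, not a hard-coded constant.
APPROVED_SMES = {
    "user:oncall-lead@example.com",
    "user:security-architect@example.com",
}
TOKEN_CREATOR = "roles/iam.serviceAccountTokenCreator"

def audit_break_glass_sa(policy: dict) -> list:
    """Return members who can impersonate the SA but are not approved SMEs.

    `policy` has the shape returned by
    `gcloud iam service-accounts get-iam-policy SA_EMAIL --format=json`.
    """
    unexpected = []
    for binding in policy.get("bindings", []):
        if binding["role"] != TOKEN_CREATOR:
            continue
        for member in binding.get("members", []):
            if member not in APPROVED_SMES:
                unexpected.append(member)
    return unexpected

sa_policy = {
    "bindings": [
        {"role": TOKEN_CREATOR,
         "members": ["user:oncall-lead@example.com",
                     "user:intern@example.com"]},  # the intern should not be here
    ]
}
print(audit_break_glass_sa(sa_policy))  # ['user:intern@example.com']
```

Scheduling a check like this keeps the glass enclosure from quietly acquiring extra hammers between break-glass events.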
LucidPoint Cloud Engineer
Today’s Cloud IT Security experts are overwhelmed in their fast-paced departments. They are the protectors against malicious activity from persistent threats and incorrectly configured cloud resources. Most have relied on stand-alone security information and event management (SIEM) systems to aggregate log data from many sources for event correlation, detection, and incident response. However, these legacy SIEM solutions are unable to scale to accommodate increasing data volumes and the growing number of cloud data sources. In turn, the process of identifying, investigating, and tracking these incidents has failed to keep up with the complexities of cloud computing. Companies have been experiencing a growing visibility gap and insufficient repeatable automation, preventing IT Security from achieving their goals in threat detection, response, and vulnerability management.
Enter Google’s Security Command Center (SCC), a one-stop security and risk management platform for GCP with out-of-the-box detection and response capabilities for greater visibility. SCC allows you to gain centralized visibility of cloud assets, identify security misconfigurations, and detect threats using streamlined log aggregation running in GCP at scale. I must say, it is an amazing tool for stopping bad actors before attacks escalate into breaches.
A comprehensive approach to solving this problem is to automate processes and workflows from SCC (identification) into GitLab (tracking). In between these two endpoints are automated and human workflows that speed up the linking of the processes involved. By applying severity filters (Critical, High, Medium, Low) for active vulnerabilities in SCC, the GCP messaging service Pub/Sub can ingest these filter events at accelerated speeds and publish them in near real-time when alerts are generated. Serverless compute services such as Cloud Functions can quickly and logically process the input from Pub/Sub topics. The Cloud Function also communicates with the GitLab API, performing processes such as creating and updating a tracking issue for each finding.
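A minimal sketch of that Cloud Function's core logic might look like the following. The GitLab host and project ID are invented for illustration; the event shape follows the Pub/Sub-triggered Cloud Functions contract (base64-encoded `data`), and the finding fields (`severity`, `category`, `resourceName`, `name`) come from the SCC NotificationMessage format.

```python
import base64
import json

GITLAB_PROJECT_ID = "12345"  # hypothetical GitLab project
GITLAB_ISSUES_API = (
    f"https://gitlab.example.com/api/v4/projects/{GITLAB_PROJECT_ID}/issues"
)

def finding_to_issue(event: dict) -> dict:
    """Convert a Pub/Sub-delivered SCC finding into a GitLab issue payload.

    `event["data"]` holds the base64-encoded SCC NotificationMessage JSON,
    as delivered to a Pub/Sub-triggered Cloud Function.
    """
    message = json.loads(base64.b64decode(event["data"]))
    finding = message["finding"]
    # The real function would POST this payload to GITLAB_ISSUES_API with an
    # authenticated HTTP call (e.g., a PRIVATE-TOKEN header).
    return {
        "title": f"[{finding['severity']}] {finding['category']}",
        "description": (
            f"Resource: {finding['resourceName']}\n"
            f"Finding: {finding['name']}"
        ),
        "labels": f"scc,{finding['severity'].lower()}",
    }

# Simulated incoming Pub/Sub event:
raw = json.dumps({"finding": {
    "severity": "HIGH",
    "category": "OPEN_FIREWALL",
    "resourceName": "//compute.googleapis.com/projects/demo/global/firewalls/fw1",
    "name": "organizations/111/sources/222/findings/333",
}})
event = {"data": base64.b64encode(raw.encode()).decode()}
print(finding_to_issue(event)["title"])  # [HIGH] OPEN_FIREWALL
```

Mapping severity into the issue title and labels is what lets the downstream human workflow filter and prioritize findings inside GitLab.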
At this stage the human workflow takes over: GitLab maintainers adhere to standard SLA policies describing time frames for resolving vulnerabilities. These maintainers also analyze each finding and decide whether the vulnerability is legitimate. More importantly, vulnerability exceptions come into play and are an integral part of the workflow, as they require written approval from IT Security. At this stage, additional mitigating risk controls can be discussed to compensate for a possible rejection of a vulnerability exception. Through back-and-forth communication between the business unit and IT Security, a time frame and compromise are agreed upon so that vulnerability fixes are implemented efficiently.
A third and final workflow is executed via SCC mute rules, which allow IT Security staff to apply either bulk or individual mute rules based on severity, validity, and additional criteria. This essentially silences the noise of similar findings within the SCC console. Finally, with the speedy click of a mouse button, IT Security can transform hundreds or thousands of reviewed SCC findings into resolved issues, responsibly enhancing the security posture of a company’s Google Cloud Platform environment.
LucidPoint Cloud Engineer
Data has always been critical to business. It is what guides decisions, and it is why managing data growth is so integral to the success of a company. Be it transactions marked down on Sumerian clay tablets, double-entry account records from Victorian-era England, or a database of website statistics today, data and its collection are foundational to the decision-making process. From small, innocuous beginnings it grows and changes, consuming ever more time and resources until it becomes overwhelming. This transition point happens at different times for different kinds of businesses, and how it manifests can be equally diverse. In the past, the main approach has been to simply throw more money, in the form of personnel, technology, or services, at the problem, often without understanding what the underlying data management challenge actually is. Let's review some general data growth management best practices that help control the chaos and improve how data is used to make decisions.
Today there are many different ways to easily store, retrieve, and analyze information, which can quickly become a double-edged sword of benefits and necessary maintenance. In many companies there is simply too much going on for the key decision-makers to manage data growth and record quality themselves, so they logically delegate these tasks down to the individual department and engineer level. Left to their own methods, the various internal departments or business units (accounting, purchasing, shipping, sales, marketing, customer support, IT, and so on) go about their specialized tasks, selecting and using tools effective for their respective data use cases. This can lead to fragmented and siloed data, where, for example, a customer may be identified as ‘XYZ’ in Sales, ‘XYZ Corp’ in Support, and ‘XYZ, Inc’ in Accounting. Each of these entries contains duplicated data along with unique data, and each group is required to manage its own records as changes occur. Names, addresses, phone numbers, emails, titles, labels, and the like can and do change over time. Then there are internal changes and updates for products, services, and resellers, and heaven forbid a customer is also a vendor. Finally, there are mergers and acquisitions, which don't just multiply the data growth issue, they compound it exponentially.
While tools can help in the collection and management of data growth, effectively maintaining and utilizing all of this information requires an overall data architect. Many companies see this as an extension of the duties of the Chief Information Officer or Chief Technology Officer, while others have created a new Chief Data Officer position to help manage and guide the overall data situation. Quite often the role is delegated to a database administrator (DBA), but this is generally not a good idea in the long term. While DBAs may have a strong understanding of the technical side of organization, storage, and access, there are additional factors on the business side that need to be considered, such as the context of the data within each business unit, how the data is used inside that unit, and how it is expected to be used for analysis and reporting. A data architect therefore needs a good understanding not only of managing data growth, but also of the nature of data relationships, data organization, and how the data is expected to be used in each of the business units. This extends well outside the traditional technical realm and can often involve extensive discussions on defining what terms are used and what they actually mean.

In many large organizations the task of generating reports falls either to each individual business unit or to a small group, permanent or ephemeral, assembled for that purpose. Because of this, assembling an effective high-level report might require traversing multiple reporting tools, formats, and platforms, demanding extra time and effort. Without an architect guiding the process, loss of context is a greater risk, and employee turnover has a greater impact on the report generation process.
One of the main tools for organizing this information is a Single Source of Truth (SSOT) database. While this is a fantastic tool to combat the data rat's nest, it does come with its own set of challenges, especially if such a design was not implemented initially and requires additional data migration and integration with internal tools. An SSOT makes control of data much easier, reducing duplicate data and facilitating audits, reporting, and access. Note that this is not a single data source to which all tools connect directly; rather, it is a repository into which the various and sundry systems in the company deposit their unique data. Use of the centralized warehouse drives consistency in the use of data, reduces redundancy, and standardizes the relationships between data sets.
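To make the idea concrete, here is a small, self-contained sketch (with invented department records) of the kind of normalization an SSOT pipeline performs when folding the ‘XYZ’ / ‘XYZ Corp’ / ‘XYZ, Inc’ variants described earlier into one canonical customer record:

```python
import re

# Hypothetical per-department records for the same customer.
records = [
    {"source": "sales",      "customer": "XYZ",      "phone": "555-0100"},
    {"source": "support",    "customer": "XYZ Corp", "contact": "Pat Lee"},
    {"source": "accounting", "customer": "XYZ, Inc", "terms": "Net 30"},
]

def canonical_key(name: str) -> str:
    """Normalize a customer name: strip punctuation and corporate suffixes."""
    name = re.sub(r"[^\w\s]", "", name).lower()
    for suffix in ("inc", "corp", "llc"):
        name = re.sub(rf"\b{suffix}\b", "", name)
    return name.strip()

def merge_into_ssot(records: list) -> dict:
    """Fold departmental records into one canonical entry per customer."""
    ssot = {}
    for rec in records:
        key = canonical_key(rec["customer"])
        entry = ssot.setdefault(key, {})
        for field, value in rec.items():
            if field not in ("source", "customer"):
                entry[field] = value  # unique fields merge; duplicates collapse
    return ssot

print(merge_into_ssot(records))
# {'xyz': {'phone': '555-0100', 'contact': 'Pat Lee', 'terms': 'Net 30'}}
```

Real-world matching is far messier than suffix stripping, of course, which is exactly why the data architect's ongoing maintenance role matters.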
I have been asked, “Wouldn’t this SSOT just be a data warehouse?” The short answer is yes, but with a couple of major caveats. A data warehouse is indeed a repository of information from disparate sources, but an SSOT has the additional requirement of ongoing maintenance and normalization. It’s not a fire-and-forget solution. Data in business is constantly changing and evolving, and it’s up to the data architect to monitor and maintain the system to make sure it remains the source of truth for the company’s data. Without this data architect role and its associated responsibilities being fulfilled, it would indeed be just another data warehouse, and the analytics and reports generated from the information contained therein would require additional validation steps, reducing the utility of the system as a whole. Likewise, a data lake can also serve as an SSOT, but it requires the same maintenance and normalization as its data warehouse counterpart.
Recently, inroads have been made in using Artificial Intelligence and Machine Learning (AI/ML) to help analyze the prodigious amounts of data generated by large organizations; Amazon's SageMaker is a good example. Even the application of such impressive technology doesn't replace the need for a data architect to orchestrate the process of managing data growth, and in fact introducing AI/ML into the mix adds the need for a dedicated data scientist role alongside that of the data architect.
The complex and evolving nature of managing business data means the problem cannot be solved through tools and technology alone. Proper organization and ongoing analysis and maintenance are critical to the effective use of the vast amount of information available. The old records vaults of the past, which required an army of scribes, accountants, and librarians to maintain, are not gone; they now exist as 1s and 0s within a vast virtual construct. Now, however, the tasks of ordering and understanding this information are the purview of the expert data architect.
LucidPoint Sr. Cloud Engineer
Cloud billing management is critical to success in an environment that can scale quickly. Forecasting, visibility, and control all contribute to a healthy relationship with the cloud, but that relationship can slip in an instant. First, it is important to understand the challenges that arise in order to build a toolkit to solve them. Many companies struggle with billing data being siloed in different clouds, with accurate reporting to cost centers, and with giving the right visibility to the right people.
When using multiple cloud and SaaS services, it can seem insurmountable to get all of that cloud billing data into a single place to manage. Where will this billing data consolidate? Who will manage the infrastructure and tools required for cloud billing? How do different employees access the data they need for their use cases? Don’t be overwhelmed! Just like a jigsaw puzzle, it can be complex, but there are a lot of strategies that are employed to make things easier. Even better news, every puzzle piece has a place, and the end result is always rewarding.
To solve the SaaS and cloud billing management problem, it is worth starting with a tool that can be improved and customized. While it will take time to develop skills and knowledge around new software, significantly more time will be spent building a homegrown solution that tries to address every need, and too many hours and too much tribal knowledge will be siloed into an effort that is hard to justify. At LucidPoint, we use Looker blocks to provide billing insight across our client portfolios and put our puzzle together. In order to achieve positive outcomes, there are a few fundamentals we implement to make the process simple and effective for our clients and ourselves.
First, to optimize cloud billing management tactics, the data needs to live in a single location. It would be difficult to put a puzzle together if you hid the pieces around your house before you began. Keeping financials straight becomes harder to do with every new addition to your service and cloud portfolio. A great first step is to consolidate into a data warehouse such as BigQuery. The analytics layer of your solution might have strong integrations with a data warehouse, so take that into account when deciding where to put your data and what you will use to garner insight from it. Now that all of the pieces are on the kitchen table, it is time to start building the picture.
From our experience tackling this issue, we find that governance and labeling of resources set clients up for cost management success. Applying billing labels through automation is the most robust method of establishing accurate cost association and insights. Developing a consistent approach to billing metadata throughout an IT organization is a large effort, but it's critical for billing accuracy.
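As a toy illustration of why those labels matter, the sketch below aggregates a few invented billing rows (shaped loosely like rows from a billing export into a warehouse such as BigQuery) by a hypothetical `cost-center` label, surfacing unlabeled spend rather than hiding it:

```python
from collections import defaultdict

# Invented billing rows, reduced to the fields we need for the example.
billing_rows = [
    {"service": "Compute Engine", "cost": 120.0, "labels": {"cost-center": "retail"}},
    {"service": "BigQuery",       "cost": 45.5,  "labels": {"cost-center": "analytics"}},
    {"service": "Cloud Storage",  "cost": 10.0,  "labels": {}},  # unlabeled!
    {"service": "Compute Engine", "cost": 30.0,  "labels": {"cost-center": "retail"}},
]

def cost_by_center(rows: list) -> dict:
    """Aggregate spend per cost center; unlabeled spend is surfaced, not hidden."""
    totals = defaultdict(float)
    for row in rows:
        center = row["labels"].get("cost-center", "UNLABELED")
        totals[center] += row["cost"]
    return dict(totals)

print(cost_by_center(billing_rows))
# {'retail': 150.0, 'analytics': 45.5, 'UNLABELED': 10.0}
```

The size of that `UNLABELED` bucket is a useful health metric for the labeling governance effort itself: if it grows, the automation is leaking.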
Now that all the data is assembled in a unified manner, it is important that it does not go unused. No one wants to put the time and effort into a puzzle if they are not able to see the final picture. Business units and the employees within them will have unique use cases. Some need a magnifying glass to zoom in on the individual pieces for which they are responsible. Others want to take a step back to analyze the whole picture. Regardless, identifying the use cases and presenting the data in a digestible way is the last piece of the puzzle, but certainly not the least important. Without feedback from the employees who need to see the billing data, the effort is cast aside.
At the end of the day, an ounce of preparation is worth a pound of remediation when it comes to keeping your cloud billing from getting out of hand. Putting together your jigsaw puzzle is a process that will continue indefinitely, but there are tools and strategies to put new sections together effectively. As a critical part of the business, it is important to have a firm foundation that will allow you to grow into the next set of challenges. Do the research, make a plan, execute your vision, and adjust as necessary!
LucidPoint Cloud Engineer
I recently returned from some off-the-grid hiking in the beautiful North Cascades National Park. It was a perfect time to practice some landscape photography and share these memories with friends and family. This blog post uses the example of sharing vacation photos to explore the concept of Edge Computing and help readers find ways to apply these concepts to modernize IT and answer why edge computing is important.
When hiking, it's important to hike in groups and bring only the essentials to minimize weight and extend distance. While my group has a strict "no laptops / no working policy" during vacation, we do take advantage of the modern convenience that is a smartphone. In this case, we treat our smartphones a bit like the "glass cockpit" in a modern airplane using the map, data, and GPS to help us navigate most efficiently. We keep some paper backups and use a lot of digital redundancy to ensure the failure of one device doesn't disrupt our journey. Travel logistics, backcountry food plans, trail maps, and everything we could plan took place first on cloud services (Google Workspace) for convenience among the group. When the trip began, we took cached copies onto our smartphones so these details were accessible to us off-network. The smartphone, that ubiquitous and rugged-enough edge computing device, became our tether to all things data while hiking through the wilderness.
Before we get into why edge computing is important, this is a good time to define "Edge Computing" which simply refers to bringing computing resources as close to the source of data and users as possible. This is in contrast to "Cloud Computing" where large centralized computer centers provide depth of capabilities but aren't located near users.
Why is edge computing important if it has such limited utility on its own? Because it is implemented with a centralized backend that accommodates data sharing, provides long-term storage, and facilitates deeper analysis. In our case, without network connectivity in wilderness areas, our edge computing devices could only record data to local storage. We had limited ability to share or fully analyze the data and a fixed quantity of storage capacity. Critically, though, we had an equipment form factor that let us effectively use our maps and capture new data (photos, path tracking, etc.). Industrial and commercial applications of edge computing may not be as small as a smartphone, but they are still engineered to offload some data processing needs to remote systems.
Recommendation 1: Tailor edge computing to sunny-day and rainy-day scenarios and treat network failure as inevitable. Ensure sufficient buffering of services and local capability within your operational disruption error budget.
When we consider network connectivity gaps and system failure as being inevitable, we can design the edge computing environment to accommodate them. This is why edge computing is important.
- For a photographer, this means "bring enough storage" in the form of additional memory cards. The design priority is continuing to record new data. A delay in sharing photos or post-processing edits using additional remote resources is tolerable until the network is reachable.
- For a factory process control computer, a network disruption means running factory machines in a fail-safe mode and recording activity data locally. The safety mechanisms are the priority and must function without needing instructions from remote resources. However, a delay in processing output data or receiving order shipping information is tolerable. Sufficient safety and data capture are the priority in this example.
Why is edge computing important to make all this happen? Because the power and elegance of combining edge and cloud computing for sharing photos became truly apparent at the end of the trip, when we were all awaiting departure at the airport. Once reconnected to LTE and Wi-Fi, our photos and fitness data were automatically synchronized to cloud storage. Secure access was granted to the group members and extended as read-only to social networks. There is enough computing, storage, and networking capability in a smartphone that I no longer have to wait until I get home (or to a photo lab) to process and upload photos to share. Millions of photos are shared securely this way every day, and the same can happen to any other digital information your business runs on.
Recommendation 2: Edge computing is also important because it enables the distributed system to recover automatically and efficiently once connectivity returns.
- For a photographer, this means creating offsite backup copies of photos automatically when network access is detected. This ensures the loss or failure of an individual device doesn't affect the rest of the data.
- For a factory producing hard goods, this means that output data is uploaded automatically when network connectivity is restored. This eliminates the need for manual data entry to cover the period of downtime and allows checking for fresh data from central systems, such as newly added orders and shipping information.
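The pattern behind both bullets can be sketched in a few lines: a bounded local queue that always accepts new records and drains to the central system when connectivity returns. The class, capacity, and in-memory "remote store" below are illustrative stand-ins, not a real product API:

```python
import time
from collections import deque

class EdgeBuffer:
    """Queue records locally while offline; flush when the network returns."""

    def __init__(self, capacity: int = 1000):
        # Bounded queue: if storage fills, the oldest records are dropped,
        # an explicit design choice within the disruption error budget.
        self.queue = deque(maxlen=capacity)
        self.uploaded = []  # stands in for the remote/central store

    def record(self, payload) -> None:
        """Always succeeds locally, regardless of connectivity."""
        self.queue.append({"ts": time.time(), "payload": payload})

    def on_network_restored(self) -> int:
        """Drain the local queue to the central system in arrival order."""
        while self.queue:
            item = self.queue.popleft()
            self.uploaded.append(item)  # real code would POST and retry here
        return len(self.uploaded)

buf = EdgeBuffer()
buf.record("photo_0421.raw")
buf.record("gps_track_segment_7")
print(buf.on_network_restored())  # 2
```

The key property is that `record` never depends on the network, while `on_network_restored` preserves ordering, which is exactly what both the photographer and the factory need.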
A now-famous example of why edge computing is important is the distributed edge computing model implemented in Tesla's connected cars and Autopilot software. In the linked article, a network failure was noted as leaving users unable to unlock their cars using their phones. In Tesla's design, some convenience features, such as using the phone as a key and feature software updates, are unavailable if network connectivity is lost. However, the core functions of maintaining occupant safety, manual control of vehicle dynamics, and manual ingress/egress procedures are preserved. Tesla provides users with a regular key that serves the same function as a paper map on a hiking trail: a backup for the core function should the more convenient option cease to work.
Whether it's a critical financial analysis spreadsheet, CAD drawing, or media publication we find users expect to access data from anywhere at any time to stay productive. That’s why edge computing has so much importance in the modern era? Because today we use cloud identities to authenticate and securely access data from anywhere, not just a specific place in the network like a VPN or office computer. Today we can use the limitless capacity of cloud storage to publish endless fidelity of data to meet our user's needs wherever they are. Today we can provide users with powerful edge computing devices with the capacity and capability to maintain productivity throughout system and network disruption. We can use these edge and cloud computing concepts to make sharing business data as easy as sharing on social media and with failsafe operating and endpoint security to protect ourselves.
All of this is why edge computing is important in today’s world. What's stopping your business from using smart edge devices like smartphones to empower users and present your business data securely while on the go?
LucidPoint Sr. Cloud Architect
Why do you need a data warehouse? You’ve already got a database, and it has all of your business information inside it. You’ve been getting reports from this database for years; why incur the additional expense of a data warehouse too? What is a data warehouse, and how does it differ from a database?
A database, to most, is a repository of information from an application (typically a single application, but sometimes many). This database is designed to make transactional systems run efficiently and is typically an OLTP (online transaction processing) database. It allows concurrent, real-time access to the data in a secure manner while maintaining integrity, reducing redundancy, and restricting access to protected data.
So, if you were to ask us what a data warehouse is: it is another layer added on top of an existing database or databases, designed to perform analytical requests on the data effectively and efficiently. A data warehouse imports data from one or more source datasets into its own data structure. Regardless of the internal designs and processes underneath, the warehouse can then be used to run complex, multidimensional queries on the data without affecting the production data environment, and without being affected by it. These queries and reports can also run more quickly because of the way the data is structured, and because a data warehouse can accept data from multiple sources, analysis and reports can be run against a broad range of data, such as sales, customer support, and application data. A data warehouse is also used to store historical data for longer periods of time while keeping it available for comparative analysis. This is especially true for trending data, where you don’t need historical information or metadata cluttering the transactional/primary database, but where that history can be used to monitor growth trends. A growing use of this kind of warehouse is AI/ML (artificial intelligence/machine learning) analysis of vast amounts of historical data to help make better-informed business decisions.
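The import-then-analyze pattern described above can be sketched in a few lines. This is a minimal illustration using SQLite in-memory databases as stand-ins for both the transactional system and the warehouse; the table and column names are purely illustrative, not a real schema:

```python
import sqlite3

# Stand-in "transactional" (OLTP) database; names are illustrative
oltp = sqlite3.connect(":memory:")
oltp.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ts TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',   120.0, '2023-01-05'),
        (2, 'acme',    80.0, '2023-02-10'),
        (3, 'globex', 200.0, '2023-02-11');
""")

# Stand-in "warehouse": import a snapshot of the source data,
# reshaped for analysis (here, pre-extracting the month)
dwh = sqlite3.connect(":memory:")
dwh.execute("CREATE TABLE fact_orders (customer TEXT, amount REAL, month TEXT)")
rows = oltp.execute(
    "SELECT customer, amount, substr(ts, 1, 7) FROM orders"
).fetchall()
dwh.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", rows)

# The analytical query runs against the warehouse copy, not production
report = dwh.execute(
    "SELECT month, SUM(amount) FROM fact_orders GROUP BY month ORDER BY month"
).fetchall()
print(report)  # [('2023-01', 120.0), ('2023-02', 280.0)]
```

The key point is the separation: however heavy the `GROUP BY` analysis gets, it touches only the imported snapshot, leaving the transactional database untouched.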
Alongside knowing what a data warehouse is, it is important to note that many characteristics of a transactional database do not work well for analytics. As the amount of data grows with the business, the reports used to make business decisions take longer and longer to run. Likewise, a data warehouse does not work well as a primary database, as its data is not easily amenable to rapid or atomic change. In fact, one of the more common ways to use a data warehouse is with transient data, where data sources are loaded periodically from snapshots, analyzed, and then purged for the next batch of data. How is a data warehouse structured? The internal structure and organization of a data warehouse can differ from a traditional database, up to and including columnar storage and parallel, sharded, and clustered processing, which allow for greater processing ability. The benefits of a columnar versus row structure in a data warehouse will be covered in a future post.
The key reasons to use a data warehouse over a collection of disparate data sources are that it allows complex queries to be executed outside of the production environment, it allows related information spread across multiple sources to be analyzed together more easily, and it speeds up the analytical and reporting processes, especially when massive amounts of data are involved. The best reason to use a data warehouse I have seen is that it offloads processing from the primary or production database, freeing up those resources and allowing analysis and reporting to run at any time. A data warehouse hosted in the cloud can also be spun down when not needed, and additional resources can be allocated on demand, both of which can yield significant cost savings without impacting either the production database or the data warehouse.
LucidPoint Sr. Cloud Engineer
In the cloud era, where version control, reliability, repeatability, and idempotency are top of mind, it seems like everything-as-code is the new norm. Public cloud and private infrastructure resources are most effective when their lifecycle, configuration, and security are managed in code and deployed through automation. These practices improve consistency and supportability, and close security holes. This powerful combination of source code management and automated deployment has been used in the software development space for a long time with great success. How can we take this concept and apply it to broader business areas? Why would we want to do this? To make "happiness-as-code" a reality for everyone in our organization. Organized data leads to organized and happy humans. Check out this talk given by Seth Vargo, a Developer Advocate at Google, as he dives into the intricacies of everything-as-code and everything you should know to get started.
At LucidPoint, we think all businesses are data-driven organizations. Whether you're managing source code, customer sales records, or even a staffing schedule, these are all data that multiple humans need to interact with over time. Given the importance of tracking this business information, how do you manage copies of this data for backup? How do you collaborate on improvements to that data or configuration over time? How do you secure access to that information?
All of these practices can be embodied in an effective source control and collaboration system like GitLab. At LucidPoint, we use GitLab not only for our own source code development, but also for a number of additional business functions.
GitLab has become a central repository for all of our electronic records that would otherwise be scattered across disconnected tools. Consolidating these items onto a common platform lets us maintain a single pane of glass and build valuable integrations between different data types and tasks. All of the configuration rules, data templates, access controls, and more are managed as code directly in GitLab. Collaboration and conflict resolution, whether in complex source code or a sales opportunity record, rely on GitLab's excellent branching and merging capabilities.
For example, as a sales opportunity is tracked and eventually booked by our sales team, all of the customer information, schedule, and resource requirements are captured together. This information feeds into resource scheduling and allows us to securely grant access to delivery engineers and other teammates. As projects progress, related support requests and code enhancements are managed in similar work streams, with issue tracking and branching and merging for changes. Since all of this data lives in one system, security policy is simplified through GitLab's group management. Sales and contact data feed directly into the delivery engineers' work; no complex CRM-to-engineering-tool integration is needed. The data naturally moves from task to task and across workstreams within the same GitLab environment.
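When business records are managed as code, the same CI checks that guard source code can guard the records. Below is a hedged sketch of the kind of validation a pipeline job could run on a sales-opportunity file before a merge is allowed; the field names and JSON format are hypothetical, not LucidPoint's actual schema:

```python
import json

# Hypothetical required fields for a sales-opportunity record
REQUIRED = {"customer", "schedule", "resources"}

def validate_record(text: str) -> list[str]:
    """Return a list of problems; an empty list means the record passes CI."""
    try:
        record = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    errors = []
    missing = REQUIRED - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    return errors

# A record that would pass review and merge cleanly
record = '{"customer": "Acme", "schedule": "2023-Q3", "resources": ["eng-1"]}'
print(validate_record(record))  # []
```

A failing check blocks the merge request just as a failing unit test would, so malformed records never reach the shared mainline.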
GitLab helps LucidPoint manage all of our electronic records and collaborate across time zones, backup/restore the data, and secure access to our most important information. This is a happy data management solution with happy humans participating. Happiness-as-code achievement unlocked!
Author: Eric Lozano
Sr. Cloud Engineer
Continuous Integration and Continuous Deployment (CI/CD) management pipelines are a necessity in large companies; however, small teams can also take advantage of the many benefits they provide. Developers are able to focus on security, code quality, and business needs because the deployment process is entirely automated. These benefits keep a company more agile and focused when responding to new challenges.
Continuous integration (CI) is the process of merging developers’ work into the source code multiple times a day. In a pipeline context, it is a set of tools and tests that check that proposed changes don’t break any of the existing code. Continuous deployment (CD) is the automation required to push a validated update out to users. Married together, these two concepts create the CI/CD management pipeline: an automated process that merges, tests, and deploys code in a predictable, efficient manner.
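The merge-test-deploy flow described above can be sketched as a sequence of stages that halts at the first failure. This is a deliberately simplified model; stage names are illustrative, and real pipelines (GitLab, GitHub Actions, etc.) add parallelism, artifacts, and environments on top of this basic shape:

```python
def run_pipeline(stages):
    """Run stages in order; stop at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None).
    """
    completed = []
    for name, stage in stages:
        if not stage():             # each stage returns True on success
            return completed, name  # halt: later stages never run
        completed.append(name)
    return completed, None

stages = [
    ("build",  lambda: True),   # compile / package the merged code
    ("test",   lambda: True),   # unit and integration tests
    ("deploy", lambda: True),   # push the validated build to users
]
print(run_pipeline(stages))  # (['build', 'test', 'deploy'], None)
```

The halt-on-failure property is the whole point: a broken test stage means the deploy stage never runs, so bad code cannot reach users.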
CI/CD management pipelines are now the norm for good reason. Beyond making developers happier, deployments have become more reliable with fewer errors, which results in better allocation of technical resources. Any high-powered development team needs... you guessed it, high-powered programmers. To attract and retain talent, developers want environments that let them create and build rather than fight through the minutiae of errors and broken deployments. Pipelines reduce hands-on time and open doors to solve issues quickly. When pipelines are built properly, they increase deployment reliability and reduce the errors that make it to production environments. The up-front effort to integrate proper unit tests, integration tests, and descriptive feedback returns on the investment many times over. When these pieces come together, developers get to spend more time enhancing features and building value instead of trying to tame a wild codebase. The combination of happy, engaged developers and the increasing value of a company’s product makes it a win-win situation for all parties.
As CI/CD management technologies, both commercial and open source, become more widely available, it has never been easier to build and deploy a basic pipeline for your development team. What was once only necessary for the big players can be leveraged by small teams without heavy up-front investment to get started. Git services such as GitLab provide powerful pipeline and CI/CD management tools that let developers focus on the pipeline stages themselves instead of the complexities of the infrastructure that executes them. In modern or hosted CI environments, all of the development effort goes toward improving the pipeline stages and robust testing, reducing the overhead of deployments and increasing quality. In small teams, collaboration and peer review are streamlined into a central workspace that acts as a single source of truth.
With the velocity of business in the modern world, every advantage to pursue revenue and create value is necessary. Pipelines enable developers and companies to build quality software at unprecedented speed. Keeping engineering teams engaged and customers happy is critical to success, and pipelines are the first step in opening the doors to long-term achievement. For more information on the specifics of pipelines used today, check out the introductory resources from GitLab and GitHub.
LucidPoint Cloud Engineer
Stuck waiting on IT resources? Is your business continuing to be held back by the length of time it takes to spin up VM servers or to add network and storage capacity? Is there a new application or capacity upgrade that's needed right away? What is corporate governance and how does one improve upon it?
The barriers to building new IT services for your business have never been lower: the latest hardware and software technology have become commoditized and are delivered through many flexible consumption models. In contrast with the value of high-velocity technology innovation are the complexities of maintaining security controls, reliable deployments, network scale, and lean resource allocation. These challenges have kept many common IT practices in the slow lane for too long, waiting for human approvals and waterfall-style resource planning processes.
When your IT infrastructure and user base have a need for speed, public cloud self-service elasticity can help. Whether it's simply faster resource availability or prototyping the latest high-performance CPU and storage with a new application, self-service operations effectively grow an organization's IT staff without adding headcount.
How can you balance the high velocity needs of your organization with the controls required to keep your data, finances, and users secure? How do you prevent greedy users from going "Maverick"? And how can corporate governance improve these processes?
Governance policy to the rescue!
If you think governance only means virtual handcuffs, slowed progress, and endless meetings that feel like "fighting City Hall", you've got the wrong idea. It has been well proven that properly implemented corporate governance policy helps IT users move faster, because more users can self-service and fulfill their own requests through automation while staying within a business-compliant financial, security, and consumption model.
If we use a transportation metaphor, imagine commuting to work in your new electric car without governance like speed limits, lane keeping, or traffic lights. You wouldn't be able to get to work, since the streets would be gridlocked and full of crashed vehicles. The governance of the road helps us all move faster in the same direction. It's also important that the governance rules, like speed limits, are set reasonably: fast enough for efficiency, yet slow enough for safety.
Knowing how to implement governance policy for corporate IT services and taking advantage of the automation inherent to public cloud is critical to enable self-service operations. Users can be free to do as much or as little as an organization decides. While defining the detailed governance rules themselves can be challenging when starting from scratch, eventually the rule book becomes well known and everyone can benefit. Technology partners like LucidPoint can help you get started introducing meaningful methods of how to improve corporate governance policy to your organization and provide data points from similarly sized peers in common industry verticals.
How do we improve corporate governance? It's no secret that public cloud providers want their tenants to consume as many paid resources as possible. After all, these providers are revenue-driven businesses. In fact, most public cloud providers set several "unlimited" or "uncapped" utilization quotas by default. This allows you to consume as much compute, storage, or network as you ask for, but it can also lead to accidental overages. What's interesting about elastic cloud resources at hyperscale is that a low-quality or misconfigured request can drain your whole IT budget in an instant. This contrasts with traditional on-premises IT services, which are constrained by fixed quantities of compute and storage; public clouds provide resource elasticity to accommodate huge requests. If a user sends an expensive resource request, the cloud services can allocate resources dynamically, churn through the request, return results quickly, and bill you for the utilization. When cloud users don't have governance in place, or don't understand the nuanced details of cloud resource billing, surprisingly high billing charges disrupt IT budgets. These dangers can kill the success of a strategic cloud adoption, or reinforce the slower waterfall processes that route IT requests through manual scrutiny in order to police a requestor's behavior.
How to Improve Corporate Governance for Public Cloud:
1 - Use the cloud platform's security policy constraints to set a tight default security posture for all users and workloads. Administrators can override constraints as business needs justify it while ensuring the adopted defaults are inherently appropriate for your organization.
2 - Disable unlimited default quotas where possible. Set reasonable consumption limits within expected orders of magnitude for resource utilization. This prevents accidental billing overages.
a - Google BigQuery Custom Quotas
b - AWS Service Quotas
3 - Use billing analytics and budget tools to alert on current conditions and predict future spending. Controlling costs requires effort: both measurement and continuous improvement.
4 - Treat Cloud Service consumption differently than traditional on-premises IT infrastructure. The challenges and opportunities of cloud are inherently different when using elastic resources and may not align well to your existing processes for control and monitoring.
5 - Educate your user community on the power and responsibility of self-service operations. Move faster by moving together safely and intelligently.
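As a toy illustration of the budget alerting in step 3, the core logic is just comparing current spend against threshold fractions of a budget. The thresholds and the shape of this function are illustrative; a real deployment would wire this behavior through the cloud provider's own budget and alerting tools rather than hand-rolled code:

```python
def budget_alerts(spend: float, budget: float, thresholds=(0.5, 0.9, 1.0)):
    """Return the threshold fractions the current spend has crossed.

    Each crossed threshold would trigger a notification (email, chat, etc.)
    in a real budget-monitoring setup.
    """
    return [t for t in thresholds if spend >= t * budget]

# Spend of $950 against a $1,000 budget has crossed the 50% and 90% marks
print(budget_alerts(spend=950.0, budget=1000.0))  # [0.5, 0.9]
```

Alerting at graduated thresholds, rather than only at 100%, gives teams time to investigate a runaway workload before the overage actually lands on the bill.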
With a solid foundation of governance to control data security, spending, and more, opening up the self-service pathways for broader user communities becomes possible. With self-service capabilities governed to business requirements, users anywhere in the organization can experiment, deploy, and grow the technology capabilities of your business. Free IT staffing through distributed self-service? Yes, please!
Key takeaway: In the cloud era, knowing how to improve corporate governance enables speed of innovation and reinforces safety. Governance prevents a Maverick, but satisfies the ongoing NEED FOR SPEED.
LucidPoint Sr. Solutions Architect