With the continued growth and evolution of cloud computing, there is an expanding number of ‘serverless’ solutions to choose from. But what does the term ‘serverless’ actually mean, and how can it benefit you and your projects?

What is a serverless architecture?

It can be viewed as an evolution of the Cloud. Just as the move into a cloud environment meant a move away from physical servers into a more virtualized system, a move into the serverless realm means a move away from basic resource management. To better understand this, it’s important to note that there are two different kinds of ‘serverless’: Backend as a Service and Function as a Service.

Cloud Providers, Where Serverless Lives

In a serverless model, cloud providers like AWS or Google Cloud Platform (GCP) run the physical servers in their data centers and allot those resources to users, who can then deploy code into production. As mentioned above, ‘serverless’ typically falls into two categories: Backend as a Service (BaaS) and Functions as a Service (FaaS).

What is BaaS in serverless architecture?

BaaS provides a specific service, such as storage, and handles all of the underlying requirements for you: operating system, low-level communication, sizing, and sometimes even services like backup and archiving. An application programming interface (API) and/or user interface is provided so you can integrate the service into your environment. Amazon S3, DynamoDB, AWS Cognito, Auth0, Google Cloud Storage, and Firebase are all examples of Backend as a Service.

What is FaaS in serverless architecture?

FaaS gives you the ability to create and execute server-side logic in a generic, virtualized, stateless environment that is fully managed by the provider. This logic is deployed in containers on a fully managed platform and executed on demand. Google Cloud Functions and AWS Lambda are examples of this service.

BaaS and FaaS services are generally intended to be used together to create applications, and it’s in this interaction that you encounter fundamental differences in application architecture, for better or for worse. The traditional architecture was often simple client/server communication: the client application sends a request to the server, and the server processes the request (usually involving a database) and returns a response. What does serverless architecture change in this model? Instead of a single server handling authentication and processing, these tasks are handled by distinct microservices. Authentication, API requests, database lookups, and retrieval can now all be driven from the client level. Such a design comes with its own set of benefits and drawbacks.

Things to think about when considering serverless
Serverless is an architecture which utilizes remotely provided services, either wholly provided or custom-built, to construct or augment an application. Despite the drawbacks, serverless offers some significant advantages, especially when it comes to development time and cost, scaling, global deployment, operational cost and management, and resilience. It’s definitely not the right approach for every problem, and it cannot replace all of your existing architectures, but being able to offload critical components can reduce your exposure to ‘tech debt’ and offers added protection against obsolescence. Quite often the best way forward is either a phased rollout, replacing pieces of a legacy application with microservices until the migration is complete, or using serverless options only in new development. It is important to recognize what a serverless architecture is and how it can be used as a tool, implemented in a way that provides the desired service. This may require a change in design or architecture for those more accustomed to traditional, server-based solutions, but it is an avenue worth exploring.
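To make the FaaS model described above concrete, here is a minimal sketch of a serverless function. It assumes AWS Lambda with Python and boto3 behind an API Gateway route, and the ‘orders’ table and field names are hypothetical; the point is that you write only a small piece of stateless logic, the provider runs and scales it, and a BaaS service (DynamoDB) holds the state.

```python
# Hypothetical AWS Lambda handler illustrating the FaaS model: the provider
# manages servers, containers, and scaling; you supply a stateless function
# and lean on a BaaS service (DynamoDB) for state.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical DynamoDB table name


def handler(event, context):
    """Triggered by an API Gateway request; looks up a single order and returns it."""
    path_params = event.get("pathParameters") or {}
    order_id = path_params.get("orderId")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "orderId is required"})}

    result = table.get_item(Key={"orderId": order_id})
    item = result.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "order not found"})}

    # default=str handles DynamoDB Decimal values that json can't serialize natively.
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```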
Brandon Barker, LucidPoint Sr. Cloud Engineer
What is least privilege in the IT world? We are conditioned to always grant least-privilege access by daunting audits and constant, mind-numbing security reminders. In the support admin world, those two words, ‘least privilege,’ once spoken, are enough to quiet a noisy room of engineers. We often hear, “Well, we are just trying to maintain least privilege access, so we can’t give you x.” Once those words are uttered to developers, you had better grab your battle gear, because you know it will be an uphill fight for what needs to be done. But for the select few, nay, the fortunate few, when those two worlds collide there is often a productive conversation about what least privilege means.
Least Privilege Defined

We can argue over what least privilege means in your context, in my context, or in someone else’s context, but let’s at least get a working definition. NIST defines the principle of least privilege as follows: “The principle that users and programs should only have the necessary privileges to complete their tasks.” It’s been a long time since I’ve been in an English class, but I’m going to take a stab at this one. It seems to me there are several keywords in this sentence that help define what least privilege is.

Necessary

The first keyword is “necessary.” Necessary means you give what is needed, and you do not give what is not needed. So what does this look like in an organization? As you work on something, you must think through which permissions are needed to do the job you are doing. Your scope of permissions is designed to get the current task done, not what might need to be done in the future. This demonstrates a thorough understanding of the task at hand and lets you lead the conversation with those who own the permissions about what you do and do not need. The “more is better” train of thought should never be entertained.

Complete their tasks

Next is “complete their tasks.” In order for a user or program to complete their tasks, they must be given the permissions necessary to be successful. So what does this look like in an organization? Security brings great value to an organization and, done well, can be an enabler of innovation. Instead of saying no to everyone who needs access, entertain the idea. You are the expert; therefore, drive the conversation to understand and educate users as to why they do or do not need access to complete their tasks. Being a brick wall helps no one in the organization.

Tools To Help

Google Cloud is ready when you are to have these conversations. Even if you do not know where to start, we can help.

Google Cloud Predefined Roles

How easy can it be? Well, sometimes pretty easy. Google Cloud has used its knowledge of its services, and of how companies implement them, to distill access levels into predefined roles that can be used to guide you in giving people and programs access. Learn More Here

Google Cloud IAM Recommender

Still not convinced you have the right permissions for the right users or programs? IAM Recommender can help you see which permissions have and have not been used within the last 90 days. This gives you insight into which permissions are actually being used on Google Cloud. Learn More Here

Andrew Eller, Solutions Architect
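As a quick illustration of the predefined-roles approach described above, here is a rough sketch of granting a narrowly scoped predefined role instead of a broad basic role like Editor. It assumes the google-api-python-client library and application-default credentials; the project ID, service account, and role choice are hypothetical examples.

```python
# Sketch: bind a member to a narrow predefined role (Storage Object Viewer)
# rather than a broad basic role such as roles/editor.
# Assumes google-api-python-client and application-default credentials;
# the project, member, and role below are hypothetical.
from googleapiclient import discovery

PROJECT_ID = "my-example-project"
MEMBER = "serviceAccount:report-reader@my-example-project.iam.gserviceaccount.com"
ROLE = "roles/storage.objectViewer"  # only what the task actually needs

crm = discovery.build("cloudresourcemanager", "v1")

# Read the project's current IAM policy.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Add the member to the predefined role's binding, creating the binding if needed.
binding = next((b for b in policy.get("bindings", []) if b["role"] == ROLE), None)
if binding is None:
    binding = {"role": ROLE, "members": []}
    policy.setdefault("bindings", []).append(binding)
if MEMBER not in binding["members"]:
    binding["members"].append(MEMBER)

# Write the updated policy back.
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```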
Controlling costs has become a challenge for companies as they move to the cloud. Whether you are a small or large business, everyone wants to save money. Teams are globally dispersed, projects are more specialized, and changing initiatives contribute to complex cost reports. Beyond the intricacies of the reports themselves, companies need to think through who is granted access to them, ensuring the correct teams have the right access in order to be successful. Business intelligence tools, like Looker, make the day-to-day management of these priorities easier.
Google Cloud Platform (GCP) has a vast amount of reporting capability, but it introduces risk when different teams are accessing data in a production environment. In order to decouple that data, we need BI tools. Looker, a Google-owned product, can plug directly into a customer’s billing data. It also has out-of-the-box marketplace features, so there is no extra implementation beyond setting up a connection. Using Looker gives teams much greater agency, trading capital expenditure for reduced operational expenditure.

The beauty of Looker is that it allows non-technical staff to run “queries” and drill down to the information they need with little to no SQL experience. Companies simply need a Looker administrator to enable teams, freeing engineers from writing simple queries and building reports. Looker can also be used to create customizable dashboards that can be managed and shared by individuals, teams, or departments. These dashboards allow users to drill down and explore beyond what is being displayed. For example, if a visualization only shows December, you can drill down and change it to look at November instead. If you want it for later, save it as a Look and you will be able to plug it into any dashboard that you have access to!

Consolidation and flexibility are the name of the game in an ever-changing world of technology. Looker allows all employees to pull the data relevant to them without needing a dedicated team to build functionality for them. Enabling businesses to work faster and with more information leads to quick, effective decision making, propelling you forward to solve tomorrow’s problems today.

Clay Hurford, LucidPoint Cloud Engineer
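The drill-downs and Looks described above live in Looker’s UI, but the same saved Look can also be pulled programmatically. Here is a rough sketch using Looker’s Python SDK (looker_sdk); the Look ID is a hypothetical placeholder, and credentials are assumed to be configured in a looker.ini file or environment variables.

```python
# Sketch: pull the results of a saved Look with Looker's Python SDK.
# Assumes the looker_sdk package with API credentials configured via
# looker.ini or environment variables; the Look ID is hypothetical.
import looker_sdk

sdk = looker_sdk.init40()  # reads the Looker host and API credentials from config

# Run a saved Look (for example, a cost-by-project view) and get CSV back,
# ready to drop into a spreadsheet or another report.
csv_results = sdk.run_look(look_id="42", result_format="csv")
print(csv_results)
```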
March of 2020 began as you would expect: people were looking forward to spring and planning their summer vacations. All of those plans would soon unravel as the COVID-19 pandemic dug its claws into the lives of everyone around the world.
Enter COVID-19

A client called on us for help. They needed to move to remote work as fast as possible. This client is in a regulated industry with high-security obligations, and the vast majority of their employees went into the office each and every day. Like many companies, their initial thought was to purchase, configure, and ship laptops to their remote staff. The pandemic had different ideas: the supply chain was in shambles, laptops were hard to find, not to mention the nightmare of shipping, receiving, configuring, and re-shipping hundreds of laptops. The company did have a small Virtual Desktop Infrastructure (VDI) solution, but it was already over capacity and wasn’t very popular with the staff.

Enter AWS Workspaces

AWS Workspaces came to mind. It’s an alternative to laptops or desktops and perfect for the work-at-home scenario. You can think of it as Amazon’s take on VDI, and it’s easy to deploy and scale. AWS has a Windows Server 2016 installation designed for use as a desktop, and it also offers Windows 10 Enterprise under the BYOL (Bring Your Own License) umbrella. Unlike traditional VDI implementations, where a limited amount of computing resources is available and there is almost always contention for them, AWS has enough overall capacity that you never feel like you’re sharing resources with others.

While the quota for how many Workspaces you can have starts relatively low, there is a process in place to request an increase. I requested several thousand Workspaces for our client. While AWS wanted to verify they had the capacity to do so (keep in mind that ours was one of many organizations asking for large numbers of Workspaces that same week), they rapidly authorized the quota increase. Since Workspaces, like almost all AWS services, provides a rich API for provisioning, I was able to quickly automate the setup of several thousand Workspaces.

Workspaces plays nicely with the systems you may already use, like Active Directory, and with management tools such as Ivanti or Endpoint Configuration Manager. AWS also provides its own application-layering tools to customize images for various individuals or departments without managing a pile of different VM images, which helps if you don’t already have a strategy for managing those assets. This client had four distinct Active Directory server farms. I connected to all of them in one account and provisioned Workspaces out to each accordingly. This work took only hours to execute. The entire workforce could begin working from home, with their computers behaving as if they were in the office and with the appropriate security controls still in place. Within days, AWS Workspaces had accommodated their needs.

Additionally, the client could issue older hardware, like outdated laptops, that only needed to run the AWS Workspaces client. This saved the client tens or hundreds of thousands of dollars in new hardware costs while maintaining the security their regulatory bodies require. And once life returns to normal with people back in the office, you won’t have a pile of hardware that you purchased and no longer have a use for; instead, you simply reduce the number of AWS Workspaces you have provisioned. AWS Workspaces offers flexibility in terms of billing, usage, and applications, plus it is easy to use. Customer satisfaction with the solution is fantastic, and their help desk couldn’t be happier.
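The provisioning automation mentioned above boils down to driving the Workspaces API in a loop. Here is a rough sketch with boto3; the directory ID, bundle ID, and usernames are hypothetical placeholders for values that would come from Active Directory and your chosen bundles.

```python
# Sketch: bulk-provision Amazon WorkSpaces with boto3. The directory ID,
# bundle ID, and usernames below are hypothetical placeholders.
import boto3

workspaces = boto3.client("workspaces")

DIRECTORY_ID = "d-90670000aa"            # hypothetical AD Connector / directory
BUNDLE_ID = "wsb-0000000aa"              # hypothetical WorkSpaces bundle
usernames = ["jdoe", "asmith", "bchan"]  # hypothetical AD usernames

requests = [
    {
        "DirectoryId": DIRECTORY_ID,
        "UserName": user,
        "BundleId": BUNDLE_ID,
        # AUTO_STOP bills hourly and stops idle Workspaces to control cost.
        "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
    }
    for user in usernames
]

# CreateWorkspaces accepts up to 25 requests per call, so submit in batches.
for i in range(0, len(requests), 25):
    response = workspaces.create_workspaces(Workspaces=requests[i : i + 25])
    for failed in response.get("FailedRequests", []):
        print(f"{failed['WorkspaceRequest']['UserName']}: {failed.get('ErrorMessage')}")
```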
Overall, AWS Workspaces made a complicated process easy, and I don’t know if it could have been any better even if we’d planned far in advance. Our goal at LucidPoint is to find the right IT solution for your purposes. Whether it’s AWS Workspaces or something else, we believe IT should serve your needs.

Chris Moates

Part 2: Technical debt: How to overcome the excessive weight of your outdated infrastructure
1/13/2021
Jeff Darrish
Since I was 6 years old I knew I wanted to be a fighter pilot, so it was no surprise that Air Force pilot training was one of the best years of my life. One of the many highlights was my first flight in the T-38 Talon. The world’s first supersonic jet trainer, aka “the white rocket,” was loved by everyone who flew it. We called that first flight, or dollar ride, the “zoom and boom.” The profile called for an unrestricted full-afterburner climb, then accelerating well beyond Mach 1.0 and breaking the sound barrier. What Chuck Yeager struggled to do, we got to achieve on our very first flight in the T-38.
One of the challenges with supersonic flight is that as you approach the speed of sound, drag increases significantly.

At LucidPoint we help our clients migrate software applications to the Cloud, which seems fitting given my previous profession. I love watching our team of rockstar engineers and architects help IT organizations adopt the goodness that comes with their applications running in public Clouds such as Google Cloud Platform (GCP) and AWS. On occasion, though, IT organizations are resistant to moving to the Cloud. Security concerns, lack of budget, lack of staff, fear of the unknown, and fear of no longer having a job are all common barriers to Cloud adoption. Sometimes the faster an organization is moving towards the Cloud, the more resistance there seems to be. Truth be told, not all applications are appropriate for the Cloud, just like not all airplanes are designed to be supersonic.

So don’t let the ‘Cloud barrier’ keep your business from going fast! Once you break through, there is smooth air on the other side. Who doesn’t want their application going Mach 2? If you need help determining which applications are fit for the Cloud, or you just want to talk about airplanes, give us a call. LucidPoint is here to help!
Mike Fontaine
LucidPoint Co-Founder
With the fast pace of agile development cycles, it’s hard enough to keep up with business demand for on-premises resources. Add to that the operational aspects of monitoring and alerting, and you have a full-time job. But what happens when your security team approaches you with a high-severity Linux kernel CVE that could allow root privileges on the host node? You’ve got containerized workloads running all over the place and no easy way to patch without disruption.
That’s one way Anthos stands out. With the ability to configure GKE’s node auto-upgrade, make use of Kubernetes Pod Disruption Budgets, and take advantage of new features like surge upgrades, Anthos delivers fewer operational headaches when it comes to remediating those security vulnerabilities. By utilizing these features, you can ensure the availability of business services and keep your business safer.
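A Pod Disruption Budget is the piece that lets those node upgrades drain workloads without dropping a service below the capacity it needs. As a rough sketch (using the official Kubernetes Python client; the app label, namespace, and replica floor are hypothetical), declaring one looks like this:

```python
# Sketch: create a PodDisruptionBudget so voluntary disruptions such as node
# auto-upgrades or surge upgrades never evict the app below two ready pods.
# Assumes the `kubernetes` Python client and a valid kubeconfig; names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

pdb = client.V1PodDisruptionBudget(
    api_version="policy/v1",
    kind="PodDisruptionBudget",
    metadata=client.V1ObjectMeta(name="payments-pdb", namespace="default"),
    spec=client.V1PodDisruptionBudgetSpec(
        min_available=2,  # keep at least two pods running during voluntary disruptions
        selector=client.V1LabelSelector(match_labels={"app": "payments"}),
    ),
)

client.PolicyV1Api().create_namespaced_pod_disruption_budget(namespace="default", body=pdb)
```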
For more information, read this Google Cloud blog post about how surge upgrades can improve your operational efficiency:
https://cloud.google.com/blog/products/containers-kubernetes/introducing-surge-upgrades-for-anthos-gke
Andrew Eller
LucidPoint Engineer