GitLab brings three critical software functions under one umbrella: DevSecOps, or Development, Security, and Operations. The GitLab platform stores code, tracks changes, and automatically tests software for developers. It even helps them manage tasks. As a result, GitLab processes enormous amounts of data.
Using AI in this environment introduces all sorts of potential complications. Is the data GitLab processes secure? How does the company scale its AI at a strategic level? And can it protect its clients’ proprietary data?
For Ashley Kramer, Chief Strategy and Marketing Officer at GitLab, these challenges don’t stand in the way of embracing AI. Kramer has helped lead GitLab’s AI strategy so the company can use AI as a “force multiplier,” despite these security and data challenges. In our full webinar below, you’ll see how Kramer’s “top-down” approach to AI implementation has made it possible to embrace the AI future—even in the face of highly technical data challenges.
Watch the webinar instead: The Spotlight with GitLab’s Ashley Kramer
AI Adoption and Integration at an Organizational Level
When Kramer started at GitLab, there were other priorities beyond AI. Kramer pushed for its wider use, believing it could help the company scale its business. The marketing team, focused on customer-facing improvements, initially hesitated. But when tools like ChatGPT and Copilot arrived, they could no longer ignore AI’s benefits. AI was going to change coding irrevocably. “[AI was affecting] things that help sell to developers, help developers write code, and we had a moment at GitLab,” said Kramer. “We realized we had to make AI a top priority.”
The new tools were a wake-up call for GitLab. Implementing AI could mean all sorts of competitive advantages for marketing, like improving how well GitLab targeted its messages.
However, implementing AI in an organization like GitLab also required some strategic thinking. Many businesses begin with a “bottom-up” approach: individual workers use AI to become more efficient, but AI isn’t part of the team’s overall strategy. Kramer set out to reverse this process. Leadership at GitLab would drive a top-down approach, building a collaborative strategy with input from the entire team.
Challenges and Best Practices in the “Top-Down” AI Approach
The top-down approach is easier said than done. In the webinar, interviewer Loreal Lynch of Jasper noted that AI adoption typically starts at an individual level. A developer might find AI useful for writing code, or a marketer for crafting a targeted message. It’s not as easy to connect AI to broader business outcomes.
But Kramer viewed the top-down approach as essential to providing top-line strategic outcomes for GitLab. To Kramer, both approaches could happen at the same time. Leadership can lead a top-down strategy while encouraging individuals to use AI if it makes them more efficient.
The first challenge: making AI work as a tool an entire team could adopt. An individual using a tool like ChatGPT might code faster. But how would GitLab’s processes improve at a team level?
GitLab started by using Jasper AI for content creation—especially for creating a consistent brand voice. “The way we use [Jasper] is to help us not work from a blank slate,” said Kramer. “What I love is it will intake our brand voice and tone, and it will help us get to a spot where then we can get a human in the loop.” From there, the team can optimize the messaging to its heart’s content. But with Jasper supporting content development from the blank page, the entire team became more efficient. They’ve turned this into a repeatable process.
“My VP of Marketing, Operations, and Analytics is now writing a full strategy doc for marketing,” said Kramer. “And of course, the whole team will contribute.”
Another challenge was identifying the specific metrics for tracking AI success. Was the team saving time by using AI? Reducing costs? Kramer made a key point here: the goal wasn’t to use AI to replace jobs. It was to make the existing GitLab team more efficient at processes like customer targeting and marketing messaging. “One of our core values at GitLab is efficiency,” said Kramer. “And so it's all about: where are we gaining efficiencies?”
Embracing this systematic approach has turned AI into a force multiplier for GitLab. For example, they’ve adopted Zoom’s AI-powered note-taking feature. This means one less routine task for team members, freeing up resources for more complex thinking. “We had assistants take notes throughout our entire conversation,” noted Kramer. “Guess what? They don’t have to do that anymore.”
Concerns About the Proliferation of AI
The “proliferation of AI” is not a blind march to success. As with any new tool, AI presents GitLab with a few roadblocks. Foremost among these are security risks. Any tool GitLab adopts undergoes rigorous scrutiny from its IT security team, keeping data protection and system security paramount. According to Kramer, GitLab’s legal team also assesses the data used to train AI models, ensuring it complies with the company’s ethical and legal standards.
AI also risks infringing on intellectual property. GitLab partners with AI providers that use open-source code for training—in other words, they don’t use GitLab’s proprietary code. This safeguards GitLab’s IP no matter what AI tool the company is using.
When it comes to data privacy, Kramer cited a cautionary tale from Samsung, where workers accidentally leaked trade secrets via ChatGPT. One Samsung employee pasted confidential source code into ChatGPT to check it for errors. Another did the same, hoping ChatGPT would help optimize the code.
What the employees didn’t realize is that this information went into ChatGPT’s systems, where it could be retained and potentially used to train future models. Samsung found the leaks so damaging that it banned employee use of generative AI altogether.
Well aware of Samsung’s problems, Kramer helped implement a decision-making process at GitLab: decisions on AI adoption involve a DRI (a “directly responsible individual”) working with a collaborative team that includes product marketing, operations, IT security, and legal.
GitLab is transparent with its customers about the types of AI it uses. “We have a transparency center for our customers,” Kramer said. “They can understand how we’re leveraging AI.”
Additionally, GitLab partners with language model providers like Anthropic and Google that use open-source code for training—not customer code. This keeps GitLab’s proprietary code and data confidential and ensures GitLab’s data isn’t used to train any external models.
AI’s Role in Code Security
The benefit of GitLab’s top-down approach isn’t just that it can scale how the company uses AI. GitLab can also integrate security procedures throughout its entire software development cycle. This goes beyond marketing messaging and code generation to include security and quality assurance.
According to Kramer, most people who think about AI focus too much on the actual writing of the code, and too little on the security of that code. “This is where GitLab shines,” said Kramer. “We are an end-to-end platform for software development, security, and operations professionals to build, secure, and deploy their code. But what I think too many companies in our space have focused on was the actual writing of the code. The generation of the code. Help me write code faster. That doesn't mean it's more secure. That doesn't mean that it's better and higher quality.”
GitLab’s use of AI is designed to keep code and data safe. AI helps review code, run security scans, explain vulnerabilities, and check for quality post-deployment. Again, it’s the top-down approach that helps. “What we really want to do,” said Kramer, “is infuse AI through the rest of the software development cycle.”
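To make this concrete, here is a minimal sketch of what wiring security checks into a pipeline can look like. It is an illustrative .gitlab-ci.yml that pulls in GitLab’s bundled SAST and secret-detection CI templates; the job and stage names are placeholder assumptions, this is not a description of GitLab’s internal configuration, and these particular scanners are conventional rather than AI-driven. The point is simply to show where such checks live in the development cycle.

```yaml
# Illustrative .gitlab-ci.yml: layering security scanning onto a build pipeline.
# Uses GitLab's bundled CI templates; job and stage names are placeholders.

include:
  - template: Security/SAST.gitlab-ci.yml             # static analysis of source code
  - template: Security/Secret-Detection.gitlab-ci.yml # flags committed credentials

stages:
  - build
  - test   # the included templates attach their scan jobs to the test stage

build-app:
  stage: build
  script:
    - echo "Compile, package, or bundle the application here"
```

Because the scans run in the same pipeline that builds the code, vulnerabilities surface before deployment rather than after, which is the same “infuse it through the rest of the cycle” principle Kramer describes applying with AI.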
Dealing with AI Sustainability and Cost Concerns
There’s a powerful drawback to AI in its current form: power, or the lack thereof. AI’s power demand is projected to rise by 160% by 2030, according to Harvard Business Review. And the more processing power AI eats up, the more strain there will be on the electrical grid—creating issues from energy consumption to environmental impact.
Kramer mentioned that “a lot of pieces” go into weighing this side of AI. She said GitLab is taking a thoughtful approach: “How do we thoughtfully figure out how to infuse AI for good, without causing a huge economic impact?” Kramer stressed that it’s still a journey without an answer for now—but GitLab is staying on top of it.
Adoption Strategies and the Future of AI
GitLab is driving AI adoption through strong leadership support, seamless integration of AI into existing workflows, and onboarding that gives new hires manageable AI tasks. Ideally, every team member at GitLab can leverage AI in some fashion to improve their efficiency and achieve better results.
Lynch concluded the webinar by looking at the future of AI. “What does success look like?” she asked. “What are the tradeoffs like?” Though it’s still too early to say where AI is heading, companies like GitLab are showing that it can scale without sacrificing quality or data integrity.