CISO JOURNEY: AutoNation
Embracing a New Security Architecture for Access to Internet and SaaS

Industry: Retail | Revenue: $22 billion | Employees: 28,000 | Locations: 300

Company IT Footprint: AutoNation has approximately 28,000 employees. It is the largest seller of cars in the United States. It has 300 locations with internet points of presence.

"When ransomware attacks happen to other companies, thousands of systems in their environment are crippled, in addition to having serious impacts with having to pay a ransom. When this kind of event hits the news, I get worried calls from executives, and it warms my heart to be able to tell them, 'We're fine.'"
Ken Athanasiou, VP and Chief Information Security Officer, AutoNation

Excerpt from: Stiennon, Richard. "Security Transformation," Secure Cloud Transformation: The CIO's Journey. IT-Harvest Press, 2019, pp. 101-113.

AutoNation Journey Overview

Business Objectives
- Closing the digital divide: a seamless user experience online and in stores
- Protect customer PII data
- Prioritize cloud vs. data center app migration
- Reduce infrastructure and MPLS costs
- Improve security, reporting, visibility, and management
- Enable new functional capabilities

The Solution
- Embrace cloud security, particularly SSL inspection
- Address security debt and processes for inline security
- Create local internet breakouts
- Optimize for Office 365
- Connect 300 retail locations via the cloud

Impact
- Improved user experience: less latency, fast Office 365
- Better protection against threats
- Easier to bring new locations and capabilities online
- Greater traffic visibility, better policy management
- Reduced security appliance costs
- Protection against zero-day threats with Cloud Sandbox behavioral analysis

AutoNation is a retail organization with 300 locations selling and servicing automobiles. Ken Athanasiou, Chief Information Security Officer at AutoNation, describes how cloud transformation saved AutoNation money while enabling new capabilities. He stresses the importance of timing and careful planning when embarking on a major strategic initiative such as cloud transformation. He further expands on the importance of being organizationally adaptable, the willingness to make modifications to plans along the way, and approaching such an initiative with more of an agile methodology to prevent false starts and failures.

Flexibility, repeatability, and security through cloud transformation

In the words of Ken Athanasiou:

I've been at AutoNation for a little over three and a half years, as its first CISO. AutoNation has about 28,000 employees and we are the largest seller of cars in the United States. Prior to joining AutoNation, I was at American Eagle Outfitters as its CISO for about seven years. Previously I was at JPMorgan Chase, as the retail line-of-business information security officer, or BISO, for five years.

Journey begins with a breach

AutoNation experienced a small breach with a third-party vendor in 2014 that exposed about 1,800 customer records. That was enough for our general counsel to start asking questions about ways to improve security. He and a few other executives brought in a couple of different firms to do some assessments and make some recommendations, and one of those recommendations was to build out an independent cyber security team that could help them reduce the risks.

Closing the digital divide

At the same time, a digital divide was beginning to emerge, especially for our customers: a divide between the brick-and-mortar stores where vehicles are sold and our online presence. The intent is to close that divide and present to our customers a comprehensive user experience, where they can begin the shopping, selection, and credit application process online.
They can wander into a store, access what they did online, and move the process forward a few steps; they can then go home, make a few decisions, sleep on it, whatever, and then either complete the purchase process online or come into the store the next day. Closing that digital gap and giving our customers the opportunity to participate in a unique buying experience has become a driver for this organization.

Protecting customer data

There are unique security challenges to being an online auto dealer. We take credit applications every day, and credit applications are obviously about the most

sensitive personally identifiable information (PII) that you can handle. We also process credit card transactions, so we deal with PCI requirements.

When we have a credit application, we have every piece of information that a bad guy needs to do some pretty robust identity-theft activities, so we're extremely paranoid about how we handle our customers' data. A critical element of this process is the ability to protect that type of data, while allowing customers to access it.

Transitioning the CIO

I was hired by a new CIO who had joined the organization just a few months before I did. He was brought in to do some transformational activity and had inherited a significant amount of technology debt within the organization. We made some progress under that CIO and did an enormous amount of work around solving some of that technology debt and getting security in place, closing some of the most critical gaps that the organization had.

Over the last year and a half or so, we've made some dramatic changes within the technology organization. We've been able to advance the maturity of the process, get completeness, and instill some robust frameworks.

Backing off on cloud backups

The decision to move to the cloud was made by the technology operations team with our disaster recovery (DR) capability. The misstep we made was taking legacy applications that were heavily dependent upon very large, hulking boxes of iron that run very hot and heavy, and putting them into a cloud environment without actually refactoring those applications.

The transition didn't go well. We were about four months late exiting the data center and six months late in actually executing a test against the new cloud DR environment. As predicted, that test failed miserably. We had transactions that had been sub-second go to 60 seconds from the physical colocation to the cloud environment.
It was an abject failure.

One of the first conversations I had with the new technology lead was about fixing our DR environment. We needed to fix it fast, and we had a discussion about what's the right thing to do. Do we refactor these applications so that they can play well

in the cloud? After some discussions with the application development teams, we determined it would take us approximately two to three years to fully refactor the applications, based on the available resources, the workload, and the business requirements.

We made a decision as a team at that stage that the organization could not suffer that type of risk for such an extended period of time. The executives agreed with us and we built a new colo data center. Brand new hardware, all sorts of beautiful, shiny new toys in that data center, and we moved all those applications out of the cloud, back into the traditional data center.

Timing is everything

Although the decision to go to the cloud was a wonderful idea, the problem was that the time frame associated with doing that transition and the requirements of actually executing that transition weren't fully understood.

As with anything, if you don't truly understand what you need to do, you're likely to fail at it. Unless you are adaptable, and you are willing to make modifications to your plans along the way, and approach it with more of an agile methodology, you will fail.

Embracing cloud security

On the client application side, one of the other things that I did when I first got to AutoNation was to install UTM (unified threat management) devices; these are basically SOHO (small home office) types of appliances that combine all the features and functionality of a next-gen firewall on a very small platform.

We had 300 locations with internet points of presence. The networking team was intending to deploy more than 300 little boxes across the entire country, and that's when I decided it was time for us to learn more about this cool cloud-based network firewall solution that I'd heard of called Zscaler.
That's when I called up Jay and his team and asked to meet with them.

Instead of doing the little boxes of iron across the entire country and rolling trucks all over the place and having to manage that nightmare architecture, we went down the Zscaler route, which was intense. I didn't sleep for probably six months. I was worried

about our exposure. Every time I went to bed, I expected to wake up to a major breach until we got Zscaler rolled out across the environment.

Addressing the security debt

There were quite a few issues that we uncovered as soon as we wrapped Zscaler as a prophylactic around the environment: lack of robust patching and IT hygiene, the ineffectiveness of the McAfee antivirus that we were running, broken update processes across the board, very old systems, middleware that wasn't being patched.

We looked at various engines that were out there, including hardware-based ones. Three years ago, there really wasn't any other cloud-based solution that was even comparable to the capabilities that Zscaler had. They were the only true, full-protocol firewall in the cloud. They had the most robust capabilities. Everything else was pretty much just a web proxy. You can pump your traffic through that, but it's definitely not the same thing.

The rollout decision was a no-brainer

Zscaler was just a completely different architecture, so we made the decision to pilot Zscaler and see how it looked and felt. We rolled out Zscaler to a couple of stores and our corporate headquarters and we let that run for a little bit. The visibility we got into outbound bot traffic, obvious infections, and those sorts of things very quickly upped the urgency of getting a solution deployed across the entire environment.

It became pretty much a no-brainer, and we made the decision to go forward with this even if we had to break a bunch of stuff in order to filter traffic and gain control. We invested capital to drive some maturity into our patching processes and to improve our anti-malware controls.

We took advantage of Zscaler's anti-malware. When I was first talking with the Zscaler team, I was adamant that I wanted full-blown next-gen firewall capabilities, which would include filtering, network-based malware detection, and sandboxing.

We are now pretty much fully deployed with Zscaler capabilities.
We're currently not making extensive use of Zscaler Private Access to access our internal

applications, although it's compelling. We just simply haven't had the opportunity to really push that out very far. But we've got pretty much everything else, like DLP, and all the obvious stuff around URL filtering. We are now a heavy consumer of Zscaler capabilities and we've been very pleased with the controls that we got from them.

Penetration testing has become more difficult, and that's a good thing

We do aggressive penetration testing using third-party vendors. It's common for them to become stymied by the Zscaler layer when they're doing remote testing, because they simply can't penetrate the malware and sandboxing controls and get anything to work. That's also a result of the changes that we've made around patching. We're using Tanium for endpoint management across our entire environment, which I am just in love with; it's a fantastic piece of technology.

So, with all these additional controls in place, and obviously driving mature patch processes throughout our environment, our resiliency, our hardness, so to speak, has just gone several levels above where we were previously.

Seeing is believing: The value of reporting

For reporting we don't often use the stock stuff that comes out of Zscaler, but we do pull numbers from it to include in board presentations. I show the board a bunch of gee-whiz numbers, and these gee-whiz numbers show for the most part how much we're under attack. The reason I call these gee-whiz numbers is because every single one of these attacks was blocked or prevented by one of the engines that we have in place. This particular set of numbers shows all the attacks or incidents that were blocked or prevented by our cloud-based firewall solution, Zscaler.

It's more of a validation that this is stuff that we would have to deal with if we didn't have these controls in place, but the fact is, we do have these controls and, therefore, we wouldn't consider these incidents or really anything all that important to deal with.
We don't react to them because they're noise that is filtered out by the engines that we have in place.
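The board-level "gee-whiz numbers" described here, totals of attacks blocked per category, amount to a simple aggregation over exported log records. A minimal sketch, assuming a hypothetical export format; the field names are illustrative and are not Zscaler's actual log schema:

```python
from collections import Counter

# Hypothetical export of security-event records pulled from a cloud
# gateway's logs. Field names and values are invented for illustration.
events = [
    {"category": "malware", "action": "blocked"},
    {"category": "botnet-callback", "action": "blocked"},
    {"category": "phishing", "action": "blocked"},
    {"category": "malware", "action": "blocked"},
    {"category": "browsing", "action": "allowed"},  # benign traffic, excluded
]

def board_summary(records):
    """Count blocked events per category for a board-style summary."""
    blocked = [r["category"] for r in records if r["action"] == "blocked"]
    return Counter(blocked)

summary = board_summary(events)
print(f"Total threats blocked: {sum(summary.values())}")
for category, count in summary.most_common():
    print(f"  {category}: {count}")
```

In a real deployment the `events` list would be replaced by a parsed log export; the aggregation itself stays the same.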

The senior executive staff loves the numbers, because they look at them and they go, "Holy smokes, that's a lot!" Every now and again, we do see spikes in attack activity and I usually end up having to explain the spikes. "What happened there where it jumped up so high?" they'll ask, and then I'll usually explain that there was a zero-day exploit or a critical vulnerability that was discovered, and we see an enormous amount of traffic attacking that critical vulnerability because it was fresh and new.

"We're fine"

When ransomware attacks happen to other companies, thousands of systems in their environment are crippled, in addition to having serious impacts with having to pay a ransom. When this kind of event hits the news, I get worried calls from executives, and it warms my heart to be able to tell them, "We're fine."

We've gotten down to a seven-day patch cycle, and that's not even a critical or an urgent cycle. If we have a critical patch that needs to be pushed, we can do that in about 24 hours.

Piloting the types of engines that can give visibility into the state of your environment, like the level of botnet traffic and infections, is something that you can then use to drive further activity, spending, and resource implementation.

It's important to really understand what is going on in your environment in terms of infections and risk levels in order to put something like Zscaler in place. For example, if you have 70 infections, you may find that your patch processes are broken. Then, you could start piloting a couple of engines that look at EDR, response, and software distribution packages. Do a side-by-side comparison, and you may find that your Microsoft SCCM product says you're fully patched, but then when you run something that's independent of Microsoft against that, it says you're only patched at about 50%.
Well, in that case, you've got things in your environment that are missing patches that are two years old, so something's wrong there. Again, that would drive further activity to resolve.
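The side-by-side comparison Athanasiou describes, checking what the primary patch tool reports against an independent scan, reduces to a set difference per host. A minimal sketch; the hostnames and KB numbers are invented, and real inventories would come from the respective tools' exports:

```python
# Cross-check patch compliance between two sources: what the primary
# tool (e.g. SCCM) reports as installed vs. what an independent scan
# actually observed. All hostnames and patch IDs are illustrative.

sccm_report = {
    "host-01": {"KB500100", "KB500101"},
    "host-02": {"KB500100", "KB500101"},
}
independent_scan = {
    "host-01": {"KB500100", "KB500101"},  # agrees: fully patched
    "host-02": {"KB500100"},              # disagrees: KB500101 missing
}

def compliance_gap(reported, observed):
    """Return hosts where the independent scan found patches missing."""
    gaps = {}
    for host, patches in reported.items():
        missing = patches - observed.get(host, set())
        if missing:
            gaps[host] = sorted(missing)
    return gaps

gaps = compliance_gap(sccm_report, independent_scan)
agree = len(sccm_report) - len(gaps)
print(f"{agree}/{len(sccm_report)} hosts verified fully patched")
for host, missing in gaps.items():
    print(f"{host}: missing {', '.join(missing)}")
```

Any disagreement between the two sources is the signal that drives the further remediation activity described above.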

Our Office 365 implementation needed more bandwidth at each point of presence

As we made the transition to Office 365, we had to learn how to implement the process. Originally, we were going to have everybody use the portal and not bother putting Office on individual machines. Also, we didn't have the required bandwidth. It didn't work, so we had to step back and change how we do things as a process. Now we're doing local installs of Office 365, and we're executing the product differently. It works much better now.

At the time, every one of our locations had from a T-1 up to a multiple-T-1 type of MPLS connection back to our data center, so very small pipes for the private circuit back to the data center.

Internet access was also not all that great. You're talking somewhere around ten-megabit-per-second connections to the internet, which means that if you're not careful with that type of an environment, you will easily clog those pipes and you will have very degraded performance.

Local internet breakouts helped reduce costs

After we got the new technology leadership in place, we renegotiated with our providers; we significantly reduced our network bandwidth cost and jacked up our bandwidth ten-fold. We went from very small MPLS circuits to ten- and 20-megabit-per-second MPLS circuits back to our data center, and 150-megabit-per-second connections to the internet for the most part. We had very significant increases in performance and capability for internet bandwidth, and again, the earlier shortfall was primarily due to a technology gap, lack of planning, and a lack of understanding of the bandwidth requirements of our most-used applications. We still have MPLS circuits and we have internet circuits. The vast majority of our traffic goes direct to the internet, but we do have internal applications, like our CRM and some other systems, that we simply backhaul across the MPLS circuit.

Today, we are still using a hybrid network.
We have considered doing away with those MPLS circuits and going full internet, maybe using things like ZPA, but we've not made that transition at this point.

We don't use Zscaler for mobile devices at this point. That's another one of those things that's on the list, but we have not actually executed against it. We're transitioning from Intune over to AirWatch right now for MDM, and once we complete that, we'll go back and look at what else we could do in that area.

Improving the user experience

One of the other advantages that we've gotten out of Zscaler for some of our other cloud-based applications is that the connection speeds through Zscaler are actually pretty robust. This goes back to the peering that Zscaler has done with a lot of the other larger providers out there, like Microsoft and Office 365. Even though we have to do a tunnel from our external router into the nearest Zscaler cloud, and from Zscaler to Office 365, the path is only one or two hops; going directly from our dealership to the internet would actually take longer to get to the service than going through Zscaler.

Instead of inducing additional latency, those peering connections let us keep all our controls in place while seeing very minimal latency; in some cases, our connections are actually even faster.

Taking advantage of cloud capabilities

There are multiple inherent advantages to moving to the cloud. You get better resiliency, you get better scalability, and you get a lot of really cool abilities that you can't get out of a standard colo environment. As you re-architect your legacy applications, and as you build new applications so that they're actually cloud-focused and can natively take advantage of those capabilities, I expect that we will continue to see more and more of these applications move into this model.

Another advantage is in mergers and acquisitions. For M&A, Zscaler has been a big win for us. When we do acquisitions or divestitures, it's very easy to enroll a new location in our environment.
We don't have to roll a piece of security hardware out there. For the acquired entity, we simply configure the tunnels for the internet-bound traffic to Zscaler and we're covered.

One of the things that we've found to be a little interesting is that when we divest a dealership, the acquirer comes in and may ask us what we do for our network security.
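The enrollment step just described, pointing a new site's edge router at the cloud over tunnels instead of shipping a firewall appliance, can be sketched as a small provisioning function. This is purely illustrative: the region names, node addresses (taken from the documentation range), and fields are invented, not Zscaler's actual provisioning interface:

```python
# Sketch of "enrolling" a newly acquired location: generate the tunnel
# parameters that point the site's edge router at the nearest cloud
# enforcement node. All names and addresses below are invented.

NEAREST_NODES = {
    "us-east": "198.51.100.10",  # RFC 5737 documentation-range addresses
    "us-west": "198.51.100.20",
}

def enroll_location(site_id, region, router_public_ip):
    """Return a minimal GRE-tunnel definition for a new site."""
    node = NEAREST_NODES[region]
    return {
        "site": site_id,
        "tunnel_type": "gre",
        "local_endpoint": router_public_ip,   # the site's edge router
        "remote_endpoint": node,              # cloud node terminates the tunnel
        "route_policy": "default",            # send all internet-bound traffic
    }

cfg = enroll_location("store-301", "us-east", "203.0.113.45")
print(cfg)
```

The design point is that enrollment becomes configuration rather than logistics: no hardware ships, and decommissioning a divested site is just tearing the tunnel down.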

They ask us where our firewall is located. Our response to that has been: we use Zscaler, so you don't have a physical firewall in there. You're going to have to figure something out. They don't like that answer because they're used to just taking whatever was there and making use of it.

When we do an acquisition, we do more of a rip-and-replace for the technology environment. We may purchase computers with an acquisition, but then we generally will rip them out. We'll resell them to someone else and put our stuff in.
