The Serverless Show: re:Invent Recap


For this episode, Hillel from Protego was joined by Forrest Brazeal, a senior cloud architect at Trek10 and an official AWS Serverless Community Hero. Forrest explained, “Trek10 is an AWS Advanced Consulting Partner, and we focus pretty specifically on serverless, cloud-native architectures in AWS. That means I spend most of my time building stuff for clients. I can’t always talk about a lot of it specifically, unfortunately. But basically this year I’ve been building a lot of event-sourced architectures, taking databases, turning them inside out, moving that to streaming systems and Lambda consumers, things like that. I’ve been working a lot with AppSync as well and managed GraphQL APIs.

“Then I do a lot of other things on the side, between the various writing that I do at Trek10, the Serverless Superheroes series I author on A Cloud Guru, the webcomic FaaS and Furious, and the Think FaaS serverless podcast that I’ve hosted for a while. The gimmick there was that each episode was five minutes long, which was the maximum runtime of a Lambda function. Unfortunately, Lambda has now upped that runtime to 15 minutes, so I have to either put a whole lot more effort into those podcasts or discontinue them, so stay tuned there.”

Serverless Heroes

Forrest continued, “There are a few of us Serverless Heroes in the community; I think 10 worldwide. We do various advocacy efforts, including speaking in the community and just trying to help people understand what’s so great about these technologies.”

Forrest was kind enough to show us the AWS Serverless Hero medal Hillel asked about.

re:Invent Recap

Hillel stated, “re:Invent, particularly for serverless, was really about the maturing of the technologies that make up the serverless landscape in AWS, and how people can do more in a cloud-native, serverless way than they could in the past. I thought we would try to go through some of those announcements and talk about them specifically, and how they enable you to do your job really well.

WebSockets for Lambda and API Gateway

“The first one on my list is WebSockets for Lambda and API Gateway. I was really excited about that. I thought, ‘This is something that internally, at Protego, we struggled with quite a bit. How do you maintain more stateful connections with web frontends, dashboards, UI, and mobile applications against something like Lambda, which is really stateless and ephemeral by its nature and wants to respond and go away?’ This was plugging a big hole for us in terms of managing a WebSockets environment, not changing Lambda to be stateful, but creating something in the middle, like API Gateway, that can make that transition. How do you see that?”

Forrest replied, “Absolutely. I mean it’s something a lot of us have wanted for a long time. Honestly, we’ve had it for a little while with AppSync subscriptions, and that’s been very successful if you’re in that rather limited subset of use cases where AppSync subscriptions and GraphQL are going to make sense for you. I actually think that’s a broader set of use cases than people sometimes give it credit for, but the fact is that putting it in API Gateway opens it up to a much broader class of applications.

“I think you’re absolutely right that making it event-driven and allowing Lambda to respond to these requests as they occur, once that connection is held open, is fantastic. It’s a great way to architect this. The type of applications you’re going to see using this, obviously, it’s going to bring more interactive web apps to Lambda and to serverless in ways that we weren’t able to have before. I don’t know how this will integrate with things like Amplify, which is a mobile-first, web-first programming model for AWS, essentially, that has been heavily AppSync driven. I’m not sure if that will get integrated with API Gateway as well. I would love to see that happen, but that’s something I would definitely predict taking place in the future.”
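For readers who want a concrete picture of the pattern Hillel and Forrest are describing, here is a minimal sketch of a Lambda handler behind an API Gateway WebSocket API. The table name and route handling are assumptions for illustration, not anything from the show.

```python
import json
import os
import boto3

# Hypothetical DynamoDB table used to remember open connection IDs.
connections = boto3.resource("dynamodb").Table(os.environ.get("CONNECTIONS_TABLE", "connections"))

def handler(event, context):
    ctx = event["requestContext"]
    route = ctx["routeKey"]              # "$connect", "$disconnect", or a custom route
    connection_id = ctx["connectionId"]

    if route == "$connect":
        # API Gateway holds the socket open; we just record who is connected.
        connections.put_item(Item={"connectionId": connection_id})
    elif route == "$disconnect":
        connections.delete_item(Key={"connectionId": connection_id})
    else:
        # Push a message back over the still-open socket via the Management API.
        endpoint = f"https://{ctx['domainName']}/{ctx['stage']}"
        apigw = boto3.client("apigatewaymanagementapi", endpoint_url=endpoint)
        apigw.post_to_connection(
            ConnectionId=connection_id,
            Data=json.dumps({"echo": event.get("body")}).encode("utf-8"),
        )

    return {"statusCode": 200}
```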

Bring-Your-Own-Runtime
Hillel continued, “Number two on my list was the bring-your-own-runtime announcement. Obviously, we at Protego were part of that announcement. We were really excited to get our name mentioned onstage by Werner Vogels. But I’m actually more excited about some of the flexibility that bring-your-own-runtime enables, and then we’ll talk about layers and nested applications.

“Bring your own runtime was mostly about accelerating language support in Lambda, so really letting people bring new languages in, possibly optimize cold-start times, and things like that. I think it’s a great way to add flexibility to how Lambda can be used. Then obviously, layers and nested applications in the Serverless Application Repository are really a way of composing software, and maybe a more mature way of managing your software and bringing in third-party components. Let’s start with the bring-your-own-runtime piece. What’s your take on that?”

The Appeal of Limitations

Forrest replied, “Obviously, it does open up some cases that we didn’t have before. We’ve seen people start to talk about being able to move mainframes now with COBOL support and things like that. But for me, the biggest draw of Lambda and serverless in general was always that it actually limited a little bit what you could do. You had these zip files. I like to say zip it and ship it was the deployment model for Lambda. As you broadened out with creating these customized deployment packages and bringing in a lot of native dependencies, it actually got more difficult to do, and that forced you to scope down what you were putting in Lambda and be able to create applications that were relatively minimal.

“Honestly, that was great discipline. It was an interesting limitation to have. Yes, it got frustrating at times, absolutely, and I’ve certainly spent my time fighting with native Python packages; trying to get cryptography and things like that into Lambda is always frustrating. Having some more customizability around that is potentially going to be nice. I only worry that it will cause people to wind up creating these bloated Lambda applications that are potentially less performant. I see this as being something that, if people aren’t careful, could really bring their cold-start times up as they drag a lot more stuff into Lambda.”
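To make the bring-your-own-runtime idea concrete: a custom runtime is just an executable named `bootstrap` that loops against the Lambda Runtime API. The sketch below uses Python purely for readability and assumes an interpreter is packaged with the function or in a layer; the handler itself is a stand-in.

```python
#!/usr/bin/env python3
"""Rough sketch of a custom-runtime bootstrap loop (Runtime API, 2018-06-01 version)."""
import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

def my_handler(event):
    # Whatever language or framework you brought along would do its work here.
    return {"ok": True, "received": event}

while True:
    # 1. Long-poll the Runtime API for the next invocation.
    with urllib.request.urlopen(f"{API}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # 2. Run the handler and POST the result back for this request ID.
    body = json.dumps(my_handler(event)).encode("utf-8")
    urllib.request.urlopen(
        urllib.request.Request(f"{API}/invocation/{request_id}/response", data=body)
    )
```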

Layers and Nested Applications

Forrest continued, “Then moving on to layers and nested applications, I’ve mentioned before that I really feel like they’re the killer things that have come out for serverless at re:Invent this year, with the possible exception of one other service that we’re going to talk about. But these are the two things that I wanted to see the Serverless Application Repository do when it was announced at last year’s re:Invent. I think when a lot of us first heard about the Serverless App Repo, our minds immediately went to, ‘Oh, this is a serverless marketplace. This is going to make it easy for people to share serverless projects they’ve been working on and get paid for them.’ That really wasn’t what it was at all, and it wound up raising a lot of questions like, ‘How do I do security here? Why would I, as an enterprise, just go and trust some random serverless app that somebody published out there on the app repo, and then take that and have to deploy it and maintain it in my environment?’ It was all the disadvantages of having to run something yourself with none of the advantages of being able to trust something that somebody else writes. That was a challenge for the Serverless App Repo getting started.

“But I actually like the layers thing a lot more, because it takes the thing that people actually wanted, which was dependencies. You’re already packaging these up and putting them into your serverless functions, and you just had all these problems with dependencies that you had to manage and keep up to date over time. Lambda has kind of a legendary problem where you can leave a function sitting out there, and the dependencies just rot, and no one updates them. There are whole startups dedicated to solving this problem and identifying those. I know at Protego you probably thought about that a lot as well.

“But layers give you a cleaner way to manage that while still preserving something that I think is really important, which is, as far as I understand it right now, and I know this is early days, that you can’t ever update a function just by updating a layer. You actually have to go and touch the function to make sure the layer gets updated. That’s really, really key to this as far as I’m concerned. But it just makes it so much easier to push out those changes and to share code, so you don’t have these bloated deployment processes where tons of code gets duplicated and rolled up into different functions. It can make that Lambda repository creation so much nicer and easier. That all is something I’m really looking forward to.”
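As a rough sketch of the workflow Forrest describes, publishing shared dependencies as a layer and then explicitly attaching the new version to a function might look like this with boto3; the layer and function names are hypothetical.

```python
import boto3

lam = boto3.client("lambda")

# Publish shared dependencies (zipped separately, e.g. pip install -t python/ && zip)
# as a new layer version.
with open("shared-deps.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="shared-deps",            # hypothetical layer name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attaching the new version still means touching each function explicitly;
# as noted above, a layer update never silently changes a deployed function.
lam.update_function_configuration(
    FunctionName="my-function",             # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)
```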

Reducing Waste

Forrest continued, “The nested applications is the other side of that coin, again. I think they finally realized that the real use case for the serverless app repo is not in sharing random, public serverless apps, but in creating a place inside of an organization where you can discover tools that you’ve created and just reduce some of the waste that’s caused by people creating duplicate services inside of organizations. I think that’s going to be great. It’s going to open up whole new classes of composing applications. I really can’t wait to play with it.”

Hillel agreed, “Yeah, I think a lot of that makes sense. Going back to your comment on the runtimes, I absolutely see the risk in runtimes, and part of me wishes there were more limitations on who could build a runtime and what the use cases for runtimes were. I think I wrote about this in a blog post we did. We’ve done a version of a runtime with security baked into it for some of our stuff, and I wrote right away, ‘We don’t want to do it that way. We don’t think that we should be owning runtimes. Runtimes are really about languages and language support and some other features you really want at that underlying level.’ That’s something that hopefully gets used in that very specific way and not more generally as, ‘Hey, I’ve got a whole pile of COBOL code,’ like you said, ‘I want to throw it into a Lambda function and see what happens.’

“I think the layers side is a much cleaner way to architect software. I think there’s a few more things that need to happen there in terms of how layers can interact with each other and with the function definition and things like that to make them a little bit more self-contained and robust, but I definitely think it’s a great step in the right direction. I think a lot of it’s about who manages things, right? I mean the risk with the runtime API is that you’re now shifting more things you have to manage back to you or someone else, right?

“Amazon was managing the runtimes, and now suddenly you need to manage runtimes, and that’s maybe a step backwards. Whereas layers, and I guess nested applications as well, are potentially saying, ‘Hey, I could have somebody else managing some part of my application, and they could focus on that. They can update that and they can patch that and they can improve that, and I can benefit from that,’ and then there’s still some control. Like you said, you’ve got to touch a Lambda function’s code to update the layers it’s using, so it’s not like you’ve got some third party automatically changing things and pulling the rug out from underneath you, but you do shift some of that ownership to somebody else, which I think is a healthy thing.

“That’s been the serverless mantra: ‘The more you can shift responsibility for the things you don’t care about to somebody else, the more you can focus on what you do care about.’ I think layers potentially have that ability. Runtimes could also be used that way, but could also be abused.”


Layers Shift Us in the Direction We Want to Go

Forrest concurred, “Exactly, yeah. The layers shift us in the direction we want to be going. As serverless developers, runtimes potentially could shift us in the wrong direction. I think the next service we’re going to touch on probably also could trend you in the wrong direction.”

The Announcement We’re Excited About, But Not Sure Why…

Hillel said, “So let’s talk about Firecracker, the announcement that I think everyone is excited about, but no one is really sure why they’re so excited about it. Let’s talk about what that means and why we care now about something that will probably impact our future more than our present.”

Forrest replied, “Right, so practically it means nothing to me and I don’t care about it now. Firecracker is the virtual machine manager for Lambda; AWS has open-sourced it. It’s a strategic play, I think, for AWS for a couple of reasons. First, obviously, they’ve received a lot of criticism for not being as active in the open-source communities. This is a great win for Adrian Cockcroft and his team. But I think it’s really a play for them to take away some of the big arguments against Lambda, which rightly or wrongly, a lot of folks, especially in the enterprise, look at serverless strategically and say, ‘Oh, I don’t want to go that way because I don’t want to lock myself into a provider. I would like to be able to be portable and move if I have pricing concerns or something else goes wrong.’ That’s an argument that I have a lot of problems with and I’ve written about it and argued about it on my podcast saying, ‘This is an overblown concern. You need to evaluate your risk more intelligently than that.’”

The Knee-Jerk Reaction That’s Not Going Away

Forrest continued, “But it’s still a conversation that’s not going away. It’s a knee-jerk reaction for a lot of people. Firecracker is interesting in that it takes away some of those rationales. You can now say, ‘Oh, well, if, for some reason, I have a problem with AWS, I can always go and take Firecracker. I can run it in my own data center and potentially move my Lambda code.’ But in terms of actually opening up new build scenarios, it really doesn’t do that. It just pulls all that management that AWS was doing on your behalf and says, ‘Oh, hey, you’re going to have to run this yourself.’ That’s the antithesis of what we want to be doing as serverless developers.

“I would not recommend this to people. I would not talk to my clients who are trying to get out of data centers and say, ‘Hey, don’t worry about it. Just run Firecracker in your data center.’ I wouldn’t want that, and frankly, they wouldn’t want that, either. It’s interesting. This is a really minor tangent. When I talk to people now, in 2018, who are just starting to think about moving to the cloud, they get that cloud-native doesn’t just mean lift and shift. Five years ago, this wasn’t the case, but today they look at the cloud and they see the value of the managed services. That really is the draw for them. That’s what they’re hungering to be able to use. Honestly, I don’t know how attractive Firecracker is going to be to those folks, either, except as kind of a hedge, and that’s really how I view it. It’s a hedge.”



Hillel stated, “I agree and I was excited about the technology. I was excited about where the technology might end up and what it might enable, and I agree that embracing more open source is a healthy thing. I also fear what people might start doing with this and the backwards steps they might take with it. Hopefully it gets used primarily by Amazon, themselves, for some cooler things on the edge, on-prem, and in the fog, or wherever, in the haze, in the mist. Then maybe by some other companies to try to extend that use case, and hopefully the rest of us ignore it blissfully and just know, ‘Hey, it made its way into another place that enabled us to just get more compute that we care about in those places.’”

New Step Functions Support for Fargate, DynamoDB and SageMaker

Hillel continued, “Let’s talk about some of the new Step Functions support for Fargate, DynamoDB and SageMaker. I know that at Protego, we’re a big serverless consumer as well as a security company. We’ve always struggled quite a bit with how to do stateful things that involve running processes with state transactions and things like that. Step Functions has been a useful tool for us for a lot of that. How much does the new support for Step Functions to drive things like Fargate tasks and activity with DynamoDB, SageMaker, and other services matter? How much does that mature Step Functions? Does it solve a big gap, or do you think this is just more cosmetic?”

Forrest replied, “Yeah, it’s a great question. I have a lot of thoughts about this. Let me try to parse them. I’ve loved Step Functions for a long time. I was one of the first people, I think, to build significant tooling on top of it. We were working with it within just a few weeks of its launch at re:Invent 2016. I’ve had a big wish list for the service for a long time, particularly things like dynamic parallelism, something a lot of people have asked for, the ability to launch an arbitrary number of Lambda functions in parallel and then be able to fan that state back in at the end of the parallel step. That’s a tricky challenge that I know the Step Functions team has been working on.


“So that would have been higher on my list than something like this, but I’m still happy to see them add support for other services. You mentioned Fargate, DynamoDB, and SageMaker. There are a couple others, like you can place things on an SQS queue, though I don’t think you can read from an SQS queue. There’s an SNS integration, just a couple of boilerplate, serverless type services like that. You can do things like putting something in a DynamoDB table, reading something from that DynamoDB table, and now you can just create that as a Step Functions action rather than having to write code inside of a Lambda function to do that for you.

“It takes away a little bit of boilerplate code, which is always something I’m happy to see in the serverless world. These are things that, I think, Azure has had a little bit more support for in the past, just really composable services and being able to plug services together using events.

“Yeah, I’m happy to see Step Functions go in that direction. It’s a few things that may be useful in some cases in terms of the actual integrations that were announced at re:Invent. I’m more excited about this pattern being extended over time, more event sources being added. Tim Bray, who’s one of those old guard legends at AWS and is heavily involved with Step Functions, has a blog post where he talks specifically about what they were able to release and what their visions are for this feature, which I think he calls “Connectors,” in the future. I would definitely recommend reading that to get an idea for what their vision is for this service.

“But yeah, Step Functions is great. Today, this makes it a little bit better. It’s not game-changing, and I just hope to see them continue in this direction in the future. I think they will.”
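To illustrate the kind of integration Forrest mentions, here is a minimal sketch of a state machine that writes to DynamoDB directly through the `arn:aws:states:::dynamodb:putItem` service integration, with no Lambda glue code; the table, role, and state machine names are hypothetical.

```python
import json
import boto3

# A one-step state machine that puts an item into DynamoDB as a Task,
# instead of wrapping the write in a Lambda function.
definition = {
    "StartAt": "RecordOrder",
    "States": {
        "RecordOrder": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            "Parameters": {
                "TableName": "orders",                       # hypothetical table
                "Item": {
                    "orderId": {"S.$": "$.orderId"},         # pulled from the execution input
                    "status": {"S": "RECEIVED"},
                },
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-intake",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsDynamoRole",  # hypothetical role
)
```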

Hillel replied, “Yeah, absolutely, and I’d love some arbitrary parallelism and fanout. That’s something we’ve been waiting for for a long time, so hopefully that gets solved soon. I think that’ll be a much bigger step in terms of making Step Functions able to replace a lot of things you have to do in Lambda functions today.

Dynamo Becoming a Serverless Service

Hillel continued, “Let’s talk about data a little bit. For DynamoDB, I noted two announcements I thought were really interesting and important. One was the support for transactions in DynamoDB. I thought that was interesting and useful, and I think it will simplify a lot of code that people have to write around using DynamoDB as a database. Then there’s the on-demand scaling piece; at least the announcement was really exciting to me. I haven’t yet seen that we can throw out a whole lot of scaling code yet, but I think we can, and I think that was really a step towards Dynamo being a truly serverless service.”


Forrest stated, “Yeah, obviously on-demand, a lot of people have said this is what truly transforms DynamoDB into a ‘serverless database.’ It has been the thing that has bugged me about DynamoDB for a long time. It always gets dropped into serverless architecture conversations, and yet, you have to go provision capacity for it. That was especially true before autoscaling was released, but even after that, you still had to go in and set that baseline provisioned capacity, you had to know what that would be, and you were paying for it whether or not you were actually using that capacity.

“But now with on-demand, obviously, you don’t have to do that. It’s really pay-for-use. The thing that remains to be seen a little bit, and like you, I haven’t seen this in action because it’s so new, is how closely that scaling will track with usage. DynamoDB autoscaling still has a little bit of a delay in it, so if you have really sensitive, spiky workloads, you still might need to provision some higher capacity to make sure you can handle that before Dynamo gets the message and scales up. I suspect the on-demand scaling, from a couple of things I’ve seen, may be the same way, so you still may need to have some headroom provisioned just to protect yourself from that. But again, this comes from a point of me not having had experience with this yet, so we’ll have to see. Obviously, it’s a great step in the right direction, and overall, really, I’ve been amazed by how far the DynamoDB service has come in just the last 12 months since re:Invent 2017.
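Concretely, on-demand capacity is just a billing mode on the table, so there is no throughput to size. A minimal sketch with boto3, using hypothetical table names:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# New table: no ProvisionedThroughput block at all, just PAY_PER_REQUEST.
dynamodb.create_table(
    TableName="events",                                     # hypothetical table
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Existing provisioned tables can be switched over in place.
dynamodb.update_table(TableName="legacy-table", BillingMode="PAY_PER_REQUEST")
```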

“I published an article just before re:Invent 2017 that was pretty extensively researched. I talked to a lot of people who were doing billion-record-size DynamoDB tables and had had a lot of war stories about it. We had come up with a lot of reasons why DynamoDB was lacking a little bit once you actually got right down to it and tried to use it. A lot of that was around things like hotkeys, and if you had enough traffic on a really small part of your dataset, you wound up losing a lot of the scaling benefits. The team would recommend doing things like going to different DynamoDB tables to avoid having all of your capacity used up on a small number of records, and so that was really the antithesis of what DynamoDB should be used for.

“But the team has really come along in the last 12 months: they’ve released adaptive capacity, autoscaling, and now this on-demand stuff, and then they’ve come up with some really big operational things like backup and restore and point-in-time recovery for DynamoDB. It’s just really an amazing set of features for what you and I would probably tend to think of as a more mature service that’s been around for a while, but it’s taken a quantum leap forward and really has made itself one of the most compelling, if not the most compelling, cloud-native data sources out there.

“I said all that without talking about transactions at all. That’s because I think there’s a lot of use cases where you don’t need transactions on DynamoDB. Rick Houlihan, who’s a principal architect for DynamoDB, has a great re:Invent session that I’ve been plugging the heck out of on Twitter because it blew my mind, where he talks about DynamoDB single tables and when you do and do not need DynamoDB transactions, and he makes the really good point that you don’t want transactions if what you’re looking for is relational behavior in a DynamoDB table. That’s not the point of releasing this feature. You’re going to have pain if you try to use it that way. It’s really more about things like trying to manage atomic updates to multiple items at the same time. In a single-table structure, in that case, DynamoDB transactions could be great.”
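For the atomic multi-item update case Forrest describes, the new TransactWriteItems call groups writes so they all succeed or all fail; this is a hedged sketch against a hypothetical single-table `accounts` design.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Debit one item and credit another atomically; if the condition on the first
# update fails, neither write is applied.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"pk": {"S": "ACCOUNT#alice"}},
                "UpdateExpression": "SET balance = balance - :amt",
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"pk": {"S": "ACCOUNT#bob"}},
                "UpdateExpression": "SET balance = balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
    ]
)
```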

Hillel agreed, “I think it’s one of those things that will very quickly get blown out of proportion, but what excites me is that there have basically been two classes of announcements around Dynamo over the past year. The first is all the things you thought Dynamo should do well but didn’t necessarily, and those things getting better, particularly around not having to care about a lot of things. The less I need to care about as a user, as a consumer of Dynamo, the better. I want to throw my data in there. I want to query my data. I want to touch my data, and I want it to just sort of magically work, right? I think Dynamo is iterating more and more towards that, and that’s great.

“The other type of innovation I like is where it just simplifies the way you do things. For the subset of people, and I think it’s a small subset, who are really dealing with managing state across multiple tables and have a lot of code around, ‘Hey, what happens if this or that failed, and what do I have to roll back?’ and things like that, I love the idea that that just becomes much simpler for them to manage. Anywhere the database can make your code simpler, less confusing, more robust, and easier to test, that’s a great thing.

“I think it’s a small subset of consumers of DynamoDB who are going to care about this and probably a bigger subset of people who think they care about this and hopefully don’t fall into the trap of using it where they don’t need it. But for me, it’s about trying to make everything three lines of code. If everything could be three lines of code, I think it would be really easy to test everything, and debug everything, and expand everything. The more those services are managed and simple to use, the better.


“Let’s mention Aurora for a second. There are a bunch of things with Aurora, but I saw the Data API announcement, and to me that was interesting because of the notion of doing away with some of the stateful connections you need with SQL databases, which don’t really align with the way a serverless application should be written. The Data API seemed like a step in the right direction. Do you see it that way?”

Forrest replied, “I do see it as a step in the right direction. My understanding is that there’s a long way to go here, and as a security company, I’m sure that you’re all over that, because at least from the initial information on the Data API, you’re just putting raw SQL into a VTL template for AppSync, or for whatever service is connecting to the Data API, and there’s no way to prepare or escape those SQL strings, so it’s basically a SQL injection playground. That’s something they’re going to have to figure out.”

Hillel stated, “What you’re saying is the opportunity of a data API, for security, is always to basically make a more formal way to build a query and avoid some of the string concatenation problems. This data API didn’t really go that direction probably so they could roll something out more simply and more quickly, but yeah, I’d love to see it iterate in that direction of being more structured.”
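As an illustration of both sides of this exchange, the sketch below uses the shape of the later rds-data client (the beta API discussed here differed, so treat the call names and ARNs as assumptions): the first call shows the raw string concatenation Forrest warns about, the second the more structured, parameterized style Hillel hopes the service moves toward.

```python
import boto3

rds_data = boto3.client("rds-data")

CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:demo"           # hypothetical
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:demo"  # hypothetical

user_input = "alice"  # imagine this came from an untrusted request

# The concatenation pattern Forrest calls a "SQL injection playground".
rds_data.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="app",
    sql=f"SELECT * FROM users WHERE name = '{user_input}'",
)

# The more structured direction Hillel describes: build the query formally
# and pass values as parameters instead of splicing strings.
rds_data.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="app",
    sql="SELECT * FROM users WHERE name = :name",
    parameters=[{"name": "name", "value": {"stringValue": user_input}}],
)
```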

Forrest responded, “Correct. It gets into the larger thing with Aurora Serverless, which is that the penalty you pay for having a capacity-based database that can scale down to zero and then scale back up is some latency and performance considerations around that. I haven’t used Aurora Serverless a lot, even though it’s been GA for a little while, mainly because it hasn’t played that nicely with Lambda. You still have those connection issues, and frankly, DynamoDB just made a lot more sense as something that was truly HTTP-based.

“But I actually expect that in 2019, particularly as the Data API comes out of beta, they address some of these early knee-jerk concerns that people have, and it becomes a useful and reasonable thing to work with, I’ll be spending a lot of time with it and starting to get an understanding of what those performance tradeoffs are, and where having that SQL interface and the relational model underneath actually makes sense and is more useful than the existing options we have.


“I’m definitely excited about it. I think it’s a step in the right direction. I think it’s a step toward something that’s honestly probably going to be more accessible to a lot of people than DynamoDB is, because DynamoDB is great, but let’s be honest. It’s just not a programming model that most developers are comfortable working with, not in the way that SQL is and should be. When we get an ORM for the Aurora Data API, probably a lot of people will start using it.”

Managed Streaming for Kafka

Hillel said, “Let’s talk about Managed Streaming for Kafka. That’s an announcement that I think slipped under my radar, but it was something you pointed out as really interesting. Why is that? Here’s another service that’s kind of parallel to an existing Amazon service, and they’re rolling it out, I guess, because of a lot of customer demand.”

Forrest stated, “Sure, and I think it plays right in, because we’re talking about serverless, and streaming is such a huge part of the serverless puzzle in terms of architectures, and not even just streaming, but event sourcing. I mentioned early on that I’ve been doing a lot of work with this, a lot of work with Kinesis specifically, over the past year: taking relational databases, turning them into event sources, placing events on a stream, having multiple consumers that read off that stream, and then fanning those out so that you can effectively take those events and turn them into materialized views. You’re not limited to one way of looking at your data once that event source becomes the single source of truth. Whether you want DynamoDB consumers that materialize that data, want to put it into a relational database, or want to go with something like Elasticsearch and put that behind AppSync, all those different options can be available to you simultaneously, and you can build this very flexible and extensible backend that’s available to lots of different types of consumers.
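One of the consumers in the fan-out Forrest describes could be as small as the sketch below: a Lambda function reading a Kinesis batch and materializing it into a DynamoDB view table (the table name and event fields are hypothetical).

```python
import base64
import json
import boto3

# Hypothetical DynamoDB table holding one materialized view of the event stream.
view = boto3.resource("dynamodb").Table("orders-view")

def handler(event, context):
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded on the Lambda event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        view.put_item(Item={
            "pk": payload["orderId"],
            "status": payload["status"],
            "updatedAt": payload["timestamp"],
        })
```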


“That’s a really interesting architectural pattern that has worked really well for the clients that I’ve been talking to at Trek10, but they’ve been limited to Kinesis as kind of the backbone where those events are being stored. Kafka just opens that up even more. The other service that we probably should mention here, which just came out at re:Invent, is the Quantum Ledger Database, which can take that stream of transactions as a central ledger, make it immutable, and potentially allow you to have trust and a central authority. I’m trying not to say the evil blockchain word, but that actually fulfills the use case that a lot of people have when they say what they need is blockchain.

“Those two services go hand-in-hand for me in a way, even though QLDB, from what I understand, is much more serverlessly priced. It was developed by Amazon in-house and they’ve been using it for a while, so it’s not necessarily a greenfield service, even though it’s just now being released to the public. With Managed Streaming for Kafka, on the other hand, they’re taking a third-party tool that a lot of their customers were using and just making it available, in kind of the same way they made Elasticsearch available. It’s a hosted service, and you’re going to pay for it by the hour, similar to the way you’re paying for Kinesis by the shard hour. So it’s not a truly serverless pricing model in that case, but Kafka is just something that so many people like and enjoy more than Kinesis, and Amazon is known for listening to their customers, so they’ve made it available to folks.”

Hillel agreed. “I think the whole streaming and fanout model is really critical to a lot of the way data needs to get processed in a serverless application. I think it’s something people often miss when they try to figure out how to move, how to do their initial migration. And it’s really critical people understand that serverless works differently, data gets managed differently, and the right way to do things is not necessarily the way they did it in the past. I think Kinesis is maybe a part of that, SQS is part of that in some cases, but Kafka really, I think, adds a whole level of flexibility for people who want to work it that way, so that’s really cool.

“Yeah, it’s not priced serverlessly, but I imagine they’ll get there, and the fact that it’s available for us is really important.”

Forrest replied, “I don’t know that they would ever price their Kafka service serverlessly. It would kind of surprise me if they got there. If anything, I’d be more likely to see Kinesis priced that way. I’ll just make a quick plug here while we’re on this subject. If you’re interested in that whole event-sourcing pattern for serverless and you want to get your head around it, because it’s pretty counterintuitive, there’s an interview I did with Rob Gruhl at Nordstrom on A Cloud Guru’s blog specifically on this topic. He dives really deep into what they’ve done there. It might be my favorite technical interview I’ve done this year, so if you want to learn more about that, please go check it out. I think you’ll enjoy it.”

Hillel stated, “One last one I wanted to bring up is the integration of IDEs. Honestly, we just started playing with them now. The concept excites me more than seeing how it really works yet, but just the idea that AWS is getting better at trying to embrace the whole development pipeline and not kind of say, ‘Here’s my API. Figure it out yourself,’ is exciting. Have you heard any feedback on that?”

Forrest replied, “Not feedback on the integration specifically, though I’m going to be getting into that very soon, actually, before the end of 2018. But I think you’re right that the bigger story here is that AWS has lagged behind in developer tooling for a long time. They’ve been way behind Azure, which, for a while, has had the ability to have kind of a hybrid debug and deployment model between your local machine and the cloud.

“The advice for AWS developers for a while now, especially in the serverless world, has been don’t try to test and debug this stuff locally. You’re going to have pain and you’re going to wind up creating these constructs that are artificial and don’t really match what you’re getting in the cloud. You should go ahead and just deploy this stuff to the cloud and test it there. But the reality on the ground, as every serverless developer knows, is that just introduces a lot of latency into your development cycle. It is phenomenally helpful, whenever you can, to be able to mock and test stuff locally just as you’re in the process of writing those Lambda functions. It really does speed up what you’re doing.
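The kind of quick local feedback Forrest is describing often amounts to nothing more exotic than unit-testing the handler with a canned event, as in the hedged sketch below; the module, handler, and event shape are hypothetical.

```python
# test_handler.py -- run with pytest for fast local feedback on handler logic,
# without deploying to the cloud.
import json

from my_service import handler   # hypothetical module under test

def test_returns_200_for_valid_body():
    # A stripped-down, API Gateway-style event; real events carry many more fields.
    event = {
        "httpMethod": "POST",
        "body": json.dumps({"orderId": "123"}),
    }
    response = handler(event, context=None)
    assert response["statusCode"] == 200
```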

“Yeah, I think the IDE integrations help a little bit with that, the step-through debugging, potentially. I’d like to see more in the future, similar to some of the things that the Serverless Framework has with serverless-offline. But really, ultimately, in the long term, and I’ve come to this over the second half of 2018 as I’ve been developing serverless architectures that use less Lambda and more things like AppSync, the only code you’re writing is potentially a little bit of VTL and a little bit of config, and a lot of the CRUD API functionality is just handled for you by the service. Really, your development reduces to a handful of YAML templates.

“If you’re doing that, and I think that is actually a trend that’s only going to increase over time, there is really less value for you in testing and debugging locally. You have to put those services in the cloud and figure out things like permissions issues and configuration problems that you may have in your infrastructure as code. I think that’s the trend, so I wouldn’t get too used to local test and debug for Lambda functions. The Lambda-less future is what I’m looking forward to.”

Hillel quipped, “So zero code, all YAML.

“A lot of announcements. I think we touched on the key ones, the ones that really show us how the platform is evolving to something that lets most of us deploy our applications this way and manage less infrastructure, worry about less overhead, and focus more on the value we create. Now we get to my favorite part of the show we do every week, which is just our favorite tweets of the week.”

Tweets of the Week

Forrest said, “I have to plug my former Trek10 colleague, Jared Short, who’s now over at Serverless Framework. He had something after the managed Kafka service came out, kind of playing on the folks over at Hacker News who always have the same reaction to this kind of thing. This is him quoting an imaginary, or possibly real, Hacker News user.

“We run three nodes… At $.42/hr for the managed kafka, compared to $.192/hr self hosted… we’ll keep it self hosted for now…” I love HN math.

Real world math: Over 1 year that is ~$2k difference, ~20 hours of engineering time. Maintenance isn’t free, it obscures true cost.

― Jared Short (@ShortJared) November 30, 2018

“I love this because he’s pointing out the hidden costs that people always forget when they try to do a straight up comparison of serverless costs versus what it would cost to run something locally. Yes, you may have some sort of marginal savings in dollars if you run that infrastructure yourself or if you host it on a lower-level service, but you’re forgetting all the technical debt you accrue, the time to market that you’re losing, and then, of course, all of the maintenance over time that you have to do, and it really doesn’t even compare. That’s why we love managed services. We love serverless.”

Hillel said, “I agree, and it resonates really well with me, because in a lot of the conversations we had when we started the company a year and a half ago, with people trying to understand what they were doing with serverless, and those who had kind of made the journey early, it was something along the lines of, ‘Yeah, we shifted to Lambda from EC2 because we thought we could save a few thousand dollars in compute cost or something.’ But then the real cost savings were that our DevOps team shrunk by two or three people. We were operating less stuff, managing less stuff. We could repurpose those people for things we really care about, and that was much more exciting. Three DevOps people is a lot more money a year than most people’s compute bill.”

Forrest agreed, “Absolutely. Just those developer hours, engineering hours, it’s amazing how expensive they are and how easy it is to forget what those people could be doing that actually provides more business value than maintaining some undifferentiated low-level service.”

Hillel selected a tweet from Jeremy Daly quoting Chris Munns in his session.

Don’t make puppies cry by using wildcards in #IAM roles says @chrismunns ! #reInvent2018 #reInvent #serverless #security pic.twitter.com/LFpq3J7gLi

― Jeremy Daly (@jeremy_daly) November 27, 2018

“I liked the illustration. I love people talking about not making puppies cry, but I also like the sentiment of focusing on using the right permissions for your serverless components. One of the things I talk about a lot is that one of the reasons serverless applications can be much more secure than most other applications is their inherently fine-grained architecture. You tend to divide things into smaller and smaller pieces because of the nature of stateless and ephemeral compute and because you’re using APIs and resources from the cloud, and you now have the opportunity to apply IAM rules across the board to all these things. Why are we not doing it well? Why are we not using the tools that are available and the processes out there to really shrink down what everything can do? There’s so much security value in that, and I think it gets missed. For me, that’s something we focus a lot on at Protego, and we spend a tremendous amount of time trying to solve it well. It’s not an easy problem to solve, but we do believe that getting it right is really important, plus not making puppies cry is really important.”
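As a closing illustration of the point about fine-grained permissions, a least-privilege policy for a single function names specific actions and a specific resource rather than wildcards; the actions and table ARN below are hypothetical.

```python
import json

# Scoped to exactly what one function needs: two DynamoDB actions on one table,
# instead of "Action": "*" and "Resource": "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

print(json.dumps(policy, indent=2))
```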
