My new book guides you through the start-to-finish build of a real-world web application in Go — covering topics like how to structure your code, manage dependencies, create dynamic database-driven pages, and authenticate and authorize users securely.

Go 1.25 introduced a new `http.CrossOriginProtection` middleware to the standard library — and it got me wondering: have we finally reached the point where CSRF attacks can be prevented without relying on a token-based check (like double-submit cookies)? Can we build secure web applications without bringing in third-party packages like `justinas/nosurf` or `gorilla/csrf`?

And I think the answer now may be a cautious “yes” — so long as a few important conditions are met. If you want to skip the explanations and just see what those conditions are, you can click here.

## The http.CrossOriginProtection middleware

The new `http.CrossOriginProtection` middleware works by checking the values in a request's `Sec-Fetch-Site` and `Origin` headers to determine where the request is coming from. It automatically rejects any non-safe requests that are not from the same origin, and sends the client a `403 Forbidden` response.

The `http.CrossOriginProtection` middleware has some limitations, which we'll discuss in a moment, but it is robust and simple to use, and a great addition to the standard library.

## How it works

Modern browsers automatically include the `Sec-Fetch-Site` header in requests. This header indicates the relationship between the origin of the page making the request and the origin of the page being requested. Two pages are considered to have the same origin if their scheme, hostname and port (if present) exactly match, in which case the browser will include a `Sec-Fetch-Site: same-origin` header in the request. If the two pages don't have the same origin, the `Sec-Fetch-Site` header will be set to a different value to indicate this, and `http.CrossOriginProtection` will reject the request.

If no `Sec-Fetch-Site` header is present, `http.CrossOriginProtection` will fall back to checking the `Origin` header. Specifically, it will compare the request's `Origin` header and `Host` header to see if they match. If they don't match, then it considers the request to not be from the same origin and it will reject it.

If neither the `Sec-Fetch-Site` nor `Origin` headers are present, then it assumes the request is not coming from a web browser and will always allow the request to proceed.

The checks described above only take place on requests with non-safe methods (`POST`, `PUT`, etc.). Requests with safe HTTP methods (`GET`, `OPTIONS`, etc.) are always allowed to proceed.

If you're interested in learning more about the design and decision making behind `http.CrossOriginProtection`, the original proposal by Filippo Valsorda is an excellent read.

At its simplest, you can use it like this:

File: main.go

```go
package main

import (
	"fmt"
	"log/slog"
	"net/http"
	"os"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", home)

	slog.Info("starting server on :4000")

	// Wrap the mux with the cross-origin protection middleware.
	err := http.ListenAndServe(":4000", http.NewCrossOriginProtection().Handler(mux))
	if err != nil {
		slog.Error(err.Error())
		os.Exit(1)
	}
}

func home(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello!")
}
```
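If you want to see these rules in action without opening a browser, here's a minimal sketch that uses `net/http/httptest` to simulate the `Sec-Fetch-Site` values a browser would send. The handler and the `post` helper are just for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello!")
	})

	// Start a test server with the middleware wrapped around the mux.
	srv := httptest.NewServer(http.NewCrossOriginProtection().Handler(mux))
	defer srv.Close()

	// post sends a POST request, optionally with a Sec-Fetch-Site header,
	// and prints the resulting status code.
	post := func(secFetchSite string) {
		req, err := http.NewRequest(http.MethodPost, srv.URL+"/", nil)
		if err != nil {
			panic(err)
		}
		if secFetchSite != "" {
			req.Header.Set("Sec-Fetch-Site", secFetchSite)
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Printf("Sec-Fetch-Site: %q -> %d\n", secFetchSite, resp.StatusCode)
	}

	post("same-origin") // same origin: allowed
	post("cross-site")  // cross-origin: rejected
	post("")            // no Sec-Fetch-Site or Origin header, so assumed
	                    // to be a non-browser client: allowed
}
```

Running this should print status codes 200, 403 and 200 respectively.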
If you want, it's also possible to configure the behavior of `http.CrossOriginProtection`. Configuration options include adding trusted origins (from which cross-origin requests are allowed), and using a custom handler for rejected requests instead of the default `403 Forbidden` response. When I've wanted to customize the behavior, I've been using a pattern like this:

File: main.go

```go
package main

import (
	"fmt"
	"log/slog"
	"net/http"
	"os"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", home)

	slog.Info("starting server on :4000")

	err := http.ListenAndServe(":4000", preventCSRF(mux))
	if err != nil {
		slog.Error(err.Error())
		os.Exit(1)
	}
}

func preventCSRF(next http.Handler) http.Handler {
	cop := http.NewCrossOriginProtection()

	cop.AddTrustedOrigin("https://foo.example.com")

	cop.SetDenyHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusBadRequest)
		w.Write([]byte("CSRF check failed"))
	}))

	return cop.Handler(next)
}

func home(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello!")
}
```

## Limitations

The big limitation of `http.CrossOriginProtection` is that it is only effective at blocking requests from modern browsers. Your application will still be vulnerable to CSRF attacks coming from older (generally pre-2020) browsers which do not include at least one of the `Sec-Fetch-Site` or `Origin` headers in requests. Right now, browser support for the `Sec-Fetch-Site` header is at 92%, and for `Origin` it is 95%. So — in general — relying on `http.CrossOriginProtection` is not sufficient as your only protection against CSRF.

It's also important to note that the `Sec-Fetch-Site` header is only sent when your application has a "trustworthy origin" — which basically means that your application needs to be using HTTPS in production for `http.CrossOriginProtection` to work to its full potential.

You should also be aware that when no `Sec-Fetch-Site` header is present in a request, and it falls back to comparing the `Origin` and `Host` headers, the `Host` header does not include the scheme. This means that `http.CrossOriginProtection` will wrongly allow cross-origin requests from `http://{host}` to `https://{host}` when there is no `Sec-Fetch-Site` header present but there is an `Origin` header. To mitigate this risk, you should ideally configure your application to use HTTP Strict Transport Security (HSTS).

## Enforcing TLS 1.3

Looking into this got me wondering... what if you're already planning to use HTTPS and enforce TLS 1.3 as the minimum supported TLS version? Could you be confident that all web browsers which support TLS 1.3 also support either the `Sec-Fetch-Site` or `Origin` headers?

As far as I can tell from the MDN compatibility data and tables from Can I Use, the answer is "yes" for (almost) all major browsers. Older browsers which don't support TLS 1.3 simply won't be able to connect to your application.
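To make this concrete, here's a minimal sketch of what enforcing TLS 1.3 might look like, adapting the earlier example. The `./cert.pem` and `./key.pem` paths are placeholders for your own TLS certificate and private key:

File: main.go

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log/slog"
	"net/http"
	"os"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", home)

	srv := &http.Server{
		Addr:    ":4000",
		Handler: http.NewCrossOriginProtection().Handler(mux),
		// Reject TLS handshakes below version 1.3. Older clients that
		// only speak TLS 1.2 or below won't be able to connect at all.
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS13,
		},
	}

	slog.Info("starting server on :4000")

	// "./cert.pem" and "./key.pem" are placeholders for your own
	// TLS certificate and private key files.
	err := srv.ListenAndServeTLS("./cert.pem", "./key.pem")
	if err != nil {
		slog.Error(err.Error())
		os.Exit(1)
	}
}

func home(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello!")
}
```

With `MinVersion` set like this, the TLS handshake itself fails for older clients, before any HTTP request is ever made.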
For the modern major browsers that do support TLS 1.3 and can connect, you can be confident that at least one of the `Sec-Fetch-Site` or `Origin` headers is supported — and therefore `http.CrossOriginProtection` will work effectively.

If you enforce TLS 1.3 as the minimum version, the only exception to this I can see is Firefox v60-69 (2018-2019), which did not support the `Sec-Fetch-Site` header and did not send the `Origin` header for `POST` requests. This means that `http.CrossOriginProtection` will not work effectively to block requests originating from that browser. Can I Use puts usage of Firefox v60-69 at 0%, so the risk here appears very low — but there are probably some computers somewhere in the world still running it.

Also, we only have this information for the major browsers — Chrome/Chromium, Firefox, Edge, Safari, Opera and Internet Explorer. But of course, other browsers exist. Most of them are forks of Chromium or Firefox and therefore will likely be OK, but there's no guarantee here and it is hard to quantify the risk.

So if you use HTTPS and enforce TLS 1.3, it's a huge step forward in making sure that `http.CrossOriginProtection` can work effectively. However, there remains a non-zero risk that comes from Firefox v60-69 and non-major browsers, so you may want to add some defense-in-depth and utilize `SameSite` cookies too.

We'll talk more about `SameSite` cookies in a moment, but first we need to take a quick detour and discuss the difference between the terms origin and site.

## Cross-site vs cross-origin

In the world of web specifications and web browsers, cross-site and cross-origin are subtly different things, and in a security context like this it's important to understand the difference and be exact about what we mean. I'll quickly explain.

Two websites have the same origin if they share the exact same scheme, hostname, and port (if present). So `https://example.com` and `https://www.example.com` are not the same origin because the hostnames (`example.com` and `www.example.com`) are different. A request between them would be cross-origin.

Two websites are 'same site' if they share the same scheme and registerable domain.

Note: The registerable domain is the part of the hostname just before (and including) the effective TLD. Here are a few examples:

- For `https://www.google.com/` the TLD is `com` and the registerable domain is `google.com`.
- For `https://login.mail.ucla.edu` the TLD is `edu` and the registerable domain is `ucla.edu`.
- For `https://www.gov.uk` the TLD is `gov.uk` and the registerable domain is `www.gov.uk`.

You can find the complete list of effective TLDs here.

So `https://example.com`, `https://www.example.com` and `https://login.admin.example.com` are all considered to be the same site because the scheme (`https`) and registerable domain (`example.com`) are the same. A request between these would not be considered to be cross-site, but it would be cross-origin.

Note: Some browser versions use a different definition of same-site which doesn't require the same scheme, only the same registerable domain. For these browser versions, `https://admin.example.com` and `http://blog.example.com` would also be considered same-site. Nowadays, this is typically referred to as schemaless same-site, but in historical versions or documentation it may have just been called same-site.

So what are the points that I'm building up to here?

Go's `http.CrossOriginProtection` middleware is accurately and appropriately named. It blocks cross-origin requests. It's more strict than it would be if it only blocked cross-site requests, because it also blocks requests from other origins under the same site (i.e. registerable domain). This is useful because it helps to prevent a situation where your janky-not-been-updated-in-the-last-decade WordPress blog at `https://blog.example.com` is compromised and used to launch a request forgery attack at your important `https://admin.example.com` website.

When most people — myself included — casually talk about "CSRF attacks", what we are referring to most of the time is actually cross-origin request forgery, not just cross-site request forgery. It's a shame that CSRF is the commonly used and known acronym to describe this family of attacks, because most of the time CORF would be more accurate and appropriate. But hey! That's the messy world we live in. For the rest of this post though, I'll use the term CORF instead of CSRF when that is exactly what I mean.

## SameSite cookies

The `SameSite` cookie attribute has generally been supported by web browsers since 2017, and by Go since v1.11. If you set the `SameSite=Lax` or `SameSite=Strict` attribute on a cookie, that cookie will only be included in requests to the same site that set it. In turn, that prevents cross-site request forgery attacks (but not cross-origin attacks from within the same site).

There is some good news here — all major browsers that support TLS 1.3 also fully support `SameSite` cookies, with no exceptions that I can see. So if you enforce TLS 1.3, you can be confident that all the major browsers using your application will respect the `SameSite` attribute. This means that by using `SameSite=Lax` or `SameSite=Strict` on your cookies, you cover off the risk of cross-site request forgeries from Firefox v60-69 that we talked about earlier.
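For reference, here's a minimal sketch of what setting a `SameSite` cookie looks like in Go. The `session_id` cookie name and the `setSessionCookie` helper are hypothetical:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// setSessionCookie is a hypothetical helper that sets a session cookie
// with the SameSite=Lax attribute. Use http.SameSiteStrictMode for the
// stricter variant.
func setSessionCookie(w http.ResponseWriter, sessionID string) {
	http.SetCookie(w, &http.Cookie{
		Name:     "session_id", // hypothetical cookie name
		Value:    sessionID,
		Path:     "/",
		Secure:   true, // only send the cookie over HTTPS
		HttpOnly: true, // hide the cookie from client-side JavaScript
		SameSite: http.SameSiteLaxMode,
	})
}

func main() {
	// Use a httptest.ResponseRecorder to show the resulting header.
	rec := httptest.NewRecorder()
	setSessionCookie(rec, "abc123")
	fmt.Println(rec.Header().Get("Set-Cookie"))
	// Prints (roughly): session_id=abc123; Path=/; HttpOnly; Secure; SameSite=Lax
}
```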
## Putting it all together

If you combine using HTTPS, enforcing TLS 1.3 as the minimum version, using `SameSite=Lax` or `SameSite=Strict` cookies appropriately, and using the `http.CrossOriginProtection` middleware in your application, as far as I can see there are only two unmitigated CSRF/CORF risks from major browsers:

1. CORF attacks from within the same site (i.e. from another subdomain under your registerable domain) in Firefox v60-69.
2. CORF attacks from an HTTP version of your origin, from browsers that do not support the `Sec-Fetch-Site` header.

For the first of these risks, if you don't have any other websites under your registerable domain, or you're confident that those websites are secure and uncompromised, then this might be a risk that you're willing to accept given the extremely low usage of Firefox v60-69.

For the second, if you don't support HTTP on your origin at all (including redirects) then this isn't something you need to worry about. Otherwise, you can mitigate the risk by including an HSTS header on your HTTPS responses.

At the start of this article, I said that not using a token-based CSRF check might be OK under certain conditions. So let's run through what those are: