Diffstat (limited to 'meta/3rd/OpenResty')
 -rw-r--r--  meta/3rd/OpenResty/library/ngx.lua | 2068
 1 file changed, 737 insertions(+), 1331 deletions(-)
diff --git a/meta/3rd/OpenResty/library/ngx.lua b/meta/3rd/OpenResty/library/ngx.lua
index c97a2f7f..eacf1a66 100644
--- a/meta/3rd/OpenResty/library/ngx.lua
+++ b/meta/3rd/OpenResty/library/ngx.lua
@@ -2,34 +2,30 @@
---@class ngx : table
---
---- The `ngx.null` constant is a `NULL` light userdata usually used to represent nil values in Lua tables etc and is similar to the [lua-cjson](http://www.kyne.com.au/~mark/software/lua-cjson.php) library's `cjson.null` constant.
---- This constant was first introduced in the `v0.5.0rc5` release.
+--- The `ngx.null` constant is a `NULL` light userdata usually used to represent nil values in Lua tables etc., and is similar to the `lua-cjson` library's `cjson.null` constant.
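---
--- For example (an illustrative sketch, not from the upstream docs), `ngx.null` can be
--- used as a placeholder that keeps a key present in a Lua table, unlike a real `nil`:
---
--- ```lua
--- local t = { foo = ngx.null }
--- if t.foo == ngx.null then
---     ngx.say("foo holds the null sentinel")
--- end
--- ```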
---@field null userdata
---
--- Read and write the current request's response status. This should be called
--- before sending out the response headers.
---
--- ```lua
+--- -- set
--- ngx.status = ngx.HTTP_CREATED
+--- -- get
--- status = ngx.status
--- ```
---
---- Setting `ngx.status` after the response header is sent out has no effect but leaving an error message in your Nginx's error log file:
----
---- attempt to set ngx.status after sending out response headers
+--- Setting `ngx.status` after the response header is sent out has no effect other than leaving an error message in the NGINX error log file:
+---
+--- attempt to set ngx.status after sending out response headers
---@field status ngx.http.status_code
---
----
--- Returns `true` if the response headers have been sent (by ngx_lua), and `false` otherwise.
---
---- This API was first introduced in ngx_lua v0.3.1rc6.
---@field headers_sent boolean
---
----
---- Returns `true` if the current request is an Nginx subrequest, or `false` otherwise.
+--- Returns `true` if the current request is an NGINX subrequest, or `false` otherwise.
---@field is_subrequest boolean
---
----
ngx = {}
---@class ngx.OK
@@ -173,12 +169,11 @@ ngx.STDERR = 0
--- ngx.ctx table
---
---- This table can be used to store per-request Lua context data and has a life time identical to the current request (as with the Nginx variables).
+--- This table can be used to store per-request Lua context data and has a lifetime identical to the current request (as with the NGINX variables).
---
--- Consider the following example,
---
--- ```nginx
----
--- location /test {
--- rewrite_by_lua_block {
--- ngx.ctx.foo = 76
@@ -195,7 +190,6 @@ ngx.STDERR = 0
--- Then `GET /test` will yield the output
---
--- ```bash
----
--- 79
--- ```
---
@@ -204,7 +198,6 @@ ngx.STDERR = 0
--- Every request, including subrequests, has its own copy of the table. For example:
---
--- ```nginx
----
--- location /sub {
--- content_by_lua_block {
--- ngx.say("sub pre: ", ngx.ctx.blah)
@@ -227,7 +220,6 @@ ngx.STDERR = 0
--- Then `GET /main` will give the output
---
--- ```bash
----
--- main pre: 73
--- sub pre: nil
--- sub post: 32
@@ -239,7 +231,6 @@ ngx.STDERR = 0
--- Internal redirection will destroy the original request `ngx.ctx` data (if any) and the new request will have an empty `ngx.ctx` table. For instance,
---
--- ```nginx
----
--- location /new {
--- content_by_lua_block {
--- ngx.say(ngx.ctx.foo)
@@ -257,7 +248,6 @@ ngx.STDERR = 0
--- Then `GET /orig` will give
---
--- ```bash
----
--- nil
--- ```
---
@@ -268,18 +258,16 @@ ngx.STDERR = 0
--- Overriding `ngx.ctx` with a new Lua table is also supported, for example,
---
--- ```lua
----
--- ngx.ctx = { foo = 32, bar = 54 }
--- ```
---
---- When being used in the context of [init_worker_by_lua*](#init_worker_by_lua), this table just has the same lifetime of the current Lua handler.
+--- When being used in the context of `init_worker_by_lua*`, this table just has the same lifetime as the current Lua handler.
---
--- The `ngx.ctx` lookup requires relatively expensive metamethod calls and it is much slower than explicitly passing per-request data along by your own function arguments. So do not abuse this API for saving your own function arguments because it usually has quite some performance impact.
---
---- Because of the metamethod magic, never "local" the `ngx.ctx` table outside your Lua function scope on the Lua module level due to [worker-level data sharing](#data-sharing-within-an-nginx-worker). For example, the following is bad:
+--- Because of the metamethod magic, never "local" the `ngx.ctx` table outside your Lua function scope on the Lua module level due to `worker-level data sharing`. For example, the following is bad:
---
--- ```lua
----
--- -- mymodule.lua
--- local _M = {}
---
@@ -298,7 +286,6 @@ ngx.STDERR = 0
--- Use the following instead:
---
--- ```lua
----
--- -- mymodule.lua
--- local _M = {}
---
@@ -317,28 +304,26 @@ ngx.thread = {}
---@class ngx.thread : thread
---- Kills a running "light thread" created by [ngx.thread.spawn](#ngxthreadspawn). Returns a true value when successful or `nil` and a string describing the error otherwise.
+--- Kills a running "light thread" created by `ngx.thread.spawn`. Returns a true value when successful or `nil` and a string describing the error otherwise.
---
---- According to the current implementation, only the parent coroutine (or "light thread") can kill a thread. Also, a running "light thread" with pending Nginx subrequests (initiated by [ngx.location.capture](#ngxlocationcapture) for example) cannot be killed due to a limitation in the Nginx core.
----
---- This API was first enabled in the `v0.9.9` release.
+--- According to the current implementation, only the parent coroutine (or "light thread") can kill a thread. Also, a running "light thread" with pending NGINX subrequests (initiated by `ngx.location.capture` for example) cannot be killed due to a limitation in the NGINX core.
---
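--- A minimal sketch (illustrative, not from the upstream docs) of killing a child
--- "light thread" from its parent coroutine:
---
--- ```lua
--- local thread = ngx.thread.spawn(function()
---     ngx.sleep(10)  -- simulate a long-running task
--- end)
---
--- local ok, err = ngx.thread.kill(thread)
--- if not ok then
---     ngx.log(ngx.ERR, "failed to kill thread: ", err)
--- end
--- ```
---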
---@param thread ngx.thread
----@return boolean,string
+---@return boolean ok
+---@return string? error
function ngx.thread.kill(thread) end
--- Waits on one or more child "light threads" and returns the results of the first "light thread" that terminates (either successfully or with an error).
---
---- The arguments `thread1`, `thread2`, and etc are the Lua thread objects returned by earlier calls of [ngx.thread.spawn](#ngxthreadspawn).
+--- The arguments `thread1`, `thread2`, etc. are the Lua thread objects returned by earlier calls to `ngx.thread.spawn`.
---
---- The return values have exactly the same meaning as [coroutine.resume](#coroutineresume), that is, the first value returned is a boolean value indicating whether the "light thread" terminates successfully or not, and subsequent values returned are the return values of the user Lua function that was used to spawn the "light thread" (in case of success) or the error object (in case of failure).
+--- The return values have exactly the same meaning as `coroutine.resume`, that is, the first value returned is a boolean value indicating whether the "light thread" terminates successfully or not, and subsequent values returned are the return values of the user Lua function that was used to spawn the "light thread" (in case of success) or the error object (in case of failure).
---
--- Only the direct "parent coroutine" can wait on its child "light thread", otherwise a Lua exception will be raised.
---
---- The following example demonstrates the use of `ngx.thread.wait` and [ngx.location.capture](#ngxlocationcapture) to emulate [ngx.location.capture_multi](#ngxlocationcapture_multi):
+--- The following example demonstrates the use of `ngx.thread.wait` and `ngx.location.capture` to emulate `ngx.location.capture_multi`:
---
--- ```lua
----
--- local capture = ngx.location.capture
--- local spawn = ngx.thread.spawn
--- local wait = ngx.thread.wait
@@ -370,7 +355,6 @@ function ngx.thread.kill(thread) end
--- And below is an example demonstrating the "wait any" model:
---
--- ```lua
----
--- function f()
--- ngx.sleep(0.2)
--- ngx.say("f: hello")
@@ -413,50 +397,47 @@ function ngx.thread.kill(thread) end
---
--- And it will generate the following output:
---
----
--- f thread created: running
--- g thread created: running
--- g: hello
--- res: g done
---
----
---- This API was first enabled in the `v0.7.0` release.
----
---@param ... ngx.thread
----@return boolean,any
+---@return boolean ok
+---@return any ret_or_error
function ngx.thread.wait(...) end
--- Spawns a new user "light thread" with the Lua function `func` as well as those optional arguments `arg1`, `arg2`, etc. Returns a Lua thread (or Lua coroutine) object that represents this "light thread".
---
--- "Light threads" are just a special kind of Lua coroutine that is scheduled by the ngx_lua module.
---
---- Before `ngx.thread.spawn` returns, the `func` will be called with those optional arguments until it returns, aborts with an error, or gets yielded due to I/O operations via the [Nginx API for Lua](#nginx-api-for-lua) (like [tcpsock:receive](#tcpsockreceive)).
+--- Before `ngx.thread.spawn` returns, the `func` will be called with those optional arguments until it returns, aborts with an error, or gets yielded due to I/O operations via the NGINX APIs for Lua (like `tcpsock:receive`).
---
--- After `ngx.thread.spawn` returns, the newly-created "light thread" will keep running asynchronously usually at various I/O events.
---
---- All the Lua code chunks running by [rewrite_by_lua](#rewrite_by_lua), [access_by_lua](#access_by_lua), and [content_by_lua](#content_by_lua) are in a boilerplate "light thread" created automatically by ngx_lua. Such boilerplate "light thread" are also called "entry threads".
+--- All the Lua code chunks run by `rewrite_by_lua`, `access_by_lua`, and `content_by_lua` are in a boilerplate "light thread" created automatically by ngx_lua. Such boilerplate "light threads" are also called "entry threads".
---
---- By default, the corresponding Nginx handler (e.g., [rewrite_by_lua](#rewrite_by_lua) handler) will not terminate until
+--- By default, the corresponding NGINX handler (e.g., `rewrite_by_lua` handler) will not terminate until
---
--- 1. both the "entry thread" and all the user "light threads" terminate,
---- 1. a "light thread" (either the "entry thread" or a user "light thread" aborts by calling [ngx.exit](#ngxexit), [ngx.exec](#ngxexec), [ngx.redirect](#ngxredirect), or [ngx.req.set_uri(uri, true)](#ngxreqset_uri), or
+--- 1. a "light thread" (either the "entry thread" or a user "light thread") aborts by calling `ngx.exit`, `ngx.exec`, `ngx.redirect`, or `ngx.req.set_uri(uri, true)`, or
--- 1. the "entry thread" terminates with a Lua error.
---
--- When the user "light thread" terminates with a Lua error, however, it will not abort other running "light threads" like the "entry thread" does.
---
---- Due to the limitation in the Nginx subrequest model, it is not allowed to abort a running Nginx subrequest in general. So it is also prohibited to abort a running "light thread" that is pending on one ore more Nginx subrequests. You must call [ngx.thread.wait](#ngxthreadwait) to wait for those "light thread" to terminate before quitting the "world". A notable exception here is that you can abort pending subrequests by calling [ngx.exit](#ngxexit) with and only with the status code `ngx.ERROR` (-1), `408`, `444`, or `499`.
+--- Due to the limitation in the NGINX subrequest model, it is not allowed to abort a running NGINX subrequest in general. So it is also prohibited to abort a running "light thread" that is pending on one or more NGINX subrequests. You must call `ngx.thread.wait` to wait for those "light threads" to terminate before quitting the "world". A notable exception here is that you can abort pending subrequests by calling `ngx.exit` with and only with the status code `ngx.ERROR` (-1), `408`, `444`, or `499`.
---
--- The "light threads" are not scheduled in a pre-emptive way. In other words, no time-slicing is performed automatically. A "light thread" will keep running exclusively on the CPU until
---
--- 1. a (nonblocking) I/O operation cannot be completed in a single run,
---- 1. it calls [coroutine.yield](#coroutineyield) to actively give up execution, or
---- 1. it is aborted by a Lua error or an invocation of [ngx.exit](#ngxexit), [ngx.exec](#ngxexec), [ngx.redirect](#ngxredirect), or [ngx.req.set_uri(uri, true)](#ngxreqset_uri).
+--- 1. it calls `coroutine.yield` to actively give up execution, or
+--- 1. it is aborted by a Lua error or an invocation of `ngx.exit`, `ngx.exec`, `ngx.redirect`, or `ngx.req.set_uri(uri, true)`.
---
--- For the first two cases, the "light thread" will usually be resumed later by the ngx_lua scheduler unless a "stop-the-world" event happens.
---
---- User "light threads" can create "light threads" themselves. And normal user coroutines created by [coroutine.create](#coroutinecreate) can also create "light threads". The coroutine (be it a normal Lua coroutine or a "light thread") that directly spawns the "light thread" is called the "parent coroutine" for the "light thread" newly spawned.
+--- User "light threads" can create "light threads" themselves. And normal user coroutines created by `coroutine.create` can also create "light threads". The coroutine (be it a normal Lua coroutine or a "light thread") that directly spawns the "light thread" is called the "parent coroutine" for the "light thread" newly spawned.
---
---- The "parent coroutine" can call [ngx.thread.wait](#ngxthreadwait) to wait on the termination of its child "light thread".
+--- The "parent coroutine" can call `ngx.thread.wait` to wait on the termination of its child "light thread".
---
--- You can call coroutine.status() and coroutine.yield() on the "light thread" coroutines.
---
@@ -464,13 +445,12 @@ function ngx.thread.wait(...) end
---
--- 1. the current "light thread" already terminates (either successfully or with an error),
--- 1. its parent coroutine is still alive, and
---- 1. its parent coroutine is not waiting on it with [ngx.thread.wait](#ngxthreadwait).
+--- 1. its parent coroutine is not waiting on it with `ngx.thread.wait`.
---
--- The following example demonstrates the use of coroutine.yield() in the "light thread" coroutines
--- to do manual time-slicing:
---
--- ```lua
----
--- local yield = coroutine.yield
---
--- function f()
@@ -500,7 +480,6 @@ function ngx.thread.wait(...) end
---
--- Then it will generate the output
---
----
--- 0
--- 1
--- f 1
@@ -510,11 +489,9 @@ function ngx.thread.wait(...) end
--- f 3
--- 4
---
----
---- "Light threads" are mostly useful for making concurrent upstream requests in a single Nginx request handler, much like a generalized version of [ngx.location.capture_multi](#ngxlocationcapture_multi) that can work with all the [Nginx API for Lua](#nginx-api-for-lua). The following example demonstrates parallel requests to MySQL, Memcached, and upstream HTTP services in a single Lua handler, and outputting the results in the order that they actually return (similar to Facebook's BigPipe model):
+--- "Light threads" are mostly useful for making concurrent upstream requests in a single NGINX request handler, much like a generalized version of `ngx.location.capture_multi` that can work with all the NGINX APIs for Lua. The following example demonstrates parallel requests to MySQL, Memcached, and upstream HTTP services in a single Lua handler, outputting the results in the order in which they actually return (similar to Facebook's BigPipe model):
---
--- ```lua
----
--- -- query mysql, memcached, and a remote http service at the same time,
--- -- output the results in the order that they
--- -- actually return the results.
@@ -554,122 +531,90 @@ function ngx.thread.wait(...) end
--- ngx.thread.spawn(query_http) -- create thread 3
--- ```
---
---- This API was first enabled in the `v0.7.0` release.
----
---@param func function
---@param ... any
---@return ngx.thread
-function ngx.thread.spawn(func,...) end
+function ngx.thread.spawn(func, ...) end
--- NGINX worker methods
-ngx.worker={}
+ngx.worker = {}
---- This function returns a boolean value indicating whether the current Nginx worker process already starts exiting. Nginx worker process exiting happens on Nginx server quit or configuration reload (aka HUP reload).
----
---- This API was first introduced in the `0.9.3` release.
+--- This function returns a boolean value indicating whether the current NGINX worker process has already started exiting. Worker process exiting happens when the NGINX server quits or reloads its configuration (aka HUP reload).
---
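--- A common pattern (an illustrative sketch, not from the upstream docs) is to check this
--- flag inside a recurring timer handler and skip the periodic work once shutdown begins:
---
--- ```lua
--- local function handler(premature)
---     if premature or ngx.worker.exiting() then
---         return  -- the worker is shutting down; stop doing periodic work
---     end
---     -- ... periodic work ...
--- end
---
--- ngx.timer.every(30, handler)
--- ```
---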
---@return boolean
function ngx.worker.exiting() end
---- Returns the ordinal number of the current Nginx worker processes (starting from number 0).
+--- Returns the ordinal number of the current NGINX worker process (starting from number 0).
---
--- So if the total number of workers is `N`, then this method may return a number between 0
--- and `N - 1` (inclusive).
---
---- This function returns meaningful values only for Nginx 1.9.1+. With earlier versions of Nginx, it
---- always returns `nil`.
----
---- See also [ngx.worker.count](#ngxworkercount).
----
---- This API was first introduced in the `0.9.20` release.
----
---@return number
function ngx.worker.id() end
---- Returns the total number of the Nginx worker processes (i.e., the value configured
---- by the [worker_processes](https://nginx.org/en/docs/ngx_core_module.html#worker_processes)
+--- Returns the total number of the NGINX worker processes (i.e., the value configured
+--- by the `worker_processes`
--- directive in `nginx.conf`).
---
---- This API was first introduced in the `0.9.20` release.
----
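--- As an illustrative sketch (not from the upstream docs), `ngx.worker.id` and
--- `ngx.worker.count` can be combined to run a periodic task on exactly one worker:
---
--- ```lua
--- -- e.g., inside init_worker_by_lua_block
--- if ngx.worker.id() == 0 then
---     ngx.timer.every(60, function(premature)
---         if premature then return end
---         ngx.log(ngx.INFO, "housekeeping on 1 of ", ngx.worker.count(), " workers")
---     end)
--- end
--- ```
---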
---@return number
function ngx.worker.count() end
---- This function returns a Lua number for the process ID (PID) of the current Nginx worker process. This API is more efficient than `ngx.var.pid` and can be used in contexts where the [ngx.var.VARIABLE](#ngxvarvariable) API cannot be used (like [init_worker_by_lua](#init_worker_by_lua)).
----
---- This API was first introduced in the `0.9.5` release.
+--- This function returns a Lua number for the process ID (PID) of the current NGINX worker process. This API is more efficient than `ngx.var.pid` and can be used in contexts where the `ngx.var.VARIABLE` API cannot be used (like `init_worker_by_lua`).
---
---@return number
function ngx.worker.pid() end
---@class ngx.config : table
---
---- This string field indicates the current Nginx subsystem the current Lua environment is based on. For this module, this field always takes the string value `"http"`.
---- For [ngx_stream_lua_module](https://github.com/openresty/stream-lua-nginx-module#readme), however, this field takes the value `"stream"`.
---- This field was first introduced in the `0.10.1`.
+--- This string field indicates the current NGINX subsystem the current Lua environment is based on. For this module, this field always takes the string value `"http"`.
+--- For `ngx_stream_lua_module`, however, this field takes the value `"stream"`.
---@field subsystem '"http"'|'"stream"'
---
----
---- This field takes an integral value indicating the version number of the current Nginx core being used. For example, the version number `1.4.3` results in the Lua number 1004003.
---- This API was first introduced in the `0.9.3` release.
+--- This field takes an integral value indicating the version number of the current NGINX core being used. For example, the version number `1.4.3` results in the Lua number 1004003.
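---
--- For example (an illustrative sketch, not from the upstream docs):
---
--- ```lua
--- if ngx.config.nginx_version < 1019003 then
---     ngx.log(ngx.WARN, "this code path assumes nginx >= 1.19.3")
--- end
--- ```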
---@field nginx_version number
---
----
--- This field takes an integral value indicating the version number of the current `ngx_lua` module being used.
--- For example, the version number `0.9.3` results in the Lua number 9003.
---- This API was first introduced in the `0.9.3` release.
---@field ngx_lua_version number
---
----
---- This boolean field indicates whether the current Nginx is a debug build, i.e., being built by the `./configure` option `--with-debug`.
---- This field was first introduced in the `0.8.7`.
+--- This boolean field indicates whether the current NGINX is a debug build, i.e., being built by the `./configure` option `--with-debug`.
---@field debug boolean
---
ngx.config = {}
---- Returns the Nginx server "prefix" path, as determined by the `-p` command-line option when running the Nginx executable, or the path specified by the `--prefix` command-line option when building Nginx with the `./configure` script.
----
---- This function was first introduced in the `0.9.2`.
+--- Returns the NGINX server "prefix" path, as determined by the `-p` command-line option when running the NGINX executable, or the path specified by the `--prefix` command-line option when building NGINX with the `./configure` script.
---
---@return string
function ngx.config.prefix() end
---- This function returns a string for the Nginx `./configure` command's arguments string.
----
---- This API was first introduced in the `0.9.5` release.
+--- This function returns a string for the NGINX `./configure` command's arguments string.
---
---@return string
function ngx.config.nginx_configure() end
-ngx.timer={}
+ngx.timer = {}
---@alias ngx.timer.callback fun(premature:boolean, ...:any)
--- Returns the number of pending timers.
---
---- This directive was first introduced in the `v0.9.20` release.
----
---@return number
function ngx.timer.pending_count() end
--- Returns the number of timers currently running.
---
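--- For example (an illustrative sketch, not from the upstream docs), both counters can be
--- logged together to monitor timer usage:
---
--- ```lua
--- ngx.log(ngx.INFO, "pending timers: ", ngx.timer.pending_count(),
---         ", running timers: ", ngx.timer.running_count())
--- ```
---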
---- This directive was first introduced in the `v0.9.20` release.
----
----@return count
+---@return integer
function ngx.timer.running_count() end
---- Similar to the [ngx.timer.at](#ngxtimerat) API function, but
+--- Similar to the `ngx.timer.at` API function, but
---
--- 1. `delay` *cannot* be zero,
---- 1. timer will be created every `delay` seconds until the current Nginx worker process starts exiting.
+--- 2. a timer will be created every `delay` seconds until the current NGINX worker process starts exiting.
---
--- On success, returns a "conditional true" value (but not a `true`). Otherwise, returns a "conditional false" value and a string describing the error.
---
---- This API also respect the [lua_max_pending_timers](#lua_max_pending_timers) and [lua_max_running_timers](#lua_max_running_timers).
----
---- This API was first introduced in the `v0.10.9` release.
+--- This API also respects the `lua_max_pending_timers` and `lua_max_running_timers` directives.
---
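--- A minimal sketch (illustrative, not from the upstream docs) of a recurring timer:
---
--- ```lua
--- local ok, err = ngx.timer.every(5, function(premature, msg)
---     if premature then
---         return  -- the worker is shutting down
---     end
---     ngx.log(ngx.INFO, "tick: ", msg)
--- end, "hello")
---
--- if not ok then
---     ngx.log(ngx.ERR, "failed to create the timer: ", err)
--- end
--- ```
---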
---@param delay number the interval to execute the timer on
---@param callback ngx.timer.callback the function to call
@@ -678,7 +623,7 @@ function ngx.timer.running_count() end
---@return string? error
function ngx.timer.every(delay, callback, ...) end
---- Creates an Nginx timer with a user callback function as well as optional user arguments.
+--- Creates an NGINX timer with a user callback function as well as optional user arguments.
---
--- The first argument, `delay`, specifies the delay for the timer,
--- in seconds. One can specify fractional seconds like `0.001` to mean 1
@@ -689,32 +634,31 @@ function ngx.timer.every(delay, callback, ...) end
--- The second argument, `callback`, can
--- be any Lua function, which will be invoked later in a background
--- "light thread" after the delay specified. The user callback will be
---- called automatically by the Nginx core with the arguments `premature`,
+--- called automatically by the NGINX core with the arguments `premature`,
--- `user_arg1`, `user_arg2`, etc., where the `premature`
--- argument takes a boolean value indicating whether it is a premature timer
--- expiration or not, and `user_arg1`, `user_arg2`, etc., are
--- those (extra) user arguments specified when calling `ngx.timer.at`
--- as the remaining arguments.
---
---- Premature timer expiration happens when the Nginx worker process is
---- trying to shut down, as in an Nginx configuration reload triggered by
---- the `HUP` signal or in an Nginx server shutdown. When the Nginx worker
+--- Premature timer expiration happens when the NGINX worker process is
+--- trying to shut down, as in an NGINX configuration reload triggered by
+--- the `HUP` signal or in an NGINX server shutdown. When the NGINX worker
--- is trying to shut down, one can no longer call `ngx.timer.at` to
--- create new timers with nonzero delays and in that case `ngx.timer.at` will return a "conditional false" value and
--- a string describing the error, that is, "process exiting".
---
---- Starting from the `v0.9.3` release, it is allowed to create zero-delay timers even when the Nginx worker process starts shutting down.
+--- It is allowed to create zero-delay timers even when the NGINX worker process starts shutting down.
---
--- When a timer expires, the user Lua code in the timer callback is
--- running in a "light thread" detached completely from the original
--- request creating the timer. So objects with the same lifetime as the
---- request creating them, like [cosockets](#ngxsockettcp), cannot be shared between the
+--- request creating them, like `cosockets`, cannot be shared between the
--- original request and the timer user callback function.
---
--- Here is a simple example:
---
--- ```nginx
----
--- location / {
--- ...
--- log_by_lua_block {
@@ -737,7 +681,6 @@ function ngx.timer.every(delay, callback, ...) end
--- One can also create infinite re-occurring timers, for instance, a timer getting triggered every `5` seconds, by calling `ngx.timer.at` recursively in the timer callback function. Here is such an example,
---
--- ```lua
----
--- local delay = 5
--- local handler
--- handler = function (premature)
@@ -759,45 +702,43 @@ function ngx.timer.every(delay, callback, ...) end
--- end
--- ```
---
---- It is recommended, however, to use the [ngx.timer.every](#ngxtimerevery) API function
+--- It is recommended, however, to use the `ngx.timer.every` API function
--- instead for creating recurring timers since it is more robust.
---
--- Because timer callbacks run in the background and their running time
--- will not add to any client request's response time, they can easily
--- accumulate in the server and exhaust system resources due to either
--- Lua programming mistakes or just too much client traffic. To prevent
---- extreme consequences like crashing the Nginx server, there are
+--- extreme consequences like crashing the NGINX server, there are
--- built-in limitations on both the number of "pending timers" and the
---- number of "running timers" in an Nginx worker process. The "pending
+--- number of "running timers" in an NGINX worker process. The "pending
--- timers" here mean timers that have not yet been expired and "running
--- timers" are those whose user callbacks are currently running.
---
---- The maximal number of pending timers allowed in an Nginx
---- worker is controlled by the [lua_max_pending_timers](#lua_max_pending_timers)
+--- The maximal number of pending timers allowed in an NGINX
+--- worker is controlled by the `lua_max_pending_timers`
--- directive. The maximal number of running timers is controlled by the
---- [lua_max_running_timers](#lua_max_running_timers) directive.
+--- `lua_max_running_timers` directive.
---
--- According to the current implementation, each "running timer" will
--- take one (fake) connection record from the global connection record
---- list configured by the standard [worker_connections](http://nginx.org/en/docs/ngx_core_module.html#worker_connections) directive in
+--- list configured by the standard `worker_connections` directive in
--- `nginx.conf`. So ensure that the
---- [worker_connections](http://nginx.org/en/docs/ngx_core_module.html#worker_connections) directive is set to
+--- `worker_connections` directive is set to
--- a large enough value that takes into account both the real connections
--- and fake connections required by timer callbacks (as limited by the
---- [lua_max_running_timers](#lua_max_running_timers) directive).
----
---- A lot of the Lua APIs for Nginx are enabled in the context of the timer
---- callbacks, like stream/datagram cosockets ([ngx.socket.tcp](#ngxsockettcp) and [ngx.socket.udp](#ngxsocketudp)), shared
---- memory dictionaries ([ngx.shared.DICT](#ngxshareddict)), user coroutines ([coroutine.*](#coroutinecreate)),
---- user "light threads" ([ngx.thread.*](#ngxthreadspawn)), [ngx.exit](#ngxexit), [ngx.now](#ngxnow)/[ngx.time](#ngxtime),
---- [ngx.md5](#ngxmd5)/[ngx.sha1_bin](#ngxsha1_bin), are all allowed. But the subrequest API (like
---- [ngx.location.capture](#ngxlocationcapture)), the [ngx.req.*](#ngxreqstart_time) API, the downstream output API
---- (like [ngx.say](#ngxsay), [ngx.print](#ngxprint), and [ngx.flush](#ngxflush)) are explicitly disabled in
+--- `lua_max_running_timers` directive).
+---
+--- A lot of the Lua APIs for NGINX are enabled in the context of the timer
+--- callbacks, like stream/datagram cosockets (`ngx.socket.tcp` and `ngx.socket.udp`), shared
+--- memory dictionaries (`ngx.shared.DICT`), user coroutines (`coroutine.*`),
+--- user "light threads" (`ngx.thread.*`), `ngx.exit`, `ngx.now`/`ngx.time`,
+--- `ngx.md5`/`ngx.sha1_bin`, are all allowed. But the subrequest API (like
+--- `ngx.location.capture`), the `ngx.req.*` API, the downstream output API
+--- (like `ngx.say`, `ngx.print`, and `ngx.flush`) are explicitly disabled in
--- this context.
---
---- You can pass most of the standard Lua values (nils, booleans, numbers, strings, tables, closures, file handles, and etc) into the timer callback, either explicitly as user arguments or implicitly as upvalues for the callback closure. There are several exceptions, however: you *cannot* pass any thread objects returned by [coroutine.create](#coroutinecreate) and [ngx.thread.spawn](#ngxthreadspawn) or any cosocket objects returned by [ngx.socket.tcp](#ngxsockettcp), [ngx.socket.udp](#ngxsocketudp), and [ngx.req.socket](#ngxreqsocket) because these objects' lifetime is bound to the request context creating them while the timer callback is detached from the creating request's context (by design) and runs in its own (fake) request context. If you try to share the thread or cosocket objects across the boundary of the creating request, then you will get the "no co ctx found" error (for threads) or "bad request" (for cosockets). It is fine, however, to create all these objects inside your timer callback.
----
---- This API was first introduced in the `v0.8.0` release.
+--- You can pass most of the standard Lua values (nils, booleans, numbers, strings, tables, closures, file handles, etc.) into the timer callback, either explicitly as user arguments or implicitly as upvalues for the callback closure. There are several exceptions, however: you *cannot* pass any thread objects returned by `coroutine.create` and `ngx.thread.spawn` or any cosocket objects returned by `ngx.socket.tcp`, `ngx.socket.udp`, and `ngx.req.socket` because these objects' lifetime is bound to the request context creating them while the timer callback is detached from the creating request's context (by design) and runs in its own (fake) request context. If you try to share the thread or cosocket objects across the boundary of the creating request, then you will get the "no co ctx found" error (for threads) or "bad request" (for cosockets). It is fine, however, to create all these objects inside your timer callback.
---
---@param delay number
---@param callback ngx.timer.callback
@@ -812,18 +753,15 @@ function ngx.timer.at(delay, callback, ...) end
--- For example,
---
--- ```lua
----
--- ngx.say(ngx.unescape_uri("b%20r56+7"))
--- ```
---
--- gives the output
---
----
--- b r56 7
---
----
---@param str string
----@return str
+---@return string
function ngx.unescape_uri(str) end
--- Escape `str` as a URI component.
@@ -834,9 +772,7 @@ function ngx.escape_uri(str) end
--- Returns the binary form of the SHA-1 digest of the `str` argument.
---
---- This function requires SHA-1 support in the Nginx build. (This usually just means OpenSSL should be installed while building Nginx).
----
---- This function was first introduced in the `v0.5.0rc6`.
+--- This function requires SHA-1 support in the NGINX build. (This usually just means OpenSSL should be installed while building NGINX).
---
---@param str string
---@return string
@@ -844,11 +780,9 @@ function ngx.sha1_bin(str) end
--- Calculates the CRC-32 (Cyclic Redundancy Code) digest for the `str` argument.
---
---- This method performs better on relatively short `str` inputs (i.e., less than 30 ~ 60 bytes), as compared to [ngx.crc32_long](#ngxcrc32_long). The result is exactly the same as [ngx.crc32_long](#ngxcrc32_long).
----
---- Behind the scene, it is just a thin wrapper around the `ngx_crc32_short` function defined in the Nginx core.
+--- This method performs better on relatively short `str` inputs (i.e., less than 30 ~ 60 bytes), as compared to `ngx.crc32_long`. The result is exactly the same as `ngx.crc32_long`.
---
---- This API was first introduced in the `v0.3.1rc8` release.
+--- Behind the scenes, it is just a thin wrapper around the `ngx_crc32_short` function defined in the NGINX core.
---
---@param str string
---@return number
@@ -856,11 +790,9 @@ function ngx.crc32_short(str) end
--- Calculates the CRC-32 (Cyclic Redundancy Code) digest for the `str` argument.
---
---- This method performs better on relatively long `str` inputs (i.e., longer than 30 ~ 60 bytes), as compared to [ngx.crc32_short](#ngxcrc32_short). The result is exactly the same as [ngx.crc32_short](#ngxcrc32_short).
+--- This method performs better on relatively long `str` inputs (i.e., longer than 30 ~ 60 bytes), as compared to `ngx.crc32_short`. The result is exactly the same as `ngx.crc32_short`.
---
---- Behind the scene, it is just a thin wrapper around the `ngx_crc32_long` function defined in the Nginx core.
----
---- This API was first introduced in the `v0.3.1rc8` release.
+--- Behind the scenes, it is just a thin wrapper around the `ngx_crc32_long` function defined in the NGINX core.
---
---@param str string
---@return number
@@ -868,7 +800,7 @@ function ngx.crc32_long(str) end
--- Returns the binary form of the MD5 digest of the `str` argument.
---
---- See [ngx.md5](#ngxmd5) if the hexadecimal form of the MD5 digest is required.
+--- See `ngx.md5` if the hexadecimal form of the MD5 digest is required.
---
---@param str string
---@return string
@@ -879,7 +811,6 @@ function ngx.md5_bin(str) end
--- For example,
---
--- ```nginx
----
--- location = /md5 {
--- content_by_lua_block { ngx.say(ngx.md5("hello")) }
--- }
@@ -887,24 +818,21 @@ function ngx.md5_bin(str) end
---
--- yields the output
---
----
--- 5d41402abc4b2a76b9719d911017c592
---
----
---- See [ngx.md5_bin](#ngxmd5_bin) if the raw binary MD5 digest is required.
+--- See `ngx.md5_bin` if the raw binary MD5 digest is required.
---
---@param str string
---@return string
function ngx.md5(str) end
---- Computes the [HMAC-SHA1](https://en.wikipedia.org/wiki/HMAC) digest of the argument `str` and turns the result using the secret key `<secret_key>`.
+--- Computes the `HMAC-SHA1` digest of the argument `str` and returns the result using the secret key `<secret_key>`.
---
---- The raw binary form of the `HMAC-SHA1` digest will be generated, use [ngx.encode_base64](#ngxencode_base64), for example, to encode the result to a textual representation if desired.
+--- The raw binary form of the `HMAC-SHA1` digest will be generated, use `ngx.encode_base64`, for example, to encode the result to a textual representation if desired.
---
--- For example,
---
--- ```lua
----
--- local key = "thisisverysecretstuff"
--- local src = "some string we want to sign"
--- local digest = ngx.hmac_sha1(key, src)
@@ -913,14 +841,8 @@ function ngx.md5(str) end
---
--- yields the output
---
----
--- R/pvxzHC4NLtj7S+kXFg/NePTmk=
---
----
---- This API requires the OpenSSL library enabled in the Nginx build (usually by passing the `--with-http_ssl_module` option to the `./configure` script).
----
---- This function was first introduced in the `v0.3.1rc29` release.
----
---@param secret_key string
---@param str string
---@return string
@@ -967,7 +889,7 @@ ngx.re = {}
--- -- m[1] == "美好"
--- ```
---
---- The `o` option is useful for performance tuning, because the regex pattern in question will only be compiled once, cached in the worker-process level, and shared among all requests in the current Nginx worker process. The upper limit of the regex cache can be tuned via the `lua_regex_cache_max_entries` directive.
+--- The `o` option is useful for performance tuning, because the regex pattern in question will only be compiled once, cached in the worker-process level, and shared among all requests in the current NGINX worker process. The upper limit of the regex cache can be tuned via the `lua_regex_cache_max_entries` directive.
---
---@alias ngx.re.options string
@@ -1055,7 +977,6 @@ ngx.re = {}
--- Below is an example:
---
--- ```lua
----
--- local s = "hello, 1234"
--- local from, to, err = ngx.re.find(s, "([0-9]+)", "jo")
--- if from then
@@ -1179,7 +1100,6 @@ function ngx.re.gmatch(subject, regex, options) end
--- The optional fourth argument, `ctx`, can be a Lua table holding an optional `pos` field. When the `pos` field in the `ctx` table argument is specified, `ngx.re.match` will start matching from that offset (starting from 1). Regardless of the presence of the `pos` field in the `ctx` table, `ngx.re.match` will always set this `pos` field to the position *after* the substring matched by the whole pattern in case of a successful match. When match fails, the `ctx` table will be left intact.
---
--- ```lua
----
--- local ctx = {}
--- local m, err = ngx.re.match("1234, hello", "[0-9]+", "", ctx)
--- -- m[0] = "1234"
@@ -1187,7 +1107,6 @@ function ngx.re.gmatch(subject, regex, options) end
--- ```
---
--- ```lua
----
--- local ctx = { pos = 2 }
--- local m, err = ngx.re.match("1234, hello", "[0-9]+", "", ctx)
--- -- m[0] = "234"
@@ -1276,7 +1195,7 @@ function ngx.re.gsub(subject, regex, replace, options) end
--- Do not use backslashes to escape dollar signs; it will not work as expected.
---@alias ngx.re.replace.string string
---- When `replace` is a function, it will be invoked with the capture table as the argument to generate the replace string literal for substitution. The capture table fed into the `replace` function is exactly the same as the return value of [ngx.re.match](#ngxrematch). Here is an example:
+--- When `replace` is a function, it will be invoked with the capture table as the argument to generate the replace string literal for substitution. The capture table fed into the `replace` function is exactly the same as the return value of `ngx.re.match`. Here is an example:
---
--- ```lua
--- local func = function (m)
@@ -1315,279 +1234,169 @@ function ngx.decode_base64(str) end
--- Encodes `str` to a base64 digest.
---
---- Since the `0.9.16` release, an optional boolean-typed `no_padding` argument can be specified to control whether the base64 padding should be appended to the resulting digest (default to `false`, i.e., with padding enabled).
+--- An optional boolean-typed `no_padding` argument can be specified to control whether the base64 padding should be appended to the resulting digest (defaults to `false`, i.e., with padding enabled).
---
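--- For example (an illustrative sketch, not from the upstream docs):
---
--- ```lua
--- ngx.say(ngx.encode_base64("hello"))        -- yields "aGVsbG8="
--- ngx.say(ngx.encode_base64("hello", true))  -- yields "aGVsbG8" (no padding)
--- ```
---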
---@param str string
---@param no_padding? boolean
----@return string@newstr
+---@return string
function ngx.encode_base64(str, no_padding) end
---- Fetching the shm-based Lua dictionary object for the shared memory zone named `DICT` defined by the [lua_shared_dict](#lua_shared_dict) directive.
----
---- Shared memory zones are always shared by all the Nginx worker processes in the current Nginx server instance.
----
---- The resulting object `dict` has the following methods:
----
---- * [get](#ngxshareddictget)
---- * [get_stale](#ngxshareddictget_stale)
---- * [set](#ngxshareddictset)
---- * [safe_set](#ngxshareddictsafe_set)
---- * [add](#ngxshareddictadd)
---- * [safe_add](#ngxshareddictsafe_add)
---- * [replace](#ngxshareddictreplace)
---- * [delete](#ngxshareddictdelete)
---- * [incr](#ngxshareddictincr)
---- * [lpush](#ngxshareddictlpush)
---- * [rpush](#ngxshareddictrpush)
---- * [lpop](#ngxshareddictlpop)
---- * [rpop](#ngxshareddictrpop)
---- * [llen](#ngxshareddictllen)
---- * [ttl](#ngxshareddictttl)
---- * [expire](#ngxshareddictexpire)
---- * [flush_all](#ngxshareddictflush_all)
---- * [flush_expired](#ngxshareddictflush_expired)
---- * [get_keys](#ngxshareddictget_keys)
---- * [capacity](#ngxshareddictcapacity)
---- * [free_space](#ngxshareddictfree_space)
----
---- All these methods are *atomic* operations, that is, safe from concurrent accesses from multiple Nginx worker processes for the same `lua_shared_dict` zone.
----
---- Here is an example:
----
---- ```nginx
----
---- http {
---- lua_shared_dict dogs 10m;
---- server {
---- location /set {
---- content_by_lua_block {
---- local dogs = ngx.shared.dogs
---- dogs:set("Jim", 8)
---- ngx.say("STORED")
---- }
---- }
---- location /get {
---- content_by_lua_block {
---- local dogs = ngx.shared.dogs
---- ngx.say(dogs:get("Jim"))
---- }
---- }
---- }
---- }
---- ```
----
---- Let us test it:
----
---- ```bash
----
---- $ curl localhost/set
---- STORED
----
---- $ curl localhost/get
---- 8
----
---- $ curl localhost/get
---- 8
---- ```
----
---- The number `8` will be consistently output when accessing `/get` regardless of how many Nginx workers there are because the `dogs` dictionary resides in the shared memory and visible to *all* of the worker processes.
+--- Fetches the shm-based Lua dictionary object for the shared memory zone named `DICT` defined by the `lua_shared_dict` directive.
---
---- The shared dictionary will retain its contents through a server config reload (either by sending the `HUP` signal to the Nginx process or by using the `-s reload` command-line option).
+--- All of the resulting object's methods are *atomic* operations, that is, safe from concurrent accesses from multiple NGINX worker processes for the same `lua_shared_dict` zone.
---
---- The contents in the dictionary storage will be lost, however, when the Nginx server quits.
+--- The shared dictionary will retain its contents through a server config reload (either by sending the `HUP` signal to the NGINX process or by using the `-s reload` command-line option).
---
---- This feature was first introduced in the `v0.3.1rc22` release.
+--- The contents in the dictionary storage will be lost, however, when the NGINX server quits.
---
---@type table<string,ngx.shared.DICT>
-ngx.shared={}
+ngx.shared = {}
---@class ngx.shared.DICT
-local DICT={}
+local DICT = {}
---- Retrieving the value in the dictionary [ngx.shared.DICT](#ngxshareddict) for the key `key`. If the key does not exist or has expired, then `nil` will be returned.
+--- Retrieves the value for the key `key`. If the key does not exist or has expired, then `nil` will be returned.
---
--- In case of errors, `nil` and a string describing the error will be returned.
---
--- The value returned will have the original data type it had when it was inserted into the dictionary, for example, Lua booleans, numbers, or strings.
---
---- The first argument to this method must be the dictionary object itself, for example,
----
--- ```lua
----
---- local cats = ngx.shared.cats
---- local value, flags = cats.get(cats, "Marry")
---- ```
----
---- or use Lua's syntactic sugar for method calls:
----
---- ```lua
----
--- local cats = ngx.shared.cats
--- local value, flags = cats:get("Marry")
--- ```
---
---- These two forms are fundamentally equivalent.
----
--- If the user flags is `0` (the default), then no flags value will be returned.
---
---- This feature was first introduced in the `v0.3.1rc22` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
---@param key string
----@return any,number
+---@return any?
+---@return ngx.shared.DICT.flags|string|nil flags_or_error
function DICT:get(key) end
---- Similar to the [get](#ngxshareddictget) method but returns the value even if the key has already expired.
+--- Similar to the `get` method but returns the value even if the key has already expired.
---
--- Returns a 3rd value, `stale`, indicating whether the key has expired or not.
---
--- Note that the value of an expired key is not guaranteed to be available so one should never rely on the availability of expired items.
---
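--- An illustrative sketch (not from the upstream docs, assuming a `lua_shared_dict dogs`
--- zone) of falling back to a stale value:
---
--- ```lua
--- local dogs = ngx.shared.dogs
--- local value, flags, stale = dogs:get_stale("Jim")
--- if value ~= nil and stale then
---     ngx.log(ngx.WARN, "serving a stale value for Jim")
--- end
--- ```
---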
---- This method was first introduced in the `0.8.6` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@return any,number,boolean
+---@param key string
+---@return any? value
+---@return ngx.shared.DICT.flags|string flags_or_error
+---@return boolean stale
function DICT:get_stale(key) end
---- Unconditionally sets a key-value pair into the shm-based dictionary [ngx.shared.DICT](#ngxshareddict). Returns three values:
+---@alias ngx.shared.DICT.error string
+---| '"no memory"' # not enough available memory to store a value
+---| '"exists"' # called add() on an existing value
+---| '"not found"' # called a method (replace/ttl/expire) on an absent value
+---| '"not a number"' # called incr() on a non-number value
+---| '"value not a list"' # called list methods (lpush/lpop/rpush/rpop/llen) on a non-list value
+
+--- Optional user flags associated with a shm value.
---
---- * `success`: boolean value to indicate whether the key-value pair is stored or not.
---- * `err`: textual error message, can be `"no memory"`.
---- * `forcible`: a boolean value to indicate whether other valid items have been removed forcibly when out of storage in the shared memory zone.
+--- The user flags value is stored as an unsigned 32-bit integer internally. Defaults to `0`.
---
---- The `value` argument inserted can be Lua booleans, numbers, strings, or `nil`. Their value type will also be stored into the dictionary and the same data type can be retrieved later via the [get](#ngxshareddictget) method.
+---@alias ngx.shared.DICT.flags integer
+
+--- Expiration time of a shm value (in seconds).
---
---- The optional `exptime` argument specifies expiration time (in seconds) for the inserted key-value pair. The time resolution is `0.001` seconds. If the `exptime` takes the value `0` (which is the default), then the item will never expire.
+--- The time resolution is `0.001` seconds.
+--- If this value is set to `0` (the default), the shm value will never expire.
---
---- The optional `flags` argument specifies a user flags value associated with the entry to be stored. It can also be retrieved later with the value. The user flags is stored as an unsigned 32-bit integer internally. Defaults to `0`. The user flags argument was first introduced in the `v0.5.0rc2` release.
+---@alias ngx.shared.DICT.exptime number
+
+--- Unconditionally sets a key-value pair into the shm-based dictionary.
---
---- When it fails to allocate memory for the current key-value item, then `set` will try removing existing items in the storage according to the Least-Recently Used (LRU) algorithm. Note that, LRU takes priority over expiration time here. If up to tens of existing items have been removed and the storage left is still insufficient (either due to the total capacity limit specified by [lua_shared_dict](#lua_shared_dict) or memory segmentation), then the `err` return value will be `no memory` and `success` will be `false`.
+--- When it fails to allocate memory for the current key-value item, then `set` will try removing existing items in the storage according to the Least-Recently Used (LRU) algorithm. Note that, LRU takes priority over expiration time here. If up to tens of existing items have been removed and the storage left is still insufficient (either due to the total capacity limit specified by `lua_shared_dict` or memory segmentation), then the `err` return value will be `no memory` and `success` will be `false`.
---
--- If this method succeeds in storing the current item by forcibly removing other not-yet-expired items in the dictionary via LRU, the `forcible` return value will be `true`. If it stores the item without forcibly removing other valid items, then the return value `forcible` will be `false`.
---
---- The first argument to this method must be the dictionary object itself, for example,
----
--- ```lua
----
---- local cats = ngx.shared.cats
---- local succ, err, forcible = cats.set(cats, "Marry", "it is a nice cat!")
---- ```
----
---- or use Lua's syntactic sugar for method calls:
----
---- ```lua
----
--- local cats = ngx.shared.cats
--- local succ, err, forcible = cats:set("Marry", "it is a nice cat!")
--- ```
---
---- These two forms are fundamentally equivalent.
----
---- This feature was first introduced in the `v0.3.1rc22` release.
----
--- Please note that while internally the key-value pair is set atomically, the atomicity does not go across the method call boundary.
---
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value any
----@param exptime number
----@param flags number
----@return boolean,string,boolean@ success, err, forcible
+---@param key string
+---@param value any
+---@param exptime? ngx.shared.DICT.exptime
+---@param flags? ngx.shared.DICT.flags
+---@return boolean ok # whether the key-value pair is stored or not
+---@return ngx.shared.DICT.error? error
+---@return boolean forcible # indicates whether other valid items have been removed forcibly when out of storage in the shared memory zone.
function DICT:set(key, value, exptime, flags) end
---- Similar to the [set](#ngxshareddictset) method, but never overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone. In this case, it will immediately return `nil` and the string "no memory".
----
---- This feature was first introduced in the `v0.7.18` release.
+--- Similar to the `set` method, but never overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone. In this case, it will immediately return `nil` and the string "no memory".
---
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value any
----@param exptime number
----@param flags number
----@return boolean,string@ ok, err
+---@param key string
+---@param value any
+---@param exptime? ngx.shared.DICT.exptime
+---@param flags? ngx.shared.DICT.flags
+---@return boolean ok # whether the key-value pair is stored or not
+---@return ngx.shared.DICT.error? error
function DICT:safe_set(key, value, exptime, flags) end
---- Just like the [set](#ngxshareddictset) method, but only stores the key-value pair into the dictionary [ngx.shared.DICT](#ngxshareddict) if the key does *not* exist.
+--- Just like the `set` method, but only stores the key-value pair if the key does *not* exist.
---
--- If the `key` argument already exists in the dictionary (and not expired for sure), the `success` return value will be `false` and the `err` return value will be `"exists"`.
---
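--- An illustrative sketch (not from the upstream docs, assuming a `lua_shared_dict locks`
--- zone): because `add` fails when the key already exists, it can act as a simple
--- best-effort lock with a timeout:
---
--- ```lua
--- local locks = ngx.shared.locks
--- local ok, err = locks:add("refresh_cache", true, 10)  -- auto-expires after 10 seconds
--- if ok then
---     -- this request won the race; do the expensive refresh here
--- elseif err == "exists" then
---     -- another request or worker is already doing the refresh
--- else
---     ngx.log(ngx.ERR, "add failed: ", err)
--- end
--- ```
---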
---- This feature was first introduced in the `v0.3.1rc22` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value any
----@param exptime number
----@param flags number
----@return boolean,string,boolean@ success, err, forcible
+---@param key string
+---@param value any
+---@param exptime? ngx.shared.DICT.exptime
+---@param flags? ngx.shared.DICT.flags
+---@return boolean ok # whether the key-value pair is stored or not
+---@return ngx.shared.DICT.error? error
+---@return boolean forcible # indicates whether other valid items have been removed forcibly when out of storage in the shared memory zone.
function DICT:add(key, value, exptime, flags) end
---- Similar to the [add](#ngxshareddictadd) method, but never overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone. In this case, it will immediately return `nil` and the string "no memory".
----
---- This feature was first introduced in the `v0.7.18` release.
+--- Similar to the `add` method, but never overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone. In this case, it will immediately return `nil` and the string "no memory".
---
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value any
----@param exptime number
----@param flags number
----@return boolean,string@ ok, err
+---@param key string
+---@param value any
+---@param exptime? ngx.shared.DICT.exptime
+---@param flags? ngx.shared.DICT.flags
+---@return boolean ok # whether the key-value pair is stored or not
+---@return ngx.shared.DICT.error? error
function DICT:safe_add(key, value, exptime, flags) end
---- Just like the [set](#ngxshareddictset) method, but only stores the key-value pair into the dictionary [ngx.shared.DICT](#ngxshareddict) if the key *does* exist.
+--- Just like the `set` method, but only stores the key-value pair if the key *does* exist.
---
--- If the `key` argument does *not* exist in the dictionary (or expired already), the `success` return value will be `false` and the `err` return value will be `"not found"`.
---
---- This feature was first introduced in the `v0.3.1rc22` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value any
----@param exptime number
----@param flags number
----@return boolean,string,boolean@ success, err, forcible
+---@param key string
+---@param value any
+---@param exptime? ngx.shared.DICT.exptime
+---@param flags? ngx.shared.DICT.flags
+---@return boolean ok # whether the key-value pair is stored or not
+---@return ngx.shared.DICT.error? error
+---@return boolean forcible # indicates whether other valid items have been removed forcibly when out of storage in the shared memory zone.
function DICT:replace(key, value, exptime, flags) end
-
---- Unconditionally removes the key-value pair from the shm-based dictionary [ngx.shared.DICT](#ngxshareddict).
+--- Unconditionally removes the key-value pair.
---
--- It is equivalent to `ngx.shared.DICT:set(key, nil)`.
---
---- This feature was first introduced in the `v0.3.1rc22` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
---@param key string
----@return void
function DICT:delete(key) end
---- Increments the (numerical) value for `key` in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict) by the step value `value`. Returns the new resulting number if the operation is successfully completed or `nil` and an error message otherwise.
+--- Increments the (numerical) value for `key` by the step value `value`. Returns the new resulting number if the operation is successfully completed or `nil` and an error message otherwise.
---
--- When the key does not exist or has already expired in the shared dictionary,
---
--- 1. if the `init` argument is not specified or takes the value `nil`, this method will return `nil` and the error string `"not found"`, or
--- 1. if the `init` argument takes a number value, this method will create a new `key` with the value `init + value`.
---
---- Like the [add](#ngxshareddictadd) method, it also overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone.
----
---- The optional `init_ttl` argument specifies expiration time (in seconds) of the value when it is initialized via the `init` argument. The time resolution is `0.001` seconds. If `init_ttl` takes the value `0` (which is the default), then the item will never expire. This argument cannot be provided without providing the `init` argument as well, and has no effect if the value already exists (e.g., if it was previously inserted via [set](#ngxshareddictset) or the likes).
+--- Like the `add` method, it also overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone.
---
---- **Note:** Usage of the `init_ttl` argument requires the `resty.core.shdict` or `resty.core` modules from the [lua-resty-core](https://github.com/openresty/lua-resty-core) library. Example:
+--- The optional `init_ttl` argument specifies expiration time (in seconds) of the value when it is initialized via the `init` argument. This argument cannot be provided without providing the `init` argument as well, and has no effect if the value already exists (e.g., if it was previously inserted via `set` or the likes).
---
--- ```lua
----
---- require "resty.core"
----
--- local cats = ngx.shared.cats
--- local newval, err = cats:incr("black_cats", 1, 0, 0.1)
---
@@ -1607,101 +1416,79 @@ function DICT:delete(key) end
---
--- The `value` argument and `init` argument can be any valid Lua numbers, like negative numbers or floating-point numbers.
---
---- This method was first introduced in the `v0.3.1rc22` release.
----
---- The optional `init` parameter was first added in the `v0.10.6` release.
---
---- The optional `init_ttl` parameter was introduced in the `v0.10.12rc2` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value number
----@param init number
----@param init_ttl number
----@return number | nil ,string,boolean@newval, err, forcible
+---@param key string
+---@param value number
+---@param init number
+---@param init_ttl ngx.shared.DICT.exptime
+---@return number? new
+---@return ngx.shared.DICT.error? error
+---@return boolean forcible
function DICT:incr(key, value, init, init_ttl) end
---- Inserts the specified (numerical or string) `value` at the head of the list named `key` in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict). Returns the number of elements in the list after the push operation.
+--- Inserts the specified (numerical or string) `value` at the head of the list named `key`.
---
--- If `key` does not exist, it is created as an empty list before performing the push operation. When the `key` already takes a value that is not a list, it will return `nil` and `"value not a list"`.
---
--- It never overrides the (least recently used) unexpired items in the store when running out of storage in the shared memory zone. In this case, it will immediately return `nil` and the string "no memory".
---
---- This feature was first introduced in the `v0.10.6` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value string| number
----@return number ,string @length, err
+---@param key string
+---@param value string|number
+---@return number? len # number of elements in the list after the push operation
+---@return ngx.shared.DICT.error? error
function DICT:lpush(key, value) end
---- Similar to the [lpush](#ngxshareddictlpush) method, but inserts the specified (numerical or string) `value` at the tail of the list named `key`.
----
---- This feature was first introduced in the `v0.10.6` release.
+--- Similar to the `lpush` method, but inserts the specified (numerical or string) `value` at the tail of the list named `key`.
---
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param value any
----@return number ,string @length, err
+---@param key string
+---@param value string|number
+---@return number? len # number of elements in the list after the push operation
+---@return ngx.shared.DICT.error? error
function DICT:rpush(key, value) end
---- Removes and returns the first element of the list named `key` in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict).
+--- Removes and returns the first element of the list named `key`.
---
--- If `key` does not exist, it will return `nil`. When the `key` already takes a value that is not a list, it will return `nil` and `"value not a list"`.
---
---- This feature was first introduced in the `v0.10.6` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@return any,string @val,err
+---@param key string
+---@return any? value
+---@return ngx.shared.DICT.error? error
function DICT:lpop(key) end
---- Removes and returns the last element of the list named `key` in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict).
+--- Removes and returns the last element of the list named `key`.
---
--- If `key` does not exist, it will return `nil`. When the `key` already takes a value that is not a list, it will return `nil` and `"value not a list"`.
---
---- This feature was first introduced in the `v0.10.6` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@return any,string@val,err
+---@param key string
+---@return any? value
+---@return ngx.shared.DICT.error? error
function DICT:rpop(key) end
-
---- Returns the number of elements in the list named `key` in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict).
+--- Returns the number of elements in the list named `key`.
---
--- If `key` does not exist, it is interpreted as an empty list and 0 is returned. When the `key` already takes a value that is not a list, it will return `nil` and `"value not a list"`.
---
---- This feature was first introduced in the `v0.10.6` release.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
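+--- For illustration, a minimal sketch of the list operations (`lpush`, `rpush`, `lpop`, `rpop`, and `llen`) together, assuming a `lua_shared_dict cats 1m` zone (error returns are omitted for brevity):
+---
+--- ```lua
+--- local cats = ngx.shared.cats
+---
+--- cats:lpush("queue", "first")   -- list is now: first
+--- cats:rpush("queue", "second")  -- list is now: first, second
+---
+--- local len  = cats:llen("queue")  -- 2
+--- local head = cats:lpop("queue")  -- "first"
+--- local tail = cats:rpop("queue")  -- "second"
+--- ngx.say(len, " ", head, " ", tail)
+--- ```
+---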
---@param key string
----@return number,string @len,err
+---@return number? len
+---@return ngx.shared.DICT.error? error
function DICT:llen(key) end
---- Retrieves the remaining TTL (time-to-live in seconds) of a key-value pair in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict). Returns the TTL as a number if the operation is successfully completed or `nil` and an error message otherwise.
+--- Retrieves the remaining TTL (time-to-live in seconds) of a key-value pair.
+---
+--- Returns the TTL as a number if the operation is successfully completed or `nil` and an error message otherwise.
---
--- If the key does not exist (or has already expired), this method will return `nil` and the error string `"not found"`.
---
---- The TTL is originally determined by the `exptime` argument of the [set](#ngxshareddictset), [add](#ngxshareddictadd), [replace](#ngxshareddictreplace) (and the likes) methods. It has a time resolution of `0.001` seconds. A value of `0` means that the item will never expire.
+--- The TTL is originally determined by the `exptime` argument of the `set`, `add`, `replace` (and the likes) methods. It has a time resolution of `0.001` seconds. A value of `0` means that the item will never expire.
---
--- Example:
---
--- ```lua
----
---- require "resty.core"
----
--- local cats = ngx.shared.cats
--- local succ, err = cats:set("Marry", "a nice cat", 0.5)
---
@@ -1710,30 +1497,19 @@ function DICT:llen(key) end
--- local ttl, err = cats:ttl("Marry")
--- ngx.say(ttl) -- 0.3
--- ```
----
---- This feature was first introduced in the `v0.10.11` release.
----
---- **Note:** This method requires the `resty.core.shdict` or `resty.core` modules from the [lua-resty-core](https://github.com/openresty/lua-resty-core) library.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@return number,string @ttl,err
+---@param key string
+---@return number? ttl
+---@return ngx.shared.DICT.error? error
function DICT:ttl(key) end
---- Updates the `exptime` (in second) of a key-value pair in the shm-based dictionary [ngx.shared.DICT](#ngxshareddict). Returns a boolean indicating success if the operation completes or `nil` and an error message otherwise.
----
---- If the key does not exist, this method will return `nil` and the error string `"not found"`.
+--- Updates the `exptime` (in seconds) of a key-value pair.
---
---- The `exptime` argument has a resolution of `0.001` seconds. If `exptime` is `0`, then the item will never expire.
+--- Returns a boolean indicating success if the operation completes or `nil` and an error message otherwise.
---
---- Example:
+--- If the key does not exist, this method will return `nil` and the error string `"not found"`.
---
--- ```lua
----
---- require "resty.core"
----
--- local cats = ngx.shared.cats
--- local succ, err = cats:set("Marry", "a nice cat", 0.1)
---
@@ -1745,37 +1521,24 @@ function DICT:ttl(key) end
--- ngx.say(val) -- "a nice cat"
--- ```
---
---- This feature was first introduced in the `v0.10.11` release.
----
---- **Note:** This method requires the `resty.core.shdict` or `resty.core` modules from the [lua-resty-core](https://github.com/openresty/lua-resty-core) library.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
----@param key string
----@param exptime number
----@return boolean,string @success,err
+---@param key string
+---@param exptime ngx.shared.DICT.exptime
+---@return boolean ok
+---@return ngx.shared.DICT.error? error
function DICT:expire(key, exptime) end
--- Flushes out all the items in the dictionary. This method does not actually free up all the memory blocks in the dictionary but just marks all the existing items as expired.
---
---- This feature was first introduced in the `v0.5.0rc17` release.
----
---- See also [ngx.shared.DICT.flush_expired](#ngxshareddictflush_expired) and [ngx.shared.DICT](#ngxshareddict).
----
function DICT:flush_all() end
--- Flushes out the expired items in the dictionary, up to the maximum number specified by the optional `max_count` argument. When the `max_count` argument is given `0` or not given at all, there is no limit. Returns the number of items that have actually been flushed.
---
---- Unlike the [flush_all](#ngxshareddictflush_all) method, this method actually frees up the memory used by the expired items.
----
---- This feature was first introduced in the `v0.6.3` release.
----
---- See also [ngx.shared.DICT.flush_all](#ngxshareddictflush_all) and [ngx.shared.DICT](#ngxshareddict).
+--- Unlike the `flush_all` method, this method actually frees up the memory used by the expired items.
---
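+--- A small usage sketch (for example, from a recurring `ngx.timer.every` handler; the `cats` zone name is just an assumption):
+---
+--- ```lua
+--- local cats = ngx.shared.cats
+--- local flushed = cats:flush_expired(100)  -- free at most 100 expired items per run
+--- ngx.log(ngx.INFO, "freed ", flushed, " expired items")
+--- ```
+---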
---@param max_count number
----@return number@flushed
+---@return number flushed
function DICT:flush_expired(max_count) end
@@ -1783,45 +1546,28 @@ function DICT:flush_expired(max_count) end
---
--- By default, only the first 1024 keys (if any) are returned. When the `max_count` argument is given the value `0`, then all the keys will be returned even if there are more than 1024 keys in the dictionary.
---
---- **CAUTION** Avoid calling this method on dictionaries with a very large number of keys as it may lock the dictionary for significant amount of time and block Nginx worker processes trying to access the dictionary.
+--- **CAUTION** Avoid calling this method on dictionaries with a very large number of keys as it may lock the dictionary for a significant amount of time and block NGINX worker processes trying to access the dictionary.
---
---- This feature was first introduced in the `v0.7.3` release.
----
----@param max_count number
----@return string[]@keys
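+--- A small sketch (assuming the dictionary is known to stay small, per the caution above):
+---
+--- ```lua
+--- local cats = ngx.shared.cats
+--- local keys = cats:get_keys(0)  -- 0 means "no limit"; use with care
+--- for _, key in ipairs(keys) do
+---     ngx.say(key, " = ", tostring(cats:get(key)))
+--- end
+--- ```
+---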
+---@param max_count number
+---@return string[] keys
function DICT:get_keys(max_count) end
---- Retrieves the capacity in bytes for the shm-based dictionary [ngx.shared.DICT](#ngxshareddict) declared with
---- the [lua_shared_dict](#lua_shared_dict) directive.
----
---- Example:
+--- Retrieves the capacity in bytes for the shm-based dictionary.
---
--- ```lua
----
---- require "resty.core.shdict"
----
--- local cats = ngx.shared.cats
--- local capacity_bytes = cats:capacity()
--- ```
---
---- This feature was first introduced in the `v0.10.11` release.
----
---- **Note:** This method requires the `resty.core.shdict` or `resty.core` modules from the [lua-resty-core](https://github.com/openresty/lua-resty-core) library.
----
---- This feature requires at least Nginx core version `0.7.3`.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
---@return number
function DICT:capacity() end
---- Retrieves the free page size in bytes for the shm-based dictionary [ngx.shared.DICT](#ngxshareddict).
+--- Retrieves the free page size in bytes for the shm-based dictionary.
---
---- **Note:** The memory for ngx.shared.DICT is allocated via the Nginx slab allocator which has each slot for
---- data size ranges like \~8, 9\~16, 17\~32, ..., 1025\~2048, 2048\~ bytes. And pages are assigned to a slot if there
---- is no room in already assigned pages for the slot.
+--- **Note:** The memory for ngx.shared.DICT is allocated via the NGINX slab allocator which has each slot for
+--- data size ranges like \~8, 9\~16, 17\~32, ..., 1025\~2048, 2048\~ bytes. And pages are assigned to a slot if there is no room in already assigned pages for the slot.
---
--- So even if the return value of the `free_space` method is zero, there may be room in already assigned pages, so
--- you may successfully set a new key value pair to the shared dict without getting `true` for `forcible` or
@@ -1831,28 +1577,15 @@ function DICT:capacity() end
--- slot and there is no free page, you may get `true` for `forcible` or non-nil `err` from the
--- `ngx.shared.DICT.set` method.
---
---- Example:
----
--- ```lua
----
---- require "resty.core.shdict"
----
--- local cats = ngx.shared.cats
--- local free_page_bytes = cats:free_space()
--- ```
---
---- This feature was first introduced in the `v0.10.11` release.
----
---- **Note:** This method requires the `resty.core.shdict` or `resty.core` modules from the [lua-resty-core](https://github.com/openresty/lua-resty-core) library.
----
---- This feature requires at least Nginx core version `1.11.7`.
----
---- See also [ngx.shared.DICT](#ngxshareddict).
----
---@return number
function DICT:free_space() end
---- Read and write Nginx variable values.
+--- Read and write NGINX variable values.
---
--- Usage:
---
@@ -1861,11 +1594,10 @@ function DICT:free_space() end
--- ngx.var.some_nginx_variable_name = value
--- ```
---
---- Note that only already defined Nginx variables can be written to.
+--- Note that only already defined NGINX variables can be written to.
--- For example:
---
--- ```nginx
----
--- location /foo {
--- set $my_var ''; # this line is required to create $my_var at config time
--- content_by_lua_block {
@@ -1875,32 +1607,30 @@ function DICT:free_space() end
--- }
--- ```
---
---- That is, Nginx variables cannot be created on-the-fly.
+--- That is, NGINX variables cannot be created on-the-fly.
---
---- Some special Nginx variables like `$args` and `$limit_rate` can be assigned a value,
+--- Some special NGINX variables like `$args` and `$limit_rate` can be assigned a value,
--- many others are not, like `$query_string`, `$arg_PARAMETER`, and `$http_NAME`.
---
---- Nginx regex group capturing variables `$1`, `$2`, `$3`, and etc, can be read by this
+--- NGINX regex group capturing variables `$1`, `$2`, `$3`, etc., can be read by this
--- interface as well, by writing `ngx.var[1]`, `ngx.var[2]`, `ngx.var[3]`, etc.
---
---- Setting `ngx.var.Foo` to a `nil` value will unset the `$Foo` Nginx variable.
+--- Setting `ngx.var.Foo` to a `nil` value will unset the `$Foo` NGINX variable.
---
--- ```lua
----
--- ngx.var.args = nil
--- ```
---
---- **CAUTION** When reading from an Nginx variable, Nginx will allocate memory in the per-request memory pool which is freed only at request termination. So when you need to read from an Nginx variable repeatedly in your Lua code, cache the Nginx variable value to your own Lua variable, for example,
+--- **CAUTION** When reading from an NGINX variable, NGINX will allocate memory in the per-request memory pool which is freed only at request termination. So when you need to read from an NGINX variable repeatedly in your Lua code, cache the NGINX variable value to your own Lua variable, for example,
---
--- ```lua
----
--- local val = ngx.var.some_var
--- --- use the val repeatedly later
--- ```
---
---- to prevent (temporary) memory leaking within the current request's lifetime. Another way of caching the result is to use the [ngx.ctx](#ngxctx) table.
+--- to prevent (temporary) memory leaking within the current request's lifetime. Another way of caching the result is to use the `ngx.ctx` table.
---
---- Undefined Nginx variables are evaluated to `nil` while uninitialized (but defined) Nginx variables are evaluated to an empty Lua string.
+--- Undefined NGINX variables are evaluated to `nil` while uninitialized (but defined) NGINX variables are evaluated to an empty Lua string.
---
--- This API requires a relatively expensive metamethod call and it is recommended to avoid using it on hot code paths.
---
@@ -2114,10 +1844,9 @@ ngx.var.time_local = nil
---@type string
ngx.var.uri = nil
---- Updating query arguments via the Nginx variable `$args` (or `ngx.var.args` in Lua) at runtime is also supported:
+--- Updating query arguments via the NGINX variable `$args` (or `ngx.var.args` in Lua) at runtime is also supported:
---
--- ```lua
----
--- ngx.var.args = "a=3&b=42"
--- local args, err = ngx.req.get_uri_args()
--- ```
@@ -2125,7 +1854,6 @@ ngx.var.uri = nil
--- Here the `args` table will always look like
---
--- ```lua
----
--- {a = 3, b = 42}
--- ```
---
@@ -2198,12 +1926,10 @@ ngx.var.upstream_status = nil
ngx.req = {}
--- Returns a boolean indicating whether the current request is an "internal request", i.e.,
---- a request initiated from inside the current Nginx server instead of from the client side.
+--- a request initiated from inside the current NGINX server instead of from the client side.
---
--- Subrequests are all internal requests and so are requests after internal redirects.
---
---- This API was first introduced in the `v0.9.20` release.
----
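+--- A minimal sketch that branches on the request origin:
+---
+--- ```lua
+--- if ngx.req.is_internal() then
+---     ngx.say("called via a subrequest or internal redirect")
+--- else
+---     ngx.say("called directly by the client")
+--- end
+--- ```
+---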
---@return boolean
function ngx.req.is_internal() end
@@ -2211,28 +1937,21 @@ function ngx.req.is_internal() end
---
--- Current possible values are 2.0, 1.0, 1.1, and 0.9. Returns `nil` for unrecognized values.
---
---- This method was first introduced in the `v0.7.17` release.
----
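+--- A minimal sketch that rejects old protocol versions (the chosen status code is just an example):
+---
+--- ```lua
+--- local version = ngx.req.http_version()
+--- if not version or version < 1.1 then
+---     return ngx.exit(ngx.HTTP_BAD_REQUEST)
+--- end
+--- ```
+---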
---@return number? version # one of 2.0, 1.0, 1.1, 0.9, or nil for unrecognized values
function ngx.req.http_version() end
--- Set the current request's request body using the in-memory data specified by the `data` argument.
---
---- If the request body has not been read yet, call [ngx.req.read_body](#ngxreqread_body) first (or turn on [lua_need_request_body](#lua_need_request_body) to force this module to read the request body. This is not recommended however). Additionally, the request body must not have been previously discarded by [ngx.req.discard_body](#ngxreqdiscard_body).
+--- If the request body has not been read yet, call `ngx.req.read_body` first (or turn on `lua_need_request_body` to force this module to read the request body. This is not recommended however). Additionally, the request body must not have been previously discarded by `ngx.req.discard_body`.
---
--- Whether the previous request body has been read into memory or buffered into a disk file, it will be freed or the disk file will be cleaned up immediately, respectively.
---
---- This function was first introduced in the `v0.3.1rc18` release.
----
---- See also [ngx.req.set_body_file](#ngxreqset_body_file).
----
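+--- A minimal sketch that rewrites the upstream request body (e.g. in a `rewrite_by_lua_block` before `proxy_pass`):
+---
+--- ```lua
+--- ngx.req.read_body()
+--- ngx.req.set_body_data("foo=bar&baz=42")
+--- ```
+---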
---@param data any
function ngx.req.set_body_data(data) end
---- Returns a Lua table holding all the current request POST query arguments (of the MIME type `application/x-www-form-urlencoded`). Call [ngx.req.read_body](#ngxreqread_body) to read the request body first or turn on the [lua_need_request_body](#lua_need_request_body) directive to avoid errors.
+--- Returns a Lua table holding all the current request POST query arguments (of the MIME type `application/x-www-form-urlencoded`). Call `ngx.req.read_body` to read the request body first or turn on the `lua_need_request_body` directive to avoid errors.
---
--- ```nginx
----
--- location = /test {
--- content_by_lua_block {
--- ngx.req.read_body()
@@ -2260,7 +1979,6 @@ function ngx.req.set_body_data(data) end
--- Then
---
--- ```bash
----
--- # Post request with the body 'foo=bar&bar=baz&bar=blah'
--- $ curl --data 'foo=bar&bar=baz&bar=blah' localhost/test
--- ```
@@ -2268,7 +1986,6 @@ function ngx.req.set_body_data(data) end
--- will yield the response body like
---
--- ```bash
----
--- foo: bar
--- bar: baz, blah
--- ```
@@ -2280,7 +1997,6 @@ function ngx.req.set_body_data(data) end
--- With the settings above,
---
--- ```bash
----
--- # POST request with body 'a%20b=1%61+2'
--- $ curl -d 'a%20b=1%61+2' localhost/test
--- ```
@@ -2288,14 +2004,12 @@ function ngx.req.set_body_data(data) end
--- will yield:
---
--- ```bash
----
--- a b: 1a 2
--- ```
---
--- Arguments without the `=<value>` parts are treated as boolean arguments. `POST /test` with the request body `foo&bar` will yield:
---
--- ```bash
----
--- foo: true
--- bar: true
--- ```
@@ -2303,19 +2017,17 @@ function ngx.req.set_body_data(data) end
--- That is, they will take Lua boolean values `true`. However, they are different from arguments taking empty string values. `POST /test` with request body `foo=&bar=` will return something like
---
--- ```bash
----
--- foo:
--- bar:
--- ```
---
--- Empty key arguments are discarded. `POST /test` with body `=hello&=world` will yield empty outputs for instance.
---
---- Note that a maximum of 100 request arguments are parsed by default (including those with the same name) and that additional request arguments are silently discarded to guard against potential denial of service attacks. Since `v0.10.13`, when the limit is exceeded, it will return a second value which is the string `"truncated"`.
+--- Note that a maximum of 100 request arguments are parsed by default (including those with the same name) and that additional request arguments are silently discarded to guard against potential denial of service attacks. When the limit is exceeded, it will return a second value which is the string `"truncated"`.
---
--- However, the optional `max_args` function argument can be used to override this limit:
---
--- ```lua
----
--- local args, err = ngx.req.get_post_args(10)
--- if err == "truncated" then
--- -- one can choose to ignore or reject the current request here
@@ -2325,20 +2037,19 @@ function ngx.req.set_body_data(data) end
--- This argument can be set to zero to remove the limit and to process all request arguments received:
---
--- ```lua
----
--- local args, err = ngx.req.get_post_args(0)
--- ```
---
--- Removing the `max_args` cap is strongly discouraged.
---
---@param max_args number
----@return table,string@args,err
+---@return table args
+---@return string|'"truncated"' error
function ngx.req.get_post_args(max_args) end
--- Returns a Lua table holding all the current request URL query arguments.
---
--- ```nginx
----
--- location = /test {
--- content_by_lua_block {
--- local args, err = ngx.req.get_uri_args()
@@ -2361,7 +2072,6 @@ function ngx.req.get_post_args(max_args) end
--- Then `GET /test?foo=bar&bar=baz&bar=blah` will yield the response body
---
--- ```bash
----
--- foo: bar
--- bar: baz, blah
--- ```
@@ -2371,14 +2081,12 @@ function ngx.req.get_post_args(max_args) end
--- Keys and values are unescaped according to URI escaping rules. In the settings above, `GET /test?a%20b=1%61+2` will yield:
---
--- ```bash
----
--- a b: 1a 2
--- ```
---
--- Arguments without the `=<value>` parts are treated as boolean arguments. `GET /test?foo&bar` will yield:
---
--- ```bash
----
--- foo: true
--- bar: true
--- ```
@@ -2386,17 +2094,15 @@ function ngx.req.get_post_args(max_args) end
--- That is, they will take Lua boolean values `true`. However, they are different from arguments taking empty string values. `GET /test?foo=&bar=` will give something like
---
--- ```bash
----
--- foo:
--- bar:
--- ```
---
--- Empty key arguments are discarded. `GET /test?=hello&=world` will yield an empty output for instance.
---
---- Updating query arguments via the Nginx variable `$args` (or `ngx.var.args` in Lua) at runtime is also supported:
+--- Updating query arguments via the NGINX variable `$args` (or `ngx.var.args` in Lua) at runtime is also supported:
---
--- ```lua
----
--- ngx.var.args = "a=3&b=42"
--- local args, err = ngx.req.get_uri_args()
--- ```
@@ -2404,18 +2110,16 @@ function ngx.req.get_post_args(max_args) end
--- Here the `args` table will always look like
---
--- ```lua
----
--- {a = 3, b = 42}
--- ```
---
--- regardless of the actual request query string.
---
---- Note that a maximum of 100 request arguments are parsed by default (including those with the same name) and that additional request arguments are silently discarded to guard against potential denial of service attacks. Since `v0.10.13`, when the limit is exceeded, it will return a second value which is the string `"truncated"`.
+--- Note that a maximum of 100 request arguments are parsed by default (including those with the same name) and that additional request arguments are silently discarded to guard against potential denial of service attacks. When the limit is exceeded, it will return a second value which is the string `"truncated"`.
---
--- However, the optional `max_args` function argument can be used to override this limit:
---
--- ```lua
----
--- local args, err = ngx.req.get_uri_args(10)
--- if err == "truncated" then
--- -- one can choose to ignore or reject the current request here
@@ -2425,63 +2129,57 @@ function ngx.req.get_post_args(max_args) end
--- This argument can be set to zero to remove the limit and to process all request arguments received:
---
--- ```lua
----
--- local args, err = ngx.req.get_uri_args(0)
--- ```
---
--- Removing the `max_args` cap is strongly discouraged.
---
---@param max_args number
----@return table,string@args, err
+---@return table args
+---@return string|'"truncated"' error
function ngx.req.get_uri_args(max_args) end
--- Rewrite the current request's (parsed) URI by the `uri` argument. The `uri` argument must be a Lua string and cannot be of zero length, or a Lua exception will be thrown.
---
---- The optional boolean `jump` argument can trigger location rematch (or location jump) as [ngx_http_rewrite_module](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html)'s [rewrite](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite) directive, that is, when `jump` is `true` (default to `false`), this function will never return and it will tell Nginx to try re-searching locations with the new URI value at the later `post-rewrite` phase and jumping to the new location.
+--- The optional boolean `jump` argument can trigger a location rematch (or location jump) like `ngx_http_rewrite_module`'s `rewrite` directive does: when `jump` is `true` (it defaults to `false`), this function will never return and it will tell NGINX to re-search locations with the new URI value at the later `post-rewrite` phase and jump to the new location.
---
--- Location jump will not be triggered otherwise, and only the current request's URI will be modified, which is also the default behavior. This function will return but with no returned values when the `jump` argument is `false` or absent altogether.
---
---- For example, the following Nginx config snippet
+--- For example, the following NGINX config snippet
---
--- ```nginx
----
--- rewrite ^ /foo last;
--- ```
---
--- can be coded in Lua like this:
---
--- ```lua
----
--- ngx.req.set_uri("/foo", true)
--- ```
---
---- Similarly, Nginx config
+--- Similarly, NGINX config
---
--- ```nginx
----
--- rewrite ^ /foo break;
--- ```
---
--- can be coded in Lua as
---
--- ```lua
----
--- ngx.req.set_uri("/foo", false)
--- ```
---
--- or equivalently,
---
--- ```lua
----
--- ngx.req.set_uri("/foo")
--- ```
---
---- The `jump` argument can only be set to `true` in [rewrite_by_lua*](#rewrite_by_lua). Use of jump in other contexts is prohibited and will throw out a Lua exception.
+--- The `jump` argument can only be set to `true` in `rewrite_by_lua*`. Use of jump in other contexts is prohibited and will throw out a Lua exception.
---
--- A more sophisticated example involving regex substitutions is as follows
---
--- ```nginx
----
--- location /test {
--- rewrite_by_lua_block {
--- local uri = ngx.re.sub(ngx.var.uri, "^/test/(.*)", "/$1", "o")
@@ -2494,7 +2192,6 @@ function ngx.req.get_uri_args(max_args) end
--- which is functionally equivalent to
---
--- ```nginx
----
--- location /test {
--- rewrite ^/test/(.*) /$1 break;
--- proxy_pass http://my_backend;
@@ -2504,17 +2201,15 @@ function ngx.req.get_uri_args(max_args) end
--- Note: this function throws a Lua error if the `uri` argument
--- contains unsafe characters (control characters).
---
---- Note that it is not possible to use this interface to rewrite URI arguments and that [ngx.req.set_uri_args](#ngxreqset_uri_args) should be used for this instead. For instance, Nginx config
+--- Note that it is not possible to use this interface to rewrite URI arguments and that `ngx.req.set_uri_args` should be used for this instead. For instance, NGINX config
---
--- ```nginx
----
--- rewrite ^ /foo?a=3? last;
--- ```
---
--- can be coded as
---
--- ```lua
----
--- ngx.req.set_uri_args("a=3")
--- ngx.req.set_uri("/foo", true)
--- ```
@@ -2522,170 +2217,128 @@ function ngx.req.get_uri_args(max_args) end
--- or
---
--- ```lua
----
--- ngx.req.set_uri_args({a = 3})
--- ngx.req.set_uri("/foo", true)
--- ```
---
---- Starting from `0.10.16` of this module, this function accepts an
---- optional boolean `binary` argument to allow arbitrary binary URI
---- data. By default, this `binary` argument is false and this function
---- will throw out a Lua error such as the one below when the `uri`
---- argument contains any control characters (ASCII Code 0 ~ 0x08, 0x0A ~ 0x1F and 0x7F).
----
+--- An optional boolean `binary` argument allows arbitrary binary URI data. By default, this `binary` argument is false and this function will throw out a Lua error such as the one below when the `uri` argument contains any control characters (ASCII Code 0 ~ 0x08, 0x0A ~ 0x1F and 0x7F).
---
--- [error] 23430#23430: *1 lua entry thread aborted: runtime error:
--- content_by_lua(nginx.conf:44):3: ngx.req.set_uri unsafe byte "0x00"
--- in "\x00foo" (maybe you want to set the 'binary' argument?)
---
----
---- This interface was first introduced in the `v0.3.1rc14` release.
----
---@param uri string
---@param jump boolean
---@param binary boolean
function ngx.req.set_uri(uri, jump, binary) end
---- Append new data chunk specified by the `data_chunk` argument onto the existing request body created by the [ngx.req.init_body](#ngxreqinit_body) call.
----
---- When the data can no longer be hold in the memory buffer for the request body, then the data will be flushed onto a temporary file just like the standard request body reader in the Nginx core.
----
---- It is important to always call the [ngx.req.finish_body](#ngxreqfinish_body) after all the data has been appended onto the current request body.
+--- Append new data chunk specified by the `data_chunk` argument onto the existing request body created by the `ngx.req.init_body` call.
---
---- This function can be used with [ngx.req.init_body](#ngxreqinit_body), [ngx.req.finish_body](#ngxreqfinish_body), and [ngx.req.socket](#ngxreqsocket) to implement efficient input filters in pure Lua (in the context of [rewrite_by_lua*](#rewrite_by_lua) or [access_by_lua*](#access_by_lua)), which can be used with other Nginx content handler or upstream modules like [ngx_http_proxy_module](http://nginx.org/en/docs/http/ngx_http_proxy_module.html) and [ngx_http_fastcgi_module](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html).
+--- When the data can no longer be held in the memory buffer for the request body, it will be flushed onto a temporary file just like the standard request body reader in the NGINX core.
---
---- This function was first introduced in the `v0.5.11` release.
+--- It is important to always call `ngx.req.finish_body` after all the data has been appended onto the current request body.
---
---- See also [ngx.req.init_body](#ngxreqinit_body).
+--- This function can be used with `ngx.req.init_body`, `ngx.req.finish_body`, and `ngx.req.socket` to implement efficient input filters in pure Lua (in the context of `rewrite_by_lua*` or `access_by_lua*`), which can be used with other NGINX content handler or upstream modules like `ngx_http_proxy_module` and `ngx_http_fastcgi_module`.
---
---@param data_chunk any
function ngx.req.append_body(data_chunk) end
---- Overrides the current request's request method with the `method_id` argument. Currently only numerical [method constants](#http-method-constants) are supported, like `ngx.HTTP_POST` and `ngx.HTTP_GET`.
+--- Overrides the current request's request method with the `method_id` argument. Currently only numerical method constants are supported, like `ngx.HTTP_POST` and `ngx.HTTP_GET`.
---
---- If the current request is an Nginx subrequest, then the subrequest's method will be overridden.
----
---- This method was first introduced in the `v0.5.6` release.
----
---- See also [ngx.req.get_method](#ngxreqget_method).
+--- If the current request is an NGINX subrequest, then the subrequest's method will be overridden.
---
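+--- A minimal sketch that forces `GET` requests to be passed on as `POST` (see also `ngx.req.get_method` below):
+---
+--- ```lua
+--- if ngx.req.get_method() == "GET" then
+---     ngx.req.set_method(ngx.HTTP_POST)
+--- end
+--- ```
+---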
---@param method_id ngx.http.method
function ngx.req.set_method(method_id) end
---- Retrieves the current request's request method name. Strings like `"GET"` and `"POST"` are returned instead of numerical [method constants](#http-method-constants).
----
---- If the current request is an Nginx subrequest, then the subrequest's method name will be returned.
----
---- This method was first introduced in the `v0.5.6` release.
+--- Retrieves the current request's request method name. Strings like `"GET"` and `"POST"` are returned instead of numerical method constants.
---
---- See also [ngx.req.set_method](#ngxreqset_method).
+--- If the current request is an NGINX subrequest, then the subrequest's method name will be returned.
---
----@return string@method_name
+---@return string
function ngx.req.get_method() end
---- Returns a read-only cosocket object that wraps the downstream connection. Only [receive](#tcpsockreceive) and [receiveuntil](#tcpsockreceiveuntil) methods are supported on this object.
+--- Returns a read-only cosocket object that wraps the downstream connection. Only `receive` and `receiveuntil` methods are supported on this object.
---
--- In case of error, `nil` will be returned as well as a string describing the error.
---
---- The socket object returned by this method is usually used to read the current request's body in a streaming fashion. Do not turn on the [lua_need_request_body](#lua_need_request_body) directive, and do not mix this call with [ngx.req.read_body](#ngxreqread_body) and [ngx.req.discard_body](#ngxreqdiscard_body).
+--- The socket object returned by this method is usually used to read the current request's body in a streaming fashion. Do not turn on the `lua_need_request_body` directive, and do not mix this call with `ngx.req.read_body` and `ngx.req.discard_body`.
---
---- If any request body data has been pre-read into the Nginx core request header buffer, the resulting cosocket object will take care of this to avoid potential data loss resulting from such pre-reading.
+--- If any request body data has been pre-read into the NGINX core request header buffer, the resulting cosocket object will take care of this to avoid potential data loss resulting from such pre-reading.
--- Chunked request bodies are not yet supported in this API.
---
---- Since the `v0.9.0` release, this function accepts an optional boolean `raw` argument. When this argument is `true`, this function returns a full-duplex cosocket object wrapping around the raw downstream connection socket, upon which you can call the [receive](#tcpsockreceive), [receiveuntil](#tcpsockreceiveuntil), and [send](#tcpsocksend) methods.
----
---- When the `raw` argument is `true`, it is required that no pending data from any previous [ngx.say](#ngxsay), [ngx.print](#ngxprint), or [ngx.send_headers](#ngxsend_headers) calls exists. So if you have these downstream output calls previously, you should call [ngx.flush(true)](#ngxflush) before calling `ngx.req.socket(true)` to ensure that there is no pending output data. If the request body has not been read yet, then this "raw socket" can also be used to read the request body.
+--- An optional boolean `raw` argument can be provided. When this argument is `true`, this function returns a full-duplex cosocket object wrapping around the raw downstream connection socket, upon which you can call the `receive`, `receiveuntil`, and `send` methods.
---
---- You can use the "raw request socket" returned by `ngx.req.socket(true)` to implement fancy protocols like [WebSocket](https://en.wikipedia.org/wiki/WebSocket), or just emit your own raw HTTP response header or body data. You can refer to the [lua-resty-websocket library](https://github.com/openresty/lua-resty-websocket) for a real world example.
+--- When the `raw` argument is `true`, it is required that no pending data from any previous `ngx.say`, `ngx.print`, or `ngx.send_headers` calls exists. So if you have these downstream output calls previously, you should call `ngx.flush(true)` before calling `ngx.req.socket(true)` to ensure that there is no pending output data. If the request body has not been read yet, then this "raw socket" can also be used to read the request body.
---
---- This function was first introduced in the `v0.5.0rc1` release.
+--- You can use the "raw request socket" returned by `ngx.req.socket(true)` to implement fancy protocols like `WebSocket`, or just emit your own raw HTTP response header or body data. You can refer to the `lua-resty-websocket library` for a real world example.
---
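+--- A minimal sketch that streams the request body in small chunks (the 4096-byte chunk size is an arbitrary choice):
+---
+--- ```lua
+--- local sock, err = ngx.req.socket()
+--- if not sock then
+---     ngx.log(ngx.ERR, "failed to get the request socket: ", err)
+---     return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
+--- end
+---
+--- local remaining = tonumber(ngx.var.http_content_length) or 0
+--- while remaining > 0 do
+---     local chunk, err = sock:receive(math.min(remaining, 4096))
+---     if not chunk then
+---         ngx.log(ngx.ERR, "failed to read a body chunk: ", err)
+---         break
+---     end
+---     remaining = remaining - #chunk
+---     -- process the chunk here
+--- end
+--- ```
+---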
---@param raw boolean
----@return tcpsock,string @tcpsock,err
+---@return tcpsock? socket
+---@return string? error
function ngx.req.socket(raw) end
---- Completes the construction process of the new request body created by the [ngx.req.init_body](#ngxreqinit_body) and [ngx.req.append_body](#ngxreqappend_body) calls.
+--- Completes the construction process of the new request body created by the `ngx.req.init_body` and `ngx.req.append_body` calls.
---
---- This function can be used with [ngx.req.init_body](#ngxreqinit_body), [ngx.req.append_body](#ngxreqappend_body), and [ngx.req.socket](#ngxreqsocket) to implement efficient input filters in pure Lua (in the context of [rewrite_by_lua*](#rewrite_by_lua) or [access_by_lua*](#access_by_lua)), which can be used with other Nginx content handler or upstream modules like [ngx_http_proxy_module](http://nginx.org/en/docs/http/ngx_http_proxy_module.html) and [ngx_http_fastcgi_module](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html).
----
---- This function was first introduced in the `v0.5.11` release.
----
---- See also [ngx.req.init_body](#ngxreqinit_body).
+--- This function can be used with `ngx.req.init_body`, `ngx.req.append_body`, and `ngx.req.socket` to implement efficient input filters in pure Lua (in the context of `rewrite_by_lua*` or `access_by_lua*`), which can be used with other NGINX content handler or upstream modules like `ngx_http_proxy_module` and `ngx_http_fastcgi_module`.
---
function ngx.req.finish_body() end
---- Returns the original raw HTTP protocol header received by the Nginx server.
+--- Returns the original raw HTTP protocol header received by the NGINX server.
---
--- By default, the request line and trailing `CR LF` terminator will also be included. For example,
---
--- ```lua
----
--- ngx.print(ngx.req.raw_header())
--- ```
---
--- gives something like this:
---
----
--- GET /t HTTP/1.1
--- Host: localhost
--- Connection: close
--- Foo: bar
---
----
----
--- You can specify the optional
--- `no_request_line` argument as a `true` value to exclude the request line from the result. For example,
---
--- ```lua
----
--- ngx.print(ngx.req.raw_header(true))
--- ```
---
--- outputs something like this:
---
----
--- Host: localhost
--- Connection: close
--- Foo: bar
---
----
----
---- This method was first introduced in the `v0.7.17` release.
----
--- This method does not work in HTTP/2 requests yet.
---
---@param no_request_line boolean
----@return string @str
+---@return string
function ngx.req.raw_header(no_request_line) end
--- Returns a floating-point number representing the timestamp (including milliseconds as the decimal part) when the current request was created.
---
---- The following example emulates the `$request_time` variable value (provided by [ngx_http_log_module](http://nginx.org/en/docs/http/ngx_http_log_module.html)) in pure Lua:
+--- The following example emulates the `$request_time` variable value (provided by `ngx_http_log_module`) in pure Lua:
---
--- ```lua
----
--- local request_time = ngx.now() - ngx.req.start_time()
--- ```
---
---- This function was first introduced in the `v0.7.7` release.
----
---- See also [ngx.now](#ngxnow) and [ngx.update_time](#ngxupdate_time).
----
---@return number
function ngx.req.start_time() end
---- Creates a new blank request body for the current request and inializes the buffer for later request body data writing via the [ngx.req.append_body](#ngxreqappend_body) and [ngx.req.finish_body](#ngxreqfinish_body) APIs.
+--- Creates a new blank request body for the current request and initializes the buffer for later request body data writing via the `ngx.req.append_body` and `ngx.req.finish_body` APIs.
---
---- If the `buffer_size` argument is specified, then its value will be used for the size of the memory buffer for body writing with [ngx.req.append_body](#ngxreqappend_body). If the argument is omitted, then the value specified by the standard [client_body_buffer_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size) directive will be used instead.
+--- If the `buffer_size` argument is specified, then its value will be used for the size of the memory buffer for body writing with `ngx.req.append_body`. If the argument is omitted, then the value specified by the standard `client_body_buffer_size` directive will be used instead.
---
---- When the data can no longer be hold in the memory buffer for the request body, then the data will be flushed onto a temporary file just like the standard request body reader in the Nginx core.
+--- When the data can no longer be held in the memory buffer for the request body, it will be flushed onto a temporary file just like the standard request body reader in the NGINX core.
---
---- It is important to always call the [ngx.req.finish_body](#ngxreqfinish_body) after all the data has been appended onto the current request body. Also, when this function is used together with [ngx.req.socket](#ngxreqsocket), it is required to call [ngx.req.socket](#ngxreqsocket) *before* this function, or you will get the "request body already exists" error message.
+--- It is important to always call `ngx.req.finish_body` after all the data has been appended onto the current request body. Also, when this function is used together with `ngx.req.socket`, it is required to call `ngx.req.socket` *before* this function, or you will get the "request body already exists" error message.
---
--- The usage of this function is often like this:
---
--- ```lua
----
--- ngx.req.init_body(128 * 1024) -- buffer is 128KB
--- for chunk in next_data_chunk() do
--- ngx.req.append_body(chunk) -- each chunk can be 4KB
@@ -2693,27 +2346,21 @@ function ngx.req.start_time() end
--- ngx.req.finish_body()
--- ```
---
---- This function can be used with [ngx.req.append_body](#ngxreqappend_body), [ngx.req.finish_body](#ngxreqfinish_body), and [ngx.req.socket](#ngxreqsocket) to implement efficient input filters in pure Lua (in the context of [rewrite_by_lua*](#rewrite_by_lua) or [access_by_lua*](#access_by_lua)), which can be used with other Nginx content handler or upstream modules like [ngx_http_proxy_module](http://nginx.org/en/docs/http/ngx_http_proxy_module.html) and [ngx_http_fastcgi_module](http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html).
----
---- This function was first introduced in the `v0.5.11` release.
+--- This function can be used with `ngx.req.append_body`, `ngx.req.finish_body`, and `ngx.req.socket` to implement efficient input filters in pure Lua (in the context of `rewrite_by_lua*` or `access_by_lua*`), which can be used with other NGINX content handler or upstream modules like `ngx_http_proxy_module` and `ngx_http_fastcgi_module`.
---
---@param buffer_size number
-function ngx.req.init_body( buffer_size) end
+function ngx.req.init_body(buffer_size) end
--- Set the current request's request body using the in-file data specified by the `file_name` argument.
---
---- If the request body has not been read yet, call [ngx.req.read_body](#ngxreqread_body) first (or turn on [lua_need_request_body](#lua_need_request_body) to force this module to read the request body. This is not recommended however). Additionally, the request body must not have been previously discarded by [ngx.req.discard_body](#ngxreqdiscard_body).
+--- If the request body has not been read yet, call `ngx.req.read_body` first (or turn on `lua_need_request_body` to force this module to read the request body. This is not recommended however). Additionally, the request body must not have been previously discarded by `ngx.req.discard_body`.
---
---- If the optional `auto_clean` argument is given a `true` value, then this file will be removed at request completion or the next time this function or [ngx.req.set_body_data](#ngxreqset_body_data) are called in the same request. The `auto_clean` is default to `false`.
+--- If the optional `auto_clean` argument is given a `true` value, then this file will be removed at request completion or the next time this function or `ngx.req.set_body_data` is called in the same request. The `auto_clean` argument defaults to `false`.
---
---- Please ensure that the file specified by the `file_name` argument exists and is readable by an Nginx worker process by setting its permission properly to avoid Lua exception errors.
+--- Please ensure that the file specified by the `file_name` argument exists and is readable by an NGINX worker process by setting its permissions properly to avoid Lua exception errors.
---
--- Whether the previous request body has been read into memory or buffered into a disk file, it will be freed or the disk file will be cleaned up immediately, respectively.
---
---- This function was first introduced in the `v0.3.1rc18` release.
----
---- See also [ngx.req.set_body_data](#ngxreqset_body_data).
----
---@param file_name string
---@param auto_clean boolean
function ngx.req.set_body_file(file_name, auto_clean) end
@@ -2726,7 +2373,6 @@ function ngx.req.clear_header(header_name) end
--- Returns a Lua table holding all the current request headers.
---
--- ```lua
----
--- local h, err = ngx.req.get_headers()
---
--- if err == "truncated" then
@@ -2741,16 +2387,14 @@ function ngx.req.clear_header(header_name) end
--- To read an individual header:
---
--- ```lua
----
--- ngx.say("Host: ", ngx.req.get_headers()["Host"])
--- ```
---
---- Note that the [ngx.var.HEADER](#ngxvarvariable) API call, which uses core [$http_HEADER](http://nginx.org/en/docs/http/ngx_http_core_module.html#var_http_) variables, may be more preferable for reading individual request headers.
+--- Note that the `ngx.var.HEADER` API call, which uses core `$http_HEADER` variables, may be preferable for reading individual request headers.
---
--- For multiple instances of request headers such as:
---
--- ```bash
----
--- Foo: foo
--- Foo: bar
--- Foo: baz
@@ -2759,16 +2403,14 @@ function ngx.req.clear_header(header_name) end
--- the value of `ngx.req.get_headers()["Foo"]` will be a Lua (array) table such as:
---
--- ```lua
----
--- {"foo", "bar", "baz"}
--- ```
---
---- Note that a maximum of 100 request headers are parsed by default (including those with the same name) and that additional request headers are silently discarded to guard against potential denial of service attacks. Since `v0.10.13`, when the limit is exceeded, it will return a second value which is the string `"truncated"`.
+--- Note that a maximum of 100 request headers are parsed by default (including those with the same name) and that additional request headers are silently discarded to guard against potential denial of service attacks. When the limit is exceeded, it will return a second value which is the string `"truncated"`.
---
--- However, the optional `max_headers` function argument can be used to override this limit:
---
--- ```lua
----
--- local headers, err = ngx.req.get_headers(10)
---
--- if err == "truncated" then
@@ -2779,18 +2421,16 @@ function ngx.req.clear_header(header_name) end
--- This argument can be set to zero to remove the limit and to process all request headers received:
---
--- ```lua
----
--- local headers, err = ngx.req.get_headers(0)
--- ```
---
--- Removing the `max_headers` cap is strongly discouraged.
---
---- Since the `0.6.9` release, all the header names in the Lua table returned are converted to the pure lower-case form by default, unless the `raw` argument is set to `true` (default to `false`).
+--- All the header names in the Lua table returned are converted to the pure lower-case form by default, unless the `raw` argument is set to `true` (default to `false`).
---
--- Also, by default, an `__index` metamethod is added to the resulting Lua table and will normalize the keys to a pure lowercase form with all underscores converted to dashes in case of a lookup miss. For example, if a request header `My-Foo-Header` is present, then the following invocations will all pick up the value of this header correctly:
---
--- ```lua
----
--- ngx.say(headers.my_foo_header)
--- ngx.say(headers["My-Foo-Header"])
--- ngx.say(headers["my-foo-header"])
@@ -2800,7 +2440,8 @@ function ngx.req.clear_header(header_name) end
---
---@param max_headers number
---@param raw boolean
----@return table,string@headers, err
+---@return table<string, string|string[]> headers
+---@return string|'"truncated"' error
function ngx.req.get_headers(max_headers, raw) end
--- Explicitly discard the request body, i.e., read the data on the connection and throw it away immediately (without using the request body by any means).
@@ -2809,20 +2450,15 @@ function ngx.req.get_headers(max_headers, raw) end
---
--- If the request body has already been read, this function does nothing and returns immediately.
---
---- This function was first introduced in the `v0.3.1rc17` release.
----
---- See also [ngx.req.read_body](#ngxreqread_body).
----
function ngx.req.discard_body() end
--- Set the current request's request header named `header_name` to value `header_value`, overriding any existing ones.
---
---- By default, all the subrequests subsequently initiated by [ngx.location.capture](#ngxlocationcapture) and [ngx.location.capture_multi](#ngxlocationcapture_multi) will inherit the new header.
+--- By default, all the subrequests subsequently initiated by `ngx.location.capture` and `ngx.location.capture_multi` will inherit the new header.
---
--- Here is an example of setting the `Content-Type` header:
---
--- ```lua
----
--- ngx.req.set_header("Content-Type", "text/css")
--- ```
---
@@ -2830,14 +2466,12 @@ function ngx.req.discard_body() end
--- for example,
---
--- ```lua
----
--- ngx.req.set_header("Foo", {"a", "abc"})
--- ```
---
--- will produce two new request headers:
---
--- ```bash
----
--- Foo: a
--- Foo: abc
--- ```
@@ -2847,22 +2481,20 @@ function ngx.req.discard_body() end
--- When the `header_value` argument is `nil`, the request header will be removed. So
---
--- ```lua
----
--- ngx.req.set_header("X-Foo", nil)
--- ```
---
--- is equivalent to
---
--- ```lua
----
--- ngx.req.clear_header("X-Foo")
--- ```
---
---@param header_name string
----@param header_value string | string[]
+---@param header_value string|string[]
function ngx.req.set_header(header_name, header_value) end
---- Retrieves in-memory request body data. It returns a Lua string rather than a Lua table holding all the parsed query arguments. Use the [ngx.req.get_post_args](#ngxreqget_post_args) function instead if a Lua table is required.
+--- Retrieves in-memory request body data. It returns a Lua string rather than a Lua table holding all the parsed query arguments. Use the `ngx.req.get_post_args` function instead if a Lua table is required.
---
--- This function returns `nil` if
---
@@ -2870,75 +2502,61 @@ function ngx.req.set_header(header_name, header_value) end
--- 1. the request body has been read into disk temporary files,
--- 1. or the request body has zero size.
---
---- If the request body has not been read yet, call [ngx.req.read_body](#ngxreqread_body) first (or turn on [lua_need_request_body](#lua_need_request_body) to force this module to read the request body. This is not recommended however).
+--- If the request body has not been read yet, call `ngx.req.read_body` first (or turn on `lua_need_request_body` to force this module to read the request body. This is not recommended however).
---
---- If the request body has been read into disk files, try calling the [ngx.req.get_body_file](#ngxreqget_body_file) function instead.
+--- If the request body has been read into disk files, try calling the `ngx.req.get_body_file` function instead.
---
---- To force in-memory request bodies, try setting [client_body_buffer_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size) to the same size value in [client_max_body_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
+--- To force in-memory request bodies, try setting `client_body_buffer_size` to the same size value in `client_max_body_size`.
---
--- Note that calling this function instead of using `ngx.var.request_body` or `ngx.var.echo_request_body` is more efficient because it can save one dynamic memory allocation and one data copy.
---
---- This function was first introduced in the `v0.3.1rc17` release.
----
---- See also [ngx.req.get_body_file](#ngxreqget_body_file).
----
----@return any@data
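+--- A minimal sketch of the common read-then-fetch pattern (falling back to the on-disk file when the body was buffered to disk):
+---
+--- ```lua
+--- ngx.req.read_body()
+---
+--- local body = ngx.req.get_body_data()
+--- if not body then
+---     local file = ngx.req.get_body_file()
+---     if file then
+---         ngx.log(ngx.NOTICE, "request body buffered to file: ", file)
+---     end
+--- end
+--- ```
+---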
+---@return string?
function ngx.req.get_body_data() end
---- Reads the client request body synchronously without blocking the Nginx event loop.
+--- Reads the client request body synchronously without blocking the NGINX event loop.
---
--- ```lua
----
--- ngx.req.read_body()
--- local args = ngx.req.get_post_args()
--- ```
---
---- If the request body is already read previously by turning on [lua_need_request_body](#lua_need_request_body) or by using other modules, then this function does not run and returns immediately.
+--- If the request body is already read previously by turning on `lua_need_request_body` or by using other modules, then this function does not run and returns immediately.
---
---- If the request body has already been explicitly discarded, either by the [ngx.req.discard_body](#ngxreqdiscard_body) function or other modules, this function does not run and returns immediately.
+--- If the request body has already been explicitly discarded, either by the `ngx.req.discard_body` function or other modules, this function does not run and returns immediately.
---
--- In case of errors, such as connection errors while reading the data, this method will throw out a Lua exception *or* terminate the current request with a 500 status code immediately.
---
---- The request body data read using this function can be retrieved later via [ngx.req.get_body_data](#ngxreqget_body_data) or, alternatively, the temporary file name for the body data cached to disk using [ngx.req.get_body_file](#ngxreqget_body_file). This depends on
----
---- 1. whether the current request body is already larger than the [client_body_buffer_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size),
---- 1. and whether [client_body_in_file_only](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_in_file_only) has been switched on.
+--- The request body data read using this function can be retrieved later via `ngx.req.get_body_data` or, alternatively, the temporary file name for the body data cached to disk using `ngx.req.get_body_file`. This depends on
---
---- In cases where current request may have a request body and the request body data is not required, The [ngx.req.discard_body](#ngxreqdiscard_body) function must be used to explicitly discard the request body to avoid breaking things under HTTP 1.1 keepalive or HTTP 1.1 pipelining.
----
---- This function was first introduced in the `v0.3.1rc17` release.
+--- 1. whether the current request body is already larger than the `client_body_buffer_size`,
+--- 1. and whether `client_body_in_file_only` has been switched on.
---
+--- In cases where the current request may have a request body and the request body data is not required, the `ngx.req.discard_body` function must be used to explicitly discard the request body to avoid breaking things under HTTP 1.1 keepalive or HTTP 1.1 pipelining.
---
function ngx.req.read_body() end
--- Retrieves the file name for the in-file request body data. Returns `nil` if the request body has not been read or has been read into memory.
---
---- The returned file is read only and is usually cleaned up by Nginx's memory pool. It should not be manually modified, renamed, or removed in Lua code.
----
---- If the request body has not been read yet, call [ngx.req.read_body](#ngxreqread_body) first (or turn on [lua_need_request_body](#lua_need_request_body) to force this module to read the request body. This is not recommended however).
----
---- If the request body has been read into memory, try calling the [ngx.req.get_body_data](#ngxreqget_body_data) function instead.
+--- The returned file is read only and is usually cleaned up by NGINX's memory pool. It should not be manually modified, renamed, or removed in Lua code.
---
---- To force in-file request bodies, try turning on [client_body_in_file_only](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_in_file_only).
+--- If the request body has not been read yet, call `ngx.req.read_body` first (or turn on `lua_need_request_body` to force this module to read the request body; this is not recommended, however).
---
---- This function was first introduced in the `v0.3.1rc17` release.
+--- If the request body has been read into memory, try calling the `ngx.req.get_body_data` function instead.
---
---- See also [ngx.req.get_body_data](#ngxreqget_body_data).
+--- To force in-file request bodies, try turning on `client_body_in_file_only`.
---
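+--- A minimal usage sketch (assuming the request actually carries a body and that
+--- it ends up buffered to disk, e.g. because `client_body_in_file_only` is on):
+---
+--- ```lua
+--- ngx.req.read_body()
+---
+--- local file = ngx.req.get_body_file()
+--- if file then
+---     ngx.say("request body is buffered in file: ", file)
+--- else
+---     ngx.say("request body is in memory (or empty)")
+--- end
+--- ```
+---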
----@return string@file_name
+---@return string? filename
function ngx.req.get_body_file() end
--- Rewrite the current request's URI query arguments by the `args` argument. The `args` argument can be either a Lua string, as in
---
--- ```lua
----
--- ngx.req.set_uri_args("a=3&b=hello%20world")
--- ```
---
--- or a Lua table holding the query arguments' key-value pairs, as in
---
--- ```lua
----
--- ngx.req.set_uri_args({ a = 3, b = "hello world" })
--- ```
---
@@ -2947,17 +2565,12 @@ function ngx.req.get_body_file() end
--- Multi-value arguments are also supported:
---
--- ```lua
----
--- ngx.req.set_uri_args({ a = 3, b = {5, 6} })
--- ```
---
--- which will result in a query string like `a=3&b=5&b=6`.
---
---- This interface was first introduced in the `v0.3.1rc13` release.
----
---- See also [ngx.req.set_uri](#ngxreqset_uri).
----
----@param args string | table<string,any>
+---@param args string|table
function ngx.req.set_uri_args(args) end
--- Encode the Lua table to a query args string according to the URI encoded rules.
@@ -2965,85 +2578,71 @@ function ngx.req.set_uri_args(args) end
--- For example,
---
--- ```lua
----
--- ngx.encode_args({foo = 3, ["b r"] = "hello world"})
--- ```
---
--- yields
---
----
--- foo=3&b%20r=hello%20world
---
----
--- The table keys must be Lua strings.
---
--- Multi-value query args are also supported. Just use a Lua table for the argument's value, for example:
---
--- ```lua
----
--- ngx.encode_args({baz = {32, "hello"}})
--- ```
---
--- gives
---
----
--- baz=32&baz=hello
---
----
--- If the value table is empty, the effect is equivalent to the `nil` value.
---
--- Boolean argument values are also supported, for instance,
---
--- ```lua
----
--- ngx.encode_args({a = true, b = 1})
--- ```
---
--- yields
---
----
--- a&b=1
---
----
--- If the argument value is `false`, then the effect is equivalent to the `nil` value.
---
---- This method was first introduced in the `v0.3.1rc27` release.
----
----@param table table
----@return string
-function ngx.encode_args(table) end
+---@param args table
+---@return string encoded
+function ngx.encode_args(args) end
---- Decodes a URI encoded query-string into a Lua table. This is the inverse function of [ngx.encode_args](#ngxencode_args).
+--- Decodes a URI encoded query-string into a Lua table. This is the inverse function of `ngx.encode_args`.
---
---- The optional `max_args` argument can be used to specify the maximum number of arguments parsed from the `str` argument. By default, a maximum of 100 request arguments are parsed (including those with the same name) and that additional URI arguments are silently discarded to guard against potential denial of service attacks. Since `v0.10.13`, when the limit is exceeded, it will return a second value which is the string `"truncated"`.
+--- The optional `max_args` argument can be used to specify the maximum number of arguments parsed from the `str` argument. By default, a maximum of 100 request arguments are parsed (including those with the same name), and additional URI arguments are silently discarded to guard against potential denial of service attacks. When the limit is exceeded, a second value is returned: the string `"truncated"`.
---
--- This argument can be set to zero to remove the limit and to process all request arguments received:
---
--- ```lua
----
--- local args = ngx.decode_args(str, 0)
--- ```
---
--- Removing the `max_args` cap is strongly discouraged.
---
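+--- For illustration, a hedged sketch of decoding a query string and checking for
+--- truncation (the query string below is just an example):
+---
+--- ```lua
+--- local args, err = ngx.decode_args("a=1&b=2&b=3")
+--- if err == "truncated" then
+---     ngx.log(ngx.WARN, "too many arguments; result was truncated")
+--- end
+---
+--- ngx.say(args.a)     -- prints "1"
+--- ngx.say(args.b[1])  -- multi-value args become tables; prints "2"
+--- ```
+---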
---- This method was introduced in the `v0.5.0rc29`.
----
----@param str string
----@param max_args number
----@return table,string
+---@param str string
+---@param max_args number
+---@return table args
+---@return string|'"truncated"' error
function ngx.decode_args(str, max_args) end
-ngx.socket={}
+ngx.socket = {}
---@class udpsock
-local udpsock={}
+local udpsock = {}
--- Attempts to connect a UDP socket object to a remote server or to a datagram unix domain socket file. Because the datagram protocol is actually connection-less, this method does not really establish a "connection", but only just set the name of the remote peer for subsequent read/write operations.
---
---- Both IP addresses and domain names can be specified as the `host` argument. In case of domain names, this method will use Nginx core's dynamic resolver to parse the domain name without blocking and it is required to configure the [resolver](http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) directive in the `nginx.conf` file like this:
+--- Both IP addresses and domain names can be specified as the `host` argument. In case of domain names, this method will use NGINX core's dynamic resolver to parse the domain name without blocking and it is required to configure the `resolver` directive in the `nginx.conf` file like this:
---
--- ```nginx
----
--- resolver 8.8.8.8; # use Google's public DNS nameserver
--- ```
---
@@ -3054,7 +2653,6 @@ local udpsock={}
--- Here is an example for connecting to a UDP (memcached) server:
---
--- ```nginx
----
--- location /test {
--- resolver 8.8.8.8;
---
@@ -3071,10 +2669,9 @@ local udpsock={}
--- }
--- ```
---
---- Since the `v0.7.18` release, connecting to a datagram unix domain socket file is also possible on Linux:
+--- Connecting to a datagram unix domain socket file is also possible on Linux:
---
--- ```lua
----
--- local sock = ngx.socket.udp()
--- local ok, err = sock:setpeername("unix:/tmp/some-datagram-service.sock")
--- if not ok then
@@ -3087,23 +2684,21 @@ local udpsock={}
---
--- Calling this method on an already connected socket object will cause the original connection to be closed first.
---
---- This method was first introduced in the `v0.5.7` release.
----
---@param host string
---@param port number
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function udpsock:setpeername(host, port) end
--- Sends data on the current UDP or datagram unix domain socket object.
---
--- In case of success, it returns `1`. Otherwise, it returns `nil` and a string describing the error.
---
---- The input argument `data` can either be a Lua string or a (nested) Lua table holding string fragments. In case of table arguments, this method will copy all the string elements piece by piece to the underlying Nginx socket send buffers, which is usually optimal than doing string concatenation operations on the Lua land.
----
---- This feature was first introduced in the `v0.5.7` release.
+--- The input argument `data` can either be a Lua string or a (nested) Lua table holding string fragments. In case of table arguments, this method will copy all the string elements piece by piece to the underlying NGINX socket send buffers, which is usually more efficient than doing string concatenation operations in Lua land.
---
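+--- A small, hedged sketch (the payload is illustrative only):
+---
+--- ```lua
+--- -- passing a table of string fragments avoids concatenation in Lua land
+--- local ok, err = sock:send({ "get ", "some_key", "\r\n" })
+--- if not ok then
+---     ngx.log(ngx.ERR, "failed to send: ", err)
+--- end
+--- ```
+---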
---@param data string | string[]
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function udpsock:send(data) end
--- Receives data from the UDP or datagram unix domain socket object with an optional receive buffer size argument, `size`.
@@ -3116,10 +2711,9 @@ function udpsock:send(data) end
---
--- If no argument is specified, then the maximal buffer size, `8192` is assumed.
---
---- Timeout for the reading operation is controlled by the [lua_socket_read_timeout](#lua_socket_read_timeout) config directive and the [settimeout](#udpsocksettimeout) method. And the latter takes priority. For example:
+--- Timeout for the reading operation is controlled by the `lua_socket_read_timeout` config directive and the `settimeout` method. And the latter takes priority. For example:
---
--- ```lua
----
--- sock:settimeout(1000) -- one second timeout
--- local data, err = sock:receive()
--- if not data then
@@ -3129,12 +2723,11 @@ function udpsock:send(data) end
--- ngx.say("successfully read a packet: ", data)
--- ```
---
---- It is important here to call the [settimeout](#udpsocksettimeout) method *before* calling this method.
+--- It is important here to call the `settimeout` method *before* calling this method.
---
---- This feature was first introduced in the `v0.5.7` release.
----
----@param size number@optional param
----@return string,string@data,err
+---@param size? number
+---@return string? data
+---@return string? error
function udpsock:receive(size) end
@@ -3142,42 +2735,39 @@ function udpsock:receive(size) end
---
--- Socket objects that have not invoked this method (and associated connections) will be closed when the socket object is released by the Lua GC (Garbage Collector) or the current client HTTP request finishes processing.
---
---- This feature was first introduced in the `v0.5.7` release.
----
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function udpsock:close() end
---- Set the timeout value in milliseconds for subsequent socket operations (like [receive](#udpsockreceive)).
+--- Set the timeout value in milliseconds for subsequent socket operations (like `receive`).
---
---- Settings done by this method takes priority over those config directives, like [lua_socket_read_timeout](#lua_socket_read_timeout).
----
---- This feature was first introduced in the `v0.5.7` release.
+--- Settings done by this method take priority over those config directives, like `lua_socket_read_timeout`.
---
---@param time number
function udpsock:settimeout(time) end
--- Creates and returns a TCP or stream-oriented unix domain socket object (also known as one type of the "cosocket" objects). The following methods are supported on this object:
---
---- * [connect](#tcpsockconnect)
---- * [sslhandshake](#tcpsocksslhandshake)
---- * [send](#tcpsocksend)
---- * [receive](#tcpsockreceive)
---- * [close](#tcpsockclose)
---- * [settimeout](#tcpsocksettimeout)
---- * [settimeouts](#tcpsocksettimeouts)
---- * [setoption](#tcpsocksetoption)
---- * [receiveany](#tcpsockreceiveany)
---- * [receiveuntil](#tcpsockreceiveuntil)
---- * [setkeepalive](#tcpsocksetkeepalive)
---- * [getreusedtimes](#tcpsockgetreusedtimes)
+--- * `connect`
+--- * `sslhandshake`
+--- * `send`
+--- * `receive`
+--- * `close`
+--- * `settimeout`
+--- * `settimeouts`
+--- * `setoption`
+--- * `receiveany`
+--- * `receiveuntil`
+--- * `setkeepalive`
+--- * `getreusedtimes`
---
---- It is intended to be compatible with the TCP API of the [LuaSocket](http://w3.impa.br/~diego/software/luasocket/tcp.html) library but is 100% nonblocking out of the box. Also, we introduce some new APIs to provide more functionalities.
+--- It is intended to be compatible with the TCP API of the `LuaSocket` library but is 100% nonblocking out of the box.
---
---- The cosocket object created by this API function has exactly the same lifetime as the Lua handler creating it. So never pass the cosocket object to any other Lua handler (including ngx.timer callback functions) and never share the cosocket object between different Nginx requests.
+--- The cosocket object created by this API function has exactly the same lifetime as the Lua handler creating it. So never pass the cosocket object to any other Lua handler (including ngx.timer callback functions) and never share the cosocket object between different NGINX requests.
---
--- For every cosocket object's underlying connection, if you do not
---- explicitly close it (via [close](#tcpsockclose)) or put it back to the connection
---- pool (via [setkeepalive](#tcpsocksetkeepalive)), then it is automatically closed when one of
+--- explicitly close it (via `close`) or put it back to the connection
+--- pool (via `setkeepalive`), then it is automatically closed when one of
--- the following two events happens:
---
--- * the current request handler completes, or
@@ -3185,29 +2775,24 @@ function udpsock:settimeout(time) end
---
--- Fatal errors in cosocket operations always automatically close the current
--- connection (note that, read timeout error is the only error that is
---- not fatal), and if you call [close](#tcpsockclose) on a closed connection, you will get
+--- not fatal), and if you call `close` on a closed connection, you will get
--- the "closed" error.
---
---- Starting from the `0.9.9` release, the cosocket object here is full-duplex, that is, a reader "light thread" and a writer "light thread" can operate on a single cosocket object simultaneously (both "light threads" must belong to the same Lua handler though, see reasons above). But you cannot have two "light threads" both reading (or writing or connecting) the same cosocket, otherwise you might get an error like "socket busy reading" when calling the methods of the cosocket object.
----
---- This feature was first introduced in the `v0.5.0rc1` release.
----
---- See also [ngx.socket.udp](#ngxsocketudp).
+--- The cosocket object here is full-duplex, that is, a reader "light thread" and a writer "light thread" can operate on a single cosocket object simultaneously (both "light threads" must belong to the same Lua handler though, see reasons above). But you cannot have two "light threads" both reading (or writing or connecting) the same cosocket, otherwise you might get an error like "socket busy reading" when calling the methods of the cosocket object.
---
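+--- A minimal, hedged usage sketch (the address, port, and payload are placeholders):
+---
+--- ```lua
+--- local sock = ngx.socket.tcp()
+--- sock:settimeout(1000)  -- one second
+---
+--- local ok, err = sock:connect("127.0.0.1", 6379)
+--- if not ok then
+---     ngx.log(ngx.ERR, "failed to connect: ", err)
+---     return
+--- end
+---
+--- local bytes, err = sock:send("PING\r\n")
+--- local line, err = sock:receive("*l")
+---
+--- sock:setkeepalive(10000, 100)  -- return the connection to the pool
+--- ```
+---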
---@return tcpsock
function ngx.socket.tcp() end
---@class tcpsock
-local tcpsock={}
+local tcpsock = {}
--- Attempts to connect a TCP socket object to a remote server or to a stream unix domain socket file without blocking.
---
---- Before actually resolving the host name and connecting to the remote backend, this method will always look up the connection pool for matched idle connections created by previous calls of this method (or the [ngx.socket.connect](#ngxsocketconnect) function).
+--- Before actually resolving the host name and connecting to the remote backend, this method will always look up the connection pool for matched idle connections created by previous calls of this method (or the `ngx.socket.connect` function).
---
---- Both IP addresses and domain names can be specified as the `host` argument. In case of domain names, this method will use Nginx core's dynamic resolver to parse the domain name without blocking and it is required to configure the [resolver](http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) directive in the `nginx.conf` file like this:
+--- Both IP addresses and domain names can be specified as the `host` argument. In case of domain names, this method will use NGINX core's dynamic resolver to parse the domain name without blocking and it is required to configure the `resolver` directive in the `nginx.conf` file like this:
---
--- ```nginx
----
--- resolver 8.8.8.8; # use Google's public DNS nameserver
--- ```
---
@@ -3218,7 +2803,6 @@ local tcpsock={}
--- Here is an example for connecting to a TCP server:
---
--- ```nginx
----
--- location /test {
--- resolver 8.8.8.8;
---
@@ -3238,7 +2822,6 @@ local tcpsock={}
--- Connecting to a Unix Domain Socket file is also possible:
---
--- ```lua
----
--- local sock = ngx.socket.tcp()
--- local ok, err = sock:connect("unix:/tmp/memcached.sock")
--- if not ok then
@@ -3249,70 +2832,37 @@ local tcpsock={}
---
--- assuming memcached (or something else) is listening on the unix domain socket file `/tmp/memcached.sock`.
---
---- Timeout for the connecting operation is controlled by the [lua_socket_connect_timeout](#lua_socket_connect_timeout) config directive and the [settimeout](#tcpsocksettimeout) method. And the latter takes priority. For example:
+--- Timeout for the connecting operation is controlled by the `lua_socket_connect_timeout` config directive and the `settimeout` method. And the latter takes priority. For example:
---
--- ```lua
----
--- local sock = ngx.socket.tcp()
--- sock:settimeout(1000) -- one second timeout
--- local ok, err = sock:connect(host, port)
--- ```
---
---- It is important here to call the [settimeout](#tcpsocksettimeout) method *before* calling this method.
+--- It is important here to call the `settimeout` method *before* calling this method.
---
--- Calling this method on an already connected socket object will cause the original connection to be closed first.
---
---- An optional Lua table can be specified as the last argument to this method to specify various connect options:
----
---- * `pool`
---- specify a custom name for the connection pool being used. If omitted, then the connection pool name will be generated from the string template `"<host>:<port>"` or `"<unix-socket-path>"`.
----
---- * `pool_size`
---- specify the size of the connection pool. If omitted and no
---- `backlog` option was provided, no pool will be created. If omitted
---- but `backlog` was provided, the pool will be created with a default
---- size equal to the value of the [lua_socket_pool_size](#lua_socket_pool_size)
---- directive.
---- The connection pool holds up to `pool_size` alive connections
---- ready to be reused by subsequent calls to [connect](#tcpsockconnect), but
---- note that there is no upper limit to the total number of opened connections
---- outside of the pool. If you need to restrict the total number of opened
---- connections, specify the `backlog` option.
---- When the connection pool would exceed its size limit, the least recently used
---- (kept-alive) connection already in the pool will be closed to make room for
---- the current connection.
---- Note that the cosocket connection pool is per Nginx worker process rather
---- than per Nginx server instance, so the size limit specified here also applies
---- to every single Nginx worker process. Also note that the size of the connection
---- pool cannot be changed once it has been created.
---- This option was first introduced in the `v0.10.14` release.
----
---- * `backlog`
---- if specified, this module will limit the total number of opened connections
---- for this pool. No more connections than `pool_size` can be opened
---- for this pool at any time. If the connection pool is full, subsequent
---- connect operations will be queued into a queue equal to this option's
---- value (the "backlog" queue).
---- If the number of queued connect operations is equal to `backlog`,
---- subsequent connect operations will fail and return `nil` plus the
---- error string `"too many waiting connect operations"`.
---- The queued connect operations will be resumed once the number of connections
---- in the pool is less than `pool_size`.
---- The queued connect operation will abort once they have been queued for more
---- than `connect_timeout`, controlled by
---- [settimeouts](#tcpsocksettimeouts), and will return `nil` plus
---- the error string `"timeout"`.
---- This option was first introduced in the `v0.10.14` release.
----
---- The support for the options table argument was first introduced in the `v0.5.7` release.
----
---- This method was first introduced in the `v0.5.0rc1` release.
----
---@param host string
---@param port number
----@param options_table table
----@return boolean,string @ok,err
-function tcpsock:connect(host, port, options_table) end
+---@param opts tcpsock.connect.opts
+---@return boolean ok
+---@return string? error
+function tcpsock:connect(host, port, opts) end
+
+--- An optional Lua table can be specified as the last argument to `tcpsock:connect()`
+---
+---@class tcpsock.connect.opts : table
+---
+--- A custom name for the connection pool being used. If omitted, then the connection pool name will be generated from the string template `"<host>:<port>"` or `"<unix-socket-path>"`.
+---@field pool string
+---
+--- The size of the connection pool. If omitted and no `backlog` option was provided, no pool will be created. If omitted but `backlog` was provided, the pool will be created with a default size equal to the value of the `lua_socket_pool_size` directive. The connection pool holds up to `pool_size` alive connections ready to be reused by subsequent calls to `connect`, but note that there is no upper limit to the total number of opened connections outside of the pool. If you need to restrict the total number of opened connections, specify the `backlog` option. When the connection pool would exceed its size limit, the least recently used (kept-alive) connection already in the pool will be closed to make room for the current connection. Note that the cosocket connection pool is per NGINX worker process rather than per NGINX server instance, so the size limit specified here also applies to every single NGINX worker process. Also note that the size of the connection pool cannot be changed once it has been created.
+---@field pool_size number
+---
+--- Limits the total number of opened connections for this pool. No more connections than `pool_size` can be opened for this pool at any time. If the connection pool is full, subsequent connect operations will be queued into a queue whose size is equal to this option's value (the "backlog" queue). If the number of queued connect operations is equal to `backlog`, subsequent connect operations will fail and return `nil` plus the error string `"too many waiting connect operations"`. The queued connect operations will be resumed once the number of connections in the pool is less than `pool_size`. A queued connect operation will abort once it has been queued for more than `connect_timeout` (controlled by `settimeouts`) and will return `nil` plus the error string `"timeout"`.
+---@field backlog number
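+
+--- For illustration, a hedged sketch of passing this options table to `connect`
+--- (the pool name and sizes are arbitrary examples):
+---
+--- ```lua
+--- local sock = ngx.socket.tcp()
+--- local ok, err = sock:connect("127.0.0.1", 11211, {
+---     pool = "my_memcached_pool",
+---     pool_size = 30,
+---     backlog = 10,
+--- })
+--- if not ok then
+---     ngx.log(ngx.ERR, "connect failed: ", err)
+--- end
+--- ```
+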
--- Does SSL/TLS handshake on the currently established connection.
@@ -3337,8 +2887,8 @@ function tcpsock:connect(host, port, options_table) end
--- The optional `ssl_verify` argument takes a Lua boolean value to
--- control whether to perform SSL verification. When set to `true`, the server
--- certificate will be verified according to the CA certificates specified by
---- the [lua_ssl_trusted_certificate](#lua_ssl_trusted_certificate) directive.
---- You may also need to adjust the [lua_ssl_verify_depth](#lua_ssl_verify_depth)
+--- the `lua_ssl_trusted_certificate` directive.
+--- You may also need to adjust the `lua_ssl_verify_depth`
--- directive to control how deep we should follow along the certificate chain.
--- Also, when the `ssl_verify` argument is true and the
--- `server_name` argument is also specified, the latter will be used
@@ -3350,13 +2900,11 @@ function tcpsock:connect(host, port, options_table) end
--- For connections that have already done SSL/TLS handshake, this method returns
--- immediately.
---
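+--- A minimal, hedged sketch (the host name is a placeholder; resolving a domain
+--- name requires a `resolver` directive, and `ssl_verify = true` requires
+--- `lua_ssl_trusted_certificate` to point at an appropriate CA bundle):
+---
+--- ```lua
+--- local sock = ngx.socket.tcp()
+--- local ok, err = sock:connect("example.com", 443)
+--- if not ok then
+---     ngx.log(ngx.ERR, "connect failed: ", err)
+---     return
+--- end
+---
+--- local session, err = sock:sslhandshake(nil, "example.com", true)
+--- if not session then
+---     ngx.log(ngx.ERR, "TLS handshake failed: ", err)
+---     return
+--- end
+--- ```
+---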
---- This method was first introduced in the `v0.9.11` release.
----
----@param reused_session boolean@
----@param server_name string@
----@param ssl_verify boolean@
+---@param reused_session? userdata|boolean
+---@param server_name? string
+---@param ssl_verify? boolean
---@param send_status_req? boolean
----@return ngx.ssl.session,string@session, err
+---@return userdata|boolean session_or_ok
+---@return string? error
function tcpsock:sslhandshake(reused_session, server_name, ssl_verify, send_status_req) end
@@ -3366,35 +2914,31 @@ function tcpsock:sslhandshake(reused_session, server_name, ssl_verify, send_stat
---
--- In case of success, it returns the total number of bytes that have been sent. Otherwise, it returns `nil` and a string describing the error.
---
---- The input argument `data` can either be a Lua string or a (nested) Lua table holding string fragments. In case of table arguments, this method will copy all the string elements piece by piece to the underlying Nginx socket send buffers, which is usually optimal than doing string concatenation operations on the Lua land.
+--- The input argument `data` can either be a Lua string or a (nested) Lua table holding string fragments. In case of table arguments, this method will copy all the string elements piece by piece to the underlying NGINX socket send buffers, which is usually more efficient than doing string concatenation operations in Lua land.
---
---- Timeout for the sending operation is controlled by the [lua_socket_send_timeout](#lua_socket_send_timeout) config directive and the [settimeout](#tcpsocksettimeout) method. And the latter takes priority. For example:
+--- Timeout for the sending operation is controlled by the `lua_socket_send_timeout` config directive and the `settimeout` method. And the latter takes priority. For example:
---
--- ```lua
----
--- sock:settimeout(1000) -- one second timeout
--- local bytes, err = sock:send(request)
--- ```
---
---- It is important here to call the [settimeout](#tcpsocksettimeout) method *before* calling this method.
+--- It is important here to call the `settimeout` method *before* calling this method.
---
--- In case of any connection errors, this method always automatically closes the current connection.
---
---- This feature was first introduced in the `v0.5.0rc1` release.
----
----@param data string| string[]
----@return string,string@bytes,err
+---@param data string|string[]
+---@return number? bytes
+---@return string? error
function tcpsock:send(data) end
--- Receives data from the connected socket according to the reading pattern or size.
---
---- This method is a synchronous operation just like the [send](#tcpsocksend) method and is 100% nonblocking.
+--- This method is a synchronous operation just like the `send` method and is 100% nonblocking.
---
--- In case of success, it returns the data received; in case of error, it returns `nil` with a string describing the error and the partial data received so far.
---
---- If a number-like argument is specified (including strings that look like numbers), then it is interpreted as a size. This method will not return until it reads exactly this size of data or an error occurs.
----
--- If a non-number-like string argument is specified, then it is interpreted as a "pattern". The following patterns are supported:
---
--- * `'*a'`: reads from the socket until the connection is closed. No end-of-line translation is performed;
@@ -3402,10 +2946,12 @@ function tcpsock:send(data) end
---
--- If no argument is specified, then it is assumed to be the pattern `'*l'`, that is, the line reading pattern.
---
---- Timeout for the reading operation is controlled by the [lua_socket_read_timeout](#lua_socket_read_timeout) config directive and the [settimeout](#tcpsocksettimeout) method. And the latter takes priority. For example:
+--- If a number-like argument is specified (including strings that look like numbers), then it is interpreted as a size. This method will not return until it reads exactly this size of data or an error occurs.
---
---- ```lua
---
+--- Timeout for the reading operation is controlled by the `lua_socket_read_timeout` config directive and the `settimeout` method. And the latter takes priority. For example:
+---
+--- ```lua
--- sock:settimeout(1000) -- one second timeout
--- local line, err, partial = sock:receive()
--- if not line then
@@ -3415,30 +2961,30 @@ function tcpsock:send(data) end
--- ngx.say("successfully read a line: ", line)
--- ```
---
---- It is important here to call the [settimeout](#tcpsocksettimeout) method *before* calling this method.
+--- It is important here to call the `settimeout` method *before* calling this method.
---
---- Since the `v0.8.8` release, this method no longer automatically closes the current connection when the read timeout error happens. For other connection errors, this method always automatically closes the connection.
+--- This method does not automatically close the current connection when a read timeout error happens. For other connection errors, this method always automatically closes the connection.
---
---- This feature was first introduced in the `v0.5.0rc1` release.
+---@overload fun(self:tcpsock, size:number):string,string,string
---
----@overload fun(pattern:string):string,string,any
----@param size number
----@return string,string,any @ data, err, partial
-function tcpsock:receive(size) end
+---@param pattern '"*a"'|'"*l"'
+---@return string? data
+---@return string? error
+---@return string? partial
+function tcpsock:receive(pattern) end
--- Returns any data received by the connected socket, at most `max` bytes.
---
---- This method is a synchronous operation just like the [send](#tcpsocksend) method and is 100% nonblocking.
+--- This method is a synchronous operation just like the `send` method and is 100% nonblocking.
---
--- In case of success, it returns the data received; in case of error, it returns `nil` with a string describing the error.
---
--- If the received data is more than this size, this method will return with exactly this size of data.
--- The remaining data in the underlying receive buffer could be returned in the next reading operation.
---
---- Timeout for the reading operation is controlled by the [lua_socket_read_timeout](#lua_socket_read_timeout) config directive and the [settimeouts](#tcpsocksettimeouts) method. And the latter takes priority. For example:
+--- Timeout for the reading operation is controlled by the `lua_socket_read_timeout` config directive and the `settimeouts` method. And the latter takes priority. For example:
---
--- ```lua
----
--- sock:settimeouts(1000, 1000, 1000) -- one second timeout for connect/read/write
--- local data, err = sock:receiveany(10 * 1024) -- read any data, at most 10K
--- if not data then
@@ -3450,10 +2996,9 @@ function tcpsock:receive(size) end
---
--- This method doesn't automatically close the current connection when the read timeout error occurs. For other connection errors, this method always automatically closes the connection.
---
---- This feature was first introduced in the `v0.10.14` release.
----
---@param max number
----@return string,string@data,err
+---@return string? data
+---@return string? error
function tcpsock:receiveany(max) end
@@ -3462,7 +3007,6 @@ function tcpsock:receiveany(max) end
--- Here is an example for using this method to read a data stream with the boundary sequence `--abcedhb`:
---
--- ```lua
----
--- local reader = sock:receiveuntil("\r\n--abcedhb")
--- local data, err, partial = reader()
--- if not data then
@@ -3480,7 +3024,6 @@ function tcpsock:receiveany(max) end
--- The iterator function behaves differently (i.e., like a real iterator) when it is called with a `size` argument. That is, it will read that `size` of data on each invocation and will return `nil` at the last invocation (either sees the boundary pattern or meets an error). For the last successful invocation of the iterator function, the `err` return value will be `nil` too. The iterator function will be reset after the last successful invocation that returns `nil` data and `nil` error. Consider the following example:
---
--- ```lua
----
--- local reader = sock:receiveuntil("\r\n--abcedhb")
---
--- while true do
@@ -3500,7 +3043,6 @@ function tcpsock:receiveany(max) end
---
--- Then for the incoming data stream `'hello, world! -agentzh\r\n--abcedhb blah blah'`, we shall get the following output from the sample code above:
---
----
--- read chunk: [hell]
--- read chunk: [o, w]
--- read chunk: [orld]
@@ -3509,13 +3051,11 @@ function tcpsock:receiveany(max) end
--- read chunk: [zh]
--- read done
---
----
--- Note that, the actual data returned *might* be a little longer than the size limit specified by the `size` argument when the boundary pattern has ambiguity for streaming parsing. Near the boundary of the data stream, the data string actually returned could also be shorter than the size limit.
---
---- Timeout for the iterator function's reading operation is controlled by the [lua_socket_read_timeout](#lua_socket_read_timeout) config directive and the [settimeout](#tcpsocksettimeout) method. And the latter takes priority. For example:
+--- Timeout for the iterator function's reading operation is controlled by the `lua_socket_read_timeout` config directive and the `settimeout` method. And the latter takes priority. For example:
---
--- ```lua
----
--- local readline = sock:receiveuntil("\r\n")
---
--- sock:settimeout(1000) -- one second timeout
@@ -3527,16 +3067,15 @@ function tcpsock:receiveany(max) end
--- ngx.say("successfully read a line: ", line)
--- ```
---
---- It is important here to call the [settimeout](#tcpsocksettimeout) method *before* calling the iterator function (note that the `receiveuntil` call is irrelevant here).
+--- It is important here to call the `settimeout` method *before* calling the iterator function (note that the `receiveuntil` call is irrelevant here).
---
---- As from the `v0.5.1` release, this method also takes an optional `options` table argument to control the behavior. The following options are supported:
+--- This method also takes an optional `options` table argument to control the behavior. The following options are supported:
---
--- * `inclusive`
---
--- The `inclusive` takes a boolean value to control whether to include the pattern string in the returned data string. Default to `false`. For example,
---
--- ```lua
----
--- local reader = tcpsock:receiveuntil("_END_", { inclusive = true })
--- local data = reader()
--- ngx.say(data)
@@ -3544,11 +3083,12 @@ function tcpsock:receiveany(max) end
---
--- Then for the input data stream `"hello world _END_ blah blah blah"`, then the example above will output `hello world _END_`, including the pattern string `_END_` itself.
---
---- Since the `v0.8.8` release, this method no longer automatically closes the current connection when the read timeout error happens. For other connection errors, this method always automatically closes the connection.
----
---- This method was first introduced in the `v0.5.0rc1` release.
+--- This method does not automatically close the current connection when a read timeout error happens. For other connection errors, this method always automatically closes the connection.
---
---@alias ngx.socket.tcpsock.iterator fun(size:number):string,string,any
+---
+---@overload fun(self:tcpsock, size:number, options:table):ngx.socket.tcpsock.iterator
+---
---@param pattern string
---@param options table
---@return ngx.socket.tcpsock.iterator
@@ -3557,38 +3097,33 @@ function tcpsock:receiveuntil(pattern, options) end
--- Closes the current TCP or stream unix domain socket. It returns the `1` in case of success and returns `nil` with a string describing the error otherwise.
---
---- Note that there is no need to call this method on socket objects that have invoked the [setkeepalive](#tcpsocksetkeepalive) method because the socket object is already closed (and the current connection is saved into the built-in connection pool).
+--- Note that there is no need to call this method on socket objects that have invoked the `setkeepalive` method because the socket object is already closed (and the current connection is saved into the built-in connection pool).
---
--- Socket objects that have not invoked this method (and associated connections) will be closed when the socket object is released by the Lua GC (Garbage Collector) or the current client HTTP request finishes processing.
---
---- This feature was first introduced in the `v0.5.0rc1` release.
----
----@return boolean,string @ok,err
+---@return boolean ok
+---@return string? error
function tcpsock:close() end
---- Set the timeout value in milliseconds for subsequent socket operations ([connect](#tcpsockconnect), [receive](#tcpsockreceive), and iterators returned from [receiveuntil](#tcpsockreceiveuntil)).
----
---- Settings done by this method take priority over those specified via config directives (i.e. [lua_socket_connect_timeout](#lua_socket_connect_timeout), [lua_socket_send_timeout](#lua_socket_send_timeout), and [lua_socket_read_timeout](#lua_socket_read_timeout)).
+--- Set the timeout value in milliseconds for subsequent socket operations (`connect`, `receive`, and iterators returned from `receiveuntil`).
---
---- Note that this method does *not* affect the [lua_socket_keepalive_timeout](#lua_socket_keepalive_timeout) setting; the `timeout` argument to the [setkeepalive](#tcpsocksetkeepalive) method should be used for this purpose instead.
+--- Settings done by this method take priority over those specified via config directives (i.e. `lua_socket_connect_timeout`, `lua_socket_send_timeout`, and `lua_socket_read_timeout`).
---
---- This feature was first introduced in the `v0.5.0rc1` release.
+--- Note that this method does *not* affect the `lua_socket_keepalive_timeout` setting; the `timeout` argument to the `setkeepalive` method should be used for this purpose instead.
---
---@param time number
function tcpsock:settimeout(time) end
--- Respectively sets the connect, send, and read timeout thresholds (in milliseconds) for subsequent socket
---- operations ([connect](#tcpsockconnect), [send](#tcpsocksend), [receive](#tcpsockreceive), and iterators returned from [receiveuntil](#tcpsockreceiveuntil)).
+--- operations (`connect`, `send`, `receive`, and iterators returned from `receiveuntil`).
---
---- Settings done by this method take priority over those specified via config directives (i.e. [lua_socket_connect_timeout](#lua_socket_connect_timeout), [lua_socket_send_timeout](#lua_socket_send_timeout), and [lua_socket_read_timeout](#lua_socket_read_timeout)).
+--- Settings done by this method take priority over those specified via config directives (i.e. `lua_socket_connect_timeout`, `lua_socket_send_timeout`, and `lua_socket_read_timeout`).
---
---- It is recommended to use [settimeouts](#tcpsocksettimeouts) instead of [settimeout](#tcpsocksettimeout).
+--- It is recommended to use `settimeouts` instead of `settimeout`.
---
---- Note that this method does *not* affect the [lua_socket_keepalive_timeout](#lua_socket_keepalive_timeout) setting; the `timeout` argument to the [setkeepalive](#tcpsocksetkeepalive) method should be used for this purpose instead.
----
---- This feature was first introduced in the `v0.10.7` release.
+--- Note that this method does *not* affect the `lua_socket_keepalive_timeout` setting; the `timeout` argument to the `setkeepalive` method should be used for this purpose instead.
---
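+--- For example (the millisecond values are arbitrary):
+---
+--- ```lua
+--- sock:settimeouts(1000, 5000, 10000)  -- connect, send, and read timeouts
+--- ```
+---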
---@param connect_timeout number
---@param send_timeout number
@@ -3596,9 +3131,7 @@ function tcpsock:settimeout(time) end
function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
---- This function is added for [LuaSocket](http://w3.impa.br/~diego/software/luasocket/tcp.html) API compatibility and does nothing for now. Its functionality is implemented `v0.10.18`.
----
---- This feature was first introduced in the `v0.5.0rc1` release.
+--- This function was added for `LuaSocket` API compatibility; it used to be a no-op, but its functionality has since been implemented.
---
--- In case of success, it returns `true`. Otherwise, it returns nil and a string describing the error.
---
@@ -3611,7 +3144,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- had been called before, for example,
---
--- ```lua
----
--- local ok, err = tcpsock:setoption("keepalive", true)
--- if not ok then
--- ngx.say("setoption keepalive failed: ", err)
@@ -3624,7 +3156,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- the `connect` function had been called before, for example,
---
--- ```lua
----
--- local ok, err = tcpsock:setoption("reuseaddr", 0)
--- if not ok then
--- ngx.say("setoption reuseaddr failed: ", err)
@@ -3636,7 +3167,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- Make sure the `connect` function had been called before, for example,
---
--- ```lua
----
--- local ok, err = tcpsock:setoption("tcp-nodelay", true)
--- if not ok then
--- ngx.say("setoption tcp-nodelay failed: ", err)
@@ -3649,7 +3179,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- Make sure the `connect` function had been called before, for example,
---
--- ```lua
----
--- local ok, err = tcpsock:setoption("sndbuf", 1024 * 10)
--- if not ok then
--- ngx.say("setoption sndbuf failed: ", err)
@@ -3662,7 +3191,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- sure the `connect` function had been called before, for example,
---
--- ```lua
----
--- local ok, err = tcpsock:setoption("rcvbuf", 1024 * 10)
--- if not ok then
--- ngx.say("setoption rcvbuf failed: ", err)
@@ -3672,7 +3200,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- NOTE: Once the option is set, it will become effective until the connection is closed. If you know the connection is from the connection pool and all the in-pool connections already have called the setoption() method with the desired socket option state, then you can just skip calling setoption() again to avoid the overhead of repeated calls, for example,
---
--- ```lua
----
--- local count, err = tcpsock:getreusedtimes()
--- if not count then
--- ngx.say("getreusedtimes failed: ", err)
@@ -3688,9 +3215,6 @@ function tcpsock:settimeouts(connect_timeout, send_timeout, read_timeout) end
--- end
--- ```
---
---- These options described above are supported in `v0.10.18`, and more options will be implemented in future.
----
----
---@param option tcpsock.setoption.option
---@param value number|boolean
---@return boolean ok
@@ -3706,47 +3230,29 @@ function tcpsock:setoption(option, value) end
---- Puts the current socket's connection immediately into the cosocket built-in connection pool and keep it alive until other [connect](#tcpsockconnect) method calls request it or the associated maximal idle timeout is expired.
----
---- The first optional argument, `timeout`, can be used to specify the maximal idle timeout (in milliseconds) for the current connection. If omitted, the default setting in the [lua_socket_keepalive_timeout](#lua_socket_keepalive_timeout) config directive will be used. If the `0` value is given, then the timeout interval is unlimited.
----
---- The second optional argument `size` is considered deprecated since
---- the `v0.10.14` release of this module, in favor of the
---- `pool_size` option of the [connect](#tcpsockconnect) method.
---- Since the `v0.10.14` release, this option will only take effect if
---- the call to [connect](#tcpsockconnect) did not already create a connection
---- pool.
---- When this option takes effect (no connection pool was previously created by
---- [connect](#tcpsockconnect)), it will specify the size of the connection pool,
---- and create it.
---- If omitted (and no pool was previously created), the default size is the value
---- of the [lua_socket_pool_size](#lua_socket_pool_size) directive.
---- The connection pool holds up to `size` alive connections ready to be
---- reused by subsequent calls to [connect](#tcpsockconnect), but note that there
---- is no upper limit to the total number of opened connections outside of the
---- pool.
---- When the connection pool would exceed its size limit, the least recently used
---- (kept-alive) connection already in the pool will be closed to make room for
---- the current connection.
---- Note that the cosocket connection pool is per Nginx worker process rather
---- than per Nginx server instance, so the size limit specified here also applies
---- to every single Nginx worker process. Also note that the size of the connection
---- pool cannot be changed once it has been created.
---- If you need to restrict the total number of opened connections, specify both
---- the `pool_size` and `backlog` option in the call to
---- [connect](#tcpsockconnect).
+--- Puts the current socket's connection immediately into the cosocket built-in connection pool and keeps it alive until another `connect` method call requests it or the associated maximal idle timeout expires.
+---
+--- The first optional argument, `timeout`, can be used to specify the maximal idle timeout (in milliseconds) for the current connection. If omitted, the default setting in the `lua_socket_keepalive_timeout` config directive will be used. If the `0` value is given, then the timeout interval is unlimited.
+---
+--- The second optional argument `size` is considered deprecated since the `v0.10.14` release of this module, in favor of the `pool_size` option of the `connect` method.
+--- Since the `v0.10.14` release, this option will only take effect if the call to `connect` did not already create a connection pool.
+--- When this option takes effect (no connection pool was previously created by `connect`), it will specify the size of the connection pool, and create it.
+--- If omitted (and no pool was previously created), the default size is the value of the `lua_socket_pool_size` directive.
+--- The connection pool holds up to `size` alive connections ready to be reused by subsequent calls to `connect`, but note that there is no upper limit to the total number of opened connections outside of the pool.
+--- When the connection pool would exceed its size limit, the least recently used (kept-alive) connection already in the pool will be closed to make room for the current connection.
+--- Note that the cosocket connection pool is per NGINX worker process rather than per NGINX server instance, so the size limit specified here also applies to every single NGINX worker process. Also note that the size of the connection pool cannot be changed once it has been created.
+--- If you need to restrict the total number of opened connections, specify both the `pool_size` and `backlog` option in the call to `connect`.
---
--- In case of success, this method returns `1`; otherwise, it returns `nil` and a string describing the error.
---
--- When the system receive buffer for the current connection has unread data, then this method will return the "connection in dubious state" error message (as the second return value) because the previous session has unread data left behind for the next session and the connection is not safe to be reused.
---
---- This method also makes the current cosocket object enter the "closed" state, so there is no need to manually call the [close](#tcpsockclose) method on it afterwards.
----
---- This feature was first introduced in the `v0.5.0rc1` release.
+--- This method also makes the current cosocket object enter the "closed" state, so there is no need to manually call the `close` method on it afterwards.
---
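+--- A hedged usage sketch (the idle timeout and pool size are illustrative):
+---
+--- ```lua
+--- local ok, err = sock:setkeepalive(10000, 100)  -- 10s idle timeout, pool of 100
+--- if not ok then
+---     ngx.log(ngx.ERR, "failed to set keepalive: ", err)
+--- end
+--- ```
+---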
---@param timeout number
---@param size number
----@return boolean,string @ok,err
+---@return boolean ok
+---@return string? error
function tcpsock:setkeepalive(timeout, size) end
@@ -3754,15 +3260,13 @@ function tcpsock:setkeepalive(timeout, size) end
---
--- If the current connection does not come from the built-in connection pool, then this method always returns `0`, that is, the connection has never been reused (yet). If the connection comes from the connection pool, then the return value is always non-zero. So this method can also be used to determine if the current connection comes from the pool.
---
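+--- For example, a quick check whether the connection came from the pool:
+---
+--- ```lua
+--- local count, err = sock:getreusedtimes()
+--- if count and count > 0 then
+---     ngx.log(ngx.INFO, "connection reused from the pool ", count, " times")
+--- end
+--- ```
+---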
---- This feature was first introduced in the `v0.5.0rc1` release.
----
----@return number,string@count,err
+---@return number? count
+---@return string? error
function tcpsock:getreusedtimes() end
---- This function is a shortcut for combining [ngx.socket.tcp()](#ngxsockettcp) and the [connect()](#tcpsockconnect) method call in a single operation. It is actually implemented like this:
+--- This function is a shortcut for combining `ngx.socket.tcp()` and the `connect()` method call in a single operation. It is actually implemented like this:
---
--- ```lua
----
--- local sock = ngx.socket.tcp()
--- local ok, err = sock:connect(...)
--- if not ok then
@@ -3771,51 +3275,41 @@ function tcpsock:getreusedtimes() end
--- return sock
--- ```
---
---- There is no way to use the [settimeout](#tcpsocksettimeout) method to specify connecting timeout for this method and the [lua_socket_connect_timeout](#lua_socket_connect_timeout) directive must be set at configure time instead.
----
---- This feature was first introduced in the `v0.5.0rc1` release.
+--- There is no way to use the `settimeout` method to specify connecting timeout for this method and the `lua_socket_connect_timeout` directive must be set at configure time instead.
---
---@param host string
---@param port number
----@return tcpsock,string @tcpsock,err
+---@return tcpsock? socket
+---@return string? error
function ngx.socket.connect(host, port) end
--- Creates and returns a UDP or datagram-oriented unix domain socket object (also known as one type of the "cosocket" objects). The following methods are supported on this object:
---
---- * [setpeername](#udpsocksetpeername)
---- * [send](#udpsocksend)
---- * [receive](#udpsockreceive)
---- * [close](#udpsockclose)
---- * [settimeout](#udpsocksettimeout)
----
---- It is intended to be compatible with the UDP API of the [LuaSocket](http://w3.impa.br/~diego/software/luasocket/udp.html) library but is 100% nonblocking out of the box.
----
---- This feature was first introduced in the `v0.5.7` release.
+--- * `setpeername`
+--- * `send`
+--- * `receive`
+--- * `close`
+--- * `settimeout`
---
---- See also [ngx.socket.tcp](#ngxsockettcp).
+--- It is intended to be compatible with the UDP API of the `LuaSocket` library but is 100% nonblocking out of the box.
---
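+--- A minimal, hedged sketch (the peer address is just an example):
+---
+--- ```lua
+--- local sock = ngx.socket.udp()
+--- sock:settimeout(1000)
+---
+--- local ok, err = sock:setpeername("8.8.8.8", 53)
+--- if not ok then
+---     ngx.log(ngx.ERR, "failed to set peer: ", err)
+---     return
+--- end
+--- ```
+---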
---@return udpsock
function ngx.socket.udp() end
---- Just an alias to [ngx.socket.tcp](#ngxsockettcp). If the stream-typed cosocket may also connect to a unix domain
+--- Just an alias to `ngx.socket.tcp`. If the stream-typed cosocket may also connect to a unix domain
--- socket, then this API name is preferred.
---
---- This API function was first added to the `v0.10.1` release.
----
----
function ngx.socket.stream() end
--- When this is used in the context of the `set_by_lua*` directives, this table is read-only and holds the input arguments to the config directives:
---
--- ```lua
----
--- value = ngx.arg[n]
--- ```
---
--- Here is an example
---
--- ```nginx
----
--- location /foo {
--- set $a 32;
--- set $b 56;
@@ -3832,21 +3326,41 @@ function ngx.socket.stream() end
---
--- When this table is used in the context of `body_filter_by_lua*`, the first element holds the input data chunk to the output filter code and the second element holds the boolean flag for the "eof" flag indicating the end of the whole output data stream.
---
---- The data chunk and "eof" flag passed to the downstream Nginx output filters can also be overridden by assigning values directly to the corresponding table elements. When setting `nil` or an empty Lua string value to `ngx.arg[1]`, no data chunk will be passed to the downstream Nginx output filters at all.
+--- The data chunk and "eof" flag passed to the downstream NGINX output filters can also be overridden by assigning values directly to the corresponding table elements. When setting `nil` or an empty Lua string value to `ngx.arg[1]`, no data chunk will be passed to the downstream NGINX output filters at all.
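+---
+--- For instance, a hedged `body_filter_by_lua*` sketch that upper-cases every chunk:
+---
+--- ```lua
+--- local chunk, eof = ngx.arg[1], ngx.arg[2]
+--- if chunk and chunk ~= "" then
+---     ngx.arg[1] = string.upper(chunk)
+--- end
+--- if eof then
+---     -- this is the last call for the current response body
+--- end
+--- ```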
ngx.arg = {}
+---@alias ngx.phase.name
+---| '"init"'
+---| '"init_worker"'
+---| '"ssl_cert"'
+---| '"ssl_session_fetch"'
+---| '"ssl_session_store"'
+---| '"set"'
+---| '"rewrite"'
+---| '"balancer"'
+---| '"access"'
+---| '"content"'
+---| '"header_filter"'
+---| '"body_filter"'
+---| '"log"'
+---| '"timer"'
+
+--- Retrieves the current running phase name.
+---
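+--- A small usage sketch:
+---
+--- ```lua
+--- if ngx.get_phase() == "timer" then
+---     -- running inside an ngx.timer callback; there is no downstream client here
+--- end
+--- ```
+---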
+---@return ngx.phase.name
+function ngx.get_phase() end
+
---- When `status >= 200` (i.e., `ngx.HTTP_OK` and above), it will interrupt the execution of the current request and return status code to Nginx.
+--- When `status >= 200` (i.e., `ngx.HTTP_OK` and above), it will interrupt the execution of the current request and return status code to NGINX.
---
---- When `status == 0` (i.e., `ngx.OK`), it will only quit the current phase handler (or the content handler if the [content_by_lua*](#content_by_lua) directive is used) and continue to run later phases (if any) for the current request.
+--- When `status == 0` (i.e., `ngx.OK`), it will only quit the current phase handler (or the content handler if the `content_by_lua*` directive is used) and continue to run later phases (if any) for the current request.
---
--- The `status` argument can be `ngx.OK`, `ngx.ERROR`, `ngx.HTTP_NOT_FOUND`,
---- `ngx.HTTP_MOVED_TEMPORARILY`, or other [HTTP status constants](#http-status-constants).
+--- `ngx.HTTP_MOVED_TEMPORARILY`, or other `ngx.HTTP_*` status constants.
---
--- To return an error page with custom contents, use code snippets like this:
---
--- ```lua
----
--- ngx.status = ngx.HTTP_GONE
--- ngx.say("This is our own content")
--- -- to cause quit the whole request rather than the current phase handler
@@ -3856,7 +3370,6 @@ ngx.arg = {}
--- The effect in action:
---
--- ```bash
----
--- $ curl -i http://localhost/test
--- HTTP/1.1 410 Gone
--- Server: nginx/1.0.6
@@ -3871,16 +3384,15 @@ ngx.arg = {}
--- Number literals can be used directly as the argument, for instance,
---
--- ```lua
----
--- ngx.exit(501)
--- ```
---
---- Note that while this method accepts all [HTTP status constants](#http-status-constants) as input, it only accepts `ngx.OK` and `ngx.ERROR` of the [core constants](#core-constants).
+--- Note that while this method accepts all `ngx.HTTP_*` status constants as input, it only accepts `ngx.OK` and `ngx.ERROR` among the core constants.
---
--- Also note that this method call terminates the processing of the current request and that it is recommended that a coding style that combines this method call with the `return` statement, i.e., `return ngx.exit(...)` be used to reinforce the fact that the request processing is being terminated.
---
---- When being used in the contexts of [header_filter_by_lua*](#header_filter_by_lua), [balancer_by_lua*](#balancer_by_lua_block), and
---- [ssl_session_store_by_lua*](#ssl_session_store_by_lua_block), `ngx.exit()` is
+--- When being used in the contexts of `header_filter_by_lua*`, `balancer_by_lua*`, and
+--- `ssl_session_store_by_lua*`, `ngx.exit()` is
--- an asynchronous operation and will return immediately. This behavior may change in future and it is recommended that users always use `return` in combination as suggested above.
---
---@param status ngx.OK|ngx.ERROR|ngx.http.status_code
@@ -3908,71 +3420,62 @@ function ngx.exit(status) end
--- Here is an example assuming the current server name is `localhost` and that it is listening on port 1984:
---
--- ```lua
----
--- return ngx.redirect("/foo")
--- ```
---
--- which is equivalent to
---
--- ```lua
----
--- return ngx.redirect("/foo", ngx.HTTP_MOVED_TEMPORARILY)
--- ```
---
--- Redirecting arbitrary external URLs is also supported, for example:
---
--- ```lua
----
--- return ngx.redirect("http://www.google.com")
--- ```
---
--- We can also use the numerical code directly as the second `status` argument:
---
--- ```lua
----
--- return ngx.redirect("/foo", 301)
--- ```
---
---- This method is similar to the [rewrite](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite) directive with the `redirect` modifier in the standard
---- [ngx_http_rewrite_module](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html), for example, this `nginx.conf` snippet
+--- This method is similar to the `rewrite` directive with the `redirect` modifier in the standard
+--- `ngx_http_rewrite_module`, for example, this `nginx.conf` snippet
---
--- ```nginx
----
--- rewrite ^ /foo? redirect; # nginx config
--- ```
---
--- is equivalent to the following Lua code
---
--- ```lua
----
--- return ngx.redirect('/foo'); -- Lua code
--- ```
---
--- while
---
--- ```nginx
----
--- rewrite ^ /foo? permanent; # nginx config
--- ```
---
--- is equivalent to
---
--- ```lua
----
--- return ngx.redirect('/foo', ngx.HTTP_MOVED_PERMANENTLY) -- Lua code
--- ```
---
--- URI arguments can be specified as well, for example:
---
--- ```lua
----
--- return ngx.redirect('/foo?a=3&b=4')
--- ```
---
---- Note that this method call terminates the processing of the current request and that it *must* be called before [ngx.send_headers](#ngxsend_headers) or explicit response body
---- outputs by either [ngx.print](#ngxprint) or [ngx.say](#ngxsay).
+--- Note that this method call terminates the processing of the current request and that it *must* be called before `ngx.send_headers` or explicit response body
+--- outputs by either `ngx.print` or `ngx.say`.
---
---- It is recommended that a coding style that combines this method call with the `return` statement, i.e., `return ngx.redirect(...)` be adopted when this method call is used in contexts other than [header_filter_by_lua*](#header_filter_by_lua) to reinforce the fact that the request processing is being terminated.
+--- It is recommended that a coding style that combines this method call with the `return` statement, i.e., `return ngx.redirect(...)` be adopted when this method call is used in contexts other than `header_filter_by_lua*` to reinforce the fact that the request processing is being terminated.
---
---@param uri string
---@param status number
@@ -3982,12 +3485,11 @@ function ngx.redirect(uri, status) end
---
--- Returns `1` if the callback is registered successfully or returns `nil` and a string describing the error otherwise.
---
---- All the [Nginx API for Lua](#nginx-api-for-lua) can be used in the callback function because the function is run in a special "light thread", just as those "light threads" created by [ngx.thread.spawn](#ngxthreadspawn).
+--- All the NGINX APIs for Lua can be used in the callback function because the function is run in a special "light thread", just as those "light threads" created by `ngx.thread.spawn`.
---
---- The callback function can decide what to do with the client abortion event all by itself. For example, it can simply ignore the event by doing nothing and the current Lua request handler will continue executing without interruptions. And the callback function can also decide to terminate everything by calling [ngx.exit](#ngxexit), for example,
+--- The callback function can decide what to do with the client abort event all by itself. For example, it can simply ignore the event by doing nothing, and the current Lua request handler will continue executing without interruption. The callback function can also decide to terminate everything by calling `ngx.exit`, for example,
---
--- ```lua
----
--- local function my_cleanup()
--- -- custom cleanup work goes here, like cancelling a pending DB transaction
---
@@ -4002,26 +3504,19 @@ function ngx.redirect(uri, status) end
--- end
--- ```
---
---- When [lua_check_client_abort](#lua_check_client_abort) is set to `off` (which is the default), then this function call will always return the error message "lua_check_client_abort is off".
+--- When `lua_check_client_abort` is set to `off` (which is the default), this function call will always return the error message "lua_check_client_abort is off".
---
--- According to the current implementation, this function can only be called once in a single request handler; subsequent calls will return the error message "duplicate call".
---
---- This API was first introduced in the `v0.7.4` release.
----
---- See also [lua_check_client_abort](#lua_check_client_abort).
----
---@param callback fun()
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function ngx.on_abort(callback) end
-
-function ngx.throw_error() end
-
---- Does an internal redirect to `uri` with `args` and is similar to the [echo_exec](http://github.com/openresty/echo-nginx-module#echo_exec) directive of the [echo-nginx-module](http://github.com/openresty/echo-nginx-module).
+--- Does an internal redirect to `uri` with `args` and is similar to the `echo_exec` directive of the `echo-nginx-module`.
---
--- ```lua
----
--- ngx.exec('/some-location');
--- ngx.exec('/some-location', 'a=3&b=5&c=6');
--- ngx.exec('/some-location?a=3&b=5', 'c=6');
@@ -4030,27 +3525,24 @@ function ngx.throw_error() end
--- The optional second `args` can be used to specify extra URI query arguments, for example:
---
--- ```lua
----
--- ngx.exec("/foo", "a=3&b=hello%20world")
--- ```
---
--- Alternatively, a Lua table can be passed for the `args` argument for ngx_lua to carry out URI escaping and string concatenation.
---
--- ```lua
----
--- ngx.exec("/foo", { a = 3, b = "hello world" })
--- ```
---
--- The result is exactly the same as the previous example.
---
---- The format for the Lua table passed as the `args` argument is identical to the format used in the [ngx.encode_args](#ngxencode_args) method.
+--- The format for the Lua table passed as the `args` argument is identical to the format used in the `ngx.encode_args` method.
---
--- Named locations are also supported but the second `args` argument will be ignored if present and the querystring for the new target is inherited from the referring location (if any).
---
--- `GET /foo/file.php?a=hello` will return "hello" and not "goodbye" in the example below
---
--- ```nginx
----
--- location /foo {
--- content_by_lua_block {
--- ngx.exec("@bar", "a=goodbye");
@@ -4069,295 +3561,90 @@ function ngx.throw_error() end
--- }
--- ```
---
---- Note that the `ngx.exec` method is different from [ngx.redirect](#ngxredirect) in that
+--- Note that the `ngx.exec` method is different from `ngx.redirect` in that
--- it is purely an internal redirect and that no new external HTTP traffic is involved.
---
---- Also note that this method call terminates the processing of the current request and that it *must* be called before [ngx.send_headers](#ngxsend_headers) or explicit response body
---- outputs by either [ngx.print](#ngxprint) or [ngx.say](#ngxsay).
+--- Also note that this method call terminates the processing of the current request and that it *must* be called before `ngx.send_headers` or explicit response body
+--- outputs by either `ngx.print` or `ngx.say`.
---
---- It is recommended that a coding style that combines this method call with the `return` statement, i.e., `return ngx.exec(...)` be adopted when this method call is used in contexts other than [header_filter_by_lua*](#header_filter_by_lua) to reinforce the fact that the request processing is being terminated.
+--- It is recommended that a coding style that combines this method call with the `return` statement, i.e., `return ngx.exec(...)` be adopted when this method call is used in contexts other than `header_filter_by_lua*` to reinforce the fact that the request processing is being terminated.
---
---@param uri string
----@param args string| table<string,any>
+---@param args? string|table<string,any>
function ngx.exec(uri, args) end
----@alias ngx.phase.name
----| '"init"'
----| '"init_worker"'
----| '"ssl_cert"'
----| '"ssl_session_fetch"'
----| '"ssl_session_store"'
----| '"set"'
----| '"rewrite"'
----| '"balancer"'
----| '"access"'
----| '"content"'
----| '"header_filter"'
----| '"body_filter"'
----| '"log"'
----| '"timer"'
+ngx.location = {}
---- Retrieves the current running phase name.
----
---- This API was first introduced in the `v0.5.10` release.
----
----@return ngx.phase.name
-function ngx.get_phase() end
-
-ngx.location={}
+---@class ngx.location.capture.response : table
+---@field status integer # response status code
+---@field header table<string, string|string[]> # response headers. For multi-value response headers, the value is a Lua (array) table that holds all the values in the order that they appear.
+---@field body string # response body data, which might be truncated (always check `res.truncated`)
+---@field truncated boolean # truthy if the response body is truncated. You always need to check the `res.truncated` boolean flag to see if `res.body` contains truncated data. Truncation here can only be caused by unrecoverable errors in your subrequests, such as the remote end aborting the connection prematurely in the middle of the response body data stream, or a read timeout occurring while your subrequest is receiving the response body data from the remote.
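+
+--- For example, a minimal sketch of consuming a captured response (the `/backend` location here is hypothetical):
+---
+--- ```lua
+--- local res = ngx.location.capture("/backend")
+--- if res.status == ngx.HTTP_OK and not res.truncated then
+---     ngx.print(res.body)
+--- else
+---     ngx.log(ngx.ERR, "subrequest failed, status: ", res.status)
+--- end
+--- ```
+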
---- Just like [ngx.location.capture](#ngxlocationcapture), but supports multiple subrequests running in parallel.
+--- An optional options table can be passed as the second argument to `ngx.location.capture`; it supports the following options:
---
---- This function issues several parallel subrequests specified by the input table and returns their results in the same order. For example,
+---@class ngx.location.capture.options
---
---- ```lua
+---@field method ngx.http.method # the subrequest's request method, which only accepts constants like `ngx.HTTP_POST`.
---
---- res1, res2, res3 = ngx.location.capture_multi{
---- { "/foo", { args = "a=3&b=4" } },
---- { "/bar" },
---- { "/baz", { method = ngx.HTTP_POST, body = "hello" } },
---- }
+---@field body string # the subrequest's request body (string value only).
---
---- if res1.status == ngx.HTTP_OK then
---- ...
---- end
+---@field args string|table # the subrequest's URI query arguments (both string values and Lua tables are accepted)
+---@field ctx table # a Lua table to be the `ngx.ctx` table for the subrequest. It can be the current request's `ngx.ctx` table, which effectively makes the parent and its subrequest share exactly the same context table.
---
---- if res2.body == "BLAH" then
---- ...
---- end
---- ```
----
---- This function will not return until all the subrequests terminate.
---- The total latency is the longest latency of the individual subrequests rather than the sum.
----
---- Lua tables can be used for both requests and responses when the number of subrequests to be issued is not known in advance:
----
---- ```lua
----
---- -- construct the requests table
---- local reqs = {}
---- table.insert(reqs, { "/mysql" })
---- table.insert(reqs, { "/postgres" })
---- table.insert(reqs, { "/redis" })
---- table.insert(reqs, { "/memcached" })
----
---- -- issue all the requests at once and wait until they all return
---- local resps = { ngx.location.capture_multi(reqs) }
----
---- -- loop over the responses table
---- for i, resp in ipairs(resps) do
---- -- process the response table "resp"
---- end
---- ```
+---@field vars table # a Lua table holding the values of the NGINX variables to set in the subrequest.
---
---- The [ngx.location.capture](#ngxlocationcapture) function is just a special form
---- of this function. Logically speaking, the [ngx.location.capture](#ngxlocationcapture) can be implemented like this
+---@field copy_all_vars boolean # whether to copy over all the NGINX variable values of the current request to the subrequest in question. Modifications of the NGINX variables in the subrequest will not affect the current (parent) request.
---
---- ```lua
+---@field share_all_vars boolean # whether to share all the NGINX variables of the subrequest with the current (parent) request. Modifications of the NGINX variables in the subrequest will affect the current (parent) request. Enabling this option may lead to hard-to-debug issues due to bad side effects and is considered bad and harmful. Only enable this option when you completely know what you are doing.
---
---- ngx.location.capture =
---- function (uri, args)
---- return ngx.location.capture_multi({ {uri, args} })
---- end
---- ```
----
---- Please also refer to restrictions on capturing locations configured by [subrequest directives of other modules](#locations-configured-by-subrequest-directives-of-other-modules).
----
----see doc
-function ngx.location.capture_multi(...) end
+---@field always_forward_body boolean # when set to true, the current (parent) request's request body will always be forwarded to the subrequest being created if the `body` option is not specified. The request body read by either `ngx.req.read_body()` or `lua_need_request_body on` will be directly forwarded to the subrequest without copying the whole request body data when creating the subrequest (regardless of whether the request body data is buffered in memory buffers or in temporary files). By default, this option is `false`, and when the `body` option is not specified, the request body of the current (parent) request is only forwarded when the subrequest takes the `PUT` or `POST` request method.
---- Set, add to, or clear the current request's `HEADER` response header that is to be sent.
----
---- Underscores (`_`) in the header names will be replaced by hyphens (`-`) by default. This transformation can be turned off via the [lua_transform_underscores_in_response_headers](#lua_transform_underscores_in_response_headers) directive.
----
---- The header names are matched case-insensitively.
----
---- ```lua
----
---- -- equivalent to ngx.header["Content-Type"] = 'text/plain'
---- ngx.header.content_type = 'text/plain';
----
---- ngx.header["X-My-Header"] = 'blah blah';
---- ```
----
---- Multi-value headers can be set this way:
----
---- ```lua
----
---- ngx.header['Set-Cookie'] = {'a=32; path=/', 'b=4; path=/'}
---- ```
----
---- will yield
----
---- ```bash
----
---- Set-Cookie: a=32; path=/
---- Set-Cookie: b=4; path=/
---- ```
----
---- in the response headers.
----
---- Only Lua tables are accepted (Only the last element in the table will take effect for standard headers such as `Content-Type` that only accept a single value).
----
---- ```lua
----
---- ngx.header.content_type = {'a', 'b'}
---- ```
----
---- is equivalent to
----
---- ```lua
----
---- ngx.header.content_type = 'b'
---- ```
----
---- Setting a slot to `nil` effectively removes it from the response headers:
----
---- ```lua
----
---- ngx.header["X-My-Header"] = nil;
---- ```
----
---- The same applies to assigning an empty table:
----
---- ```lua
----
---- ngx.header["X-My-Header"] = {};
---- ```
----
---- Setting `ngx.header.HEADER` after sending out response headers (either explicitly with [ngx.send_headers](#ngxsend_headers) or implicitly with [ngx.print](#ngxprint) and similar) will log an error message.
----
---- Reading `ngx.header.HEADER` will return the value of the response header named `HEADER`.
----
---- Underscores (`_`) in the header names will also be replaced by dashes (`-`) and the header names will be matched case-insensitively. If the response header is not present at all, `nil` will be returned.
----
---- This is particularly useful in the context of [header_filter_by_lua*](#header_filter_by_lua), for example,
----
---- ```nginx
----
---- location /test {
---- set $footer '';
----
---- proxy_pass http://some-backend;
----
---- header_filter_by_lua_block {
---- if ngx.header["X-My-Header"] == "blah" then
---- ngx.var.footer = "some value"
---- end
---- }
----
---- echo_after_body $footer;
---- }
---- ```
----
---- For multi-value headers, all of the values of header will be collected in order and returned as a Lua table. For example, response headers
----
----
---- Foo: bar
---- Foo: baz
----
----
---- will result in
----
---- ```lua
----
---- {"bar", "baz"}
---- ```
----
---- to be returned when reading `ngx.header.Foo`.
----
---- Note that `ngx.header` is not a normal Lua table and as such, it is not possible to iterate through it using the Lua `ipairs` function.
----
---- Note: `HEADER` and `VALUE` will be truncated if they
---- contain the `\r` or `\n` characters. The truncated values
---- will contain all characters up to (and excluding) the first occurrence of
---- `\r` or `\n`.
----
---- For reading *request* headers, use the [ngx.req.get_headers](#ngxreqget_headers) function instead.
----
----
-ngx.header={}
+---@alias ngx.location.capture.uri string
+
+---@class ngx.location.capture.arg : table
+---@field [1] ngx.location.capture.uri # request uri
+---@field [2] ngx.location.capture.options? # request options
---- Issues a synchronous but still non-blocking *Nginx Subrequest* using `uri`.
+--- Issues a synchronous but still non-blocking *NGINX Subrequest* using `uri`.
---
---- Nginx's subrequests provide a powerful way to make non-blocking internal requests to other locations configured with disk file directory or *any* other Nginx C modules like `ngx_proxy`, `ngx_fastcgi`, `ngx_memc`,
+--- NGINX's subrequests provide a powerful way to make non-blocking internal requests to other locations configured with disk file directories or *any* other NGINX C modules like `ngx_proxy`, `ngx_fastcgi`, `ngx_memc`,
--- `ngx_postgres`, `ngx_drizzle`, and even ngx_lua itself and etc etc etc.
---
--- Also note that subrequests just mimic the HTTP interface but there is *no* extra HTTP/TCP traffic *nor* IPC involved. Everything works internally, efficiently, on the C level.
---
---- Subrequests are completely different from HTTP 301/302 redirection (via [ngx.redirect](#ngxredirect)) and internal redirection (via [ngx.exec](#ngxexec)).
+--- Subrequests are completely different from HTTP 301/302 redirection (via `ngx.redirect`) and internal redirection (via `ngx.exec`).
---
---- You should always read the request body (by either calling [ngx.req.read_body](#ngxreqread_body) or configuring [lua_need_request_body](#lua_need_request_body) on) before initiating a subrequest.
+--- You should always read the request body (by either calling `ngx.req.read_body` or configuring `lua_need_request_body` on) before initiating a subrequest.
---
---- This API function (as well as [ngx.location.capture_multi](#ngxlocationcapture_multi)) always buffers the whole response body of the subrequest in memory. Thus, you should use [cosockets](#ngxsockettcp)
+--- This API function (as well as `ngx.location.capture_multi`) always buffers the whole response body of the subrequest in memory. Thus, you should use `cosockets`
--- and streaming processing instead if you have to handle large subrequest responses.
---
--- Here is a basic example:
---
--- ```lua
----
--- res = ngx.location.capture(uri)
--- ```
---
--- Returns a Lua table with 4 slots: `res.status`, `res.header`, `res.body`, and `res.truncated`.
---
---- `res.status` holds the response status code for the subrequest response.
----
---- `res.header` holds all the response headers of the
---- subrequest and it is a normal Lua table. For multi-value response headers,
---- the value is a Lua (array) table that holds all the values in the order that
---- they appear. For instance, if the subrequest response headers contain the following
---- lines:
----
---- ```bash
----
---- Set-Cookie: a=3
---- Set-Cookie: foo=bar
---- Set-Cookie: baz=blah
---- ```
----
---- Then `res.header["Set-Cookie"]` will be evaluated to the table value
---- `{"a=3", "foo=bar", "baz=blah"}`.
----
---- `res.body` holds the subrequest's response body data, which might be truncated. You always need to check the `res.truncated` boolean flag to see if `res.body` contains truncated data. The data truncation here can only be caused by those unrecoverable errors in your subrequests like the cases that the remote end aborts the connection prematurely in the middle of the response body data stream or a read timeout happens when your subrequest is receiving the response body data from the remote.
----
--- URI query strings can be concatenated to URI itself, for instance,
---
--- ```lua
----
--- res = ngx.location.capture('/foo/bar?a=3&b=4')
--- ```
---
--- Named locations like `@foo` are not allowed due to a limitation in
---- the Nginx core. Use normal locations combined with the `internal` directive to
+--- the NGINX core. Use normal locations combined with the `internal` directive to
--- prepare internal-only locations.
---
---- An optional option table can be fed as the second
---- argument, which supports the options:
----
---- * `method`
---- specify the subrequest's request method, which only accepts constants like `ngx.HTTP_POST`.
---- * `body`
---- specify the subrequest's request body (string value only).
---- * `args`
---- specify the subrequest's URI query arguments (both string value and Lua tables are accepted)
---- * `ctx`
---- specify a Lua table to be the [ngx.ctx](#ngxctx) table for the subrequest. It can be the current request's [ngx.ctx](#ngxctx) table, which effectively makes the parent and its subrequest to share exactly the same context table. This option was first introduced in the `v0.3.1rc25` release.
---- * `vars`
---- take a Lua table which holds the values to set the specified Nginx variables in the subrequest as this option's value. This option was first introduced in the `v0.3.1rc31` release.
---- * `copy_all_vars`
---- specify whether to copy over all the Nginx variable values of the current request to the subrequest in question. modifications of the Nginx variables in the subrequest will not affect the current (parent) request. This option was first introduced in the `v0.3.1rc31` release.
---- * `share_all_vars`
---- specify whether to share all the Nginx variables of the subrequest with the current (parent) request. modifications of the Nginx variables in the subrequest will affect the current (parent) request. Enabling this option may lead to hard-to-debug issues due to bad side-effects and is considered bad and harmful. Only enable this option when you completely know what you are doing.
---- * `always_forward_body`
---- when set to true, the current (parent) request's request body will always be forwarded to the subrequest being created if the `body` option is not specified. The request body read by either [ngx.req.read_body()](#ngxreqread_body) or [lua_need_request_body on](#lua_need_request_body) will be directly forwarded to the subrequest without copying the whole request body data when creating the subrequest (no matter the request body data is buffered in memory buffers or temporary files). By default, this option is `false` and when the `body` option is not specified, the request body of the current (parent) request is only forwarded when the subrequest takes the `PUT` or `POST` request method.
+--- An optional options table can be passed as the second argument; see the `ngx.location.capture.options` fields described above.
---
--- Issuing a POST subrequest, for example, can be done as follows
---
--- ```lua
----
--- res = ngx.location.capture(
--- '/foo/bar',
--- { method = ngx.HTTP_POST, body = 'hello, world' }
@@ -4370,7 +3657,6 @@ ngx.header={}
--- The `args` option can specify extra URI arguments, for instance,
---
--- ```lua
----
--- ngx.location.capture('/foo?a=1',
--- { args = { b = 3, c = ':' } }
--- )
@@ -4379,17 +3665,15 @@ ngx.header={}
--- is equivalent to
---
--- ```lua
----
--- ngx.location.capture('/foo?a=1&b=3&c=%3a')
--- ```
---
--- that is, this method will escape argument keys and values according to URI rules and
---- concatenate them together into a complete query string. The format for the Lua table passed as the `args` argument is identical to the format used in the [ngx.encode_args](#ngxencode_args) method.
+--- concatenate them together into a complete query string. The format for the Lua table passed as the `args` argument is identical to the format used in the `ngx.encode_args` method.
---
--- The `args` option can also take plain query strings:
---
--- ```lua
----
--- ngx.location.capture('/foo?a=1',
--- { args = 'b=3&c=%3a' }
--- )
@@ -4397,15 +3681,14 @@ ngx.header={}
---
--- This is functionally identical to the previous examples.
---
---- The `share_all_vars` option controls whether to share Nginx variables among the current request and its subrequests.
---- If this option is set to `true`, then the current request and associated subrequests will share the same Nginx variable scope. Hence, changes to Nginx variables made by a subrequest will affect the current request.
+--- The `share_all_vars` option controls whether to share NGINX variables among the current request and its subrequests.
+--- If this option is set to `true`, then the current request and associated subrequests will share the same NGINX variable scope. Hence, changes to NGINX variables made by a subrequest will affect the current request.
---
--- Care should be taken in using this option as variable scope sharing can have unexpected side effects. The `args`, `vars`, or `copy_all_vars` options are generally preferable instead.
---
--- This option is set to `false` by default
---
--- ```nginx
----
--- location /other {
--- set $dog "$dog world";
--- echo "$uri dog: $dog";
@@ -4425,15 +3708,12 @@ ngx.header={}
---
--- Accessing location `/lua` gives
---
----
--- /other dog: hello world
--- /lua: hello world
---
----
---- The `copy_all_vars` option provides a copy of the parent request's Nginx variables to subrequests when such subrequests are issued. Changes made to these variables by such subrequests will not affect the parent request or any other subrequests sharing the parent request's variables.
+--- The `copy_all_vars` option provides a copy of the parent request's NGINX variables to subrequests when such subrequests are issued. Changes made to these variables by such subrequests will not affect the parent request or any other subrequests sharing the parent request's variables.
---
--- ```nginx
----
--- location /other {
--- set $dog "$dog world";
--- echo "$uri dog: $dog";
@@ -4453,11 +3733,9 @@ ngx.header={}
---
--- Request `GET /lua` will give the output
---
----
--- /other dog: hello world
--- /lua: hello
---
----
--- Note that if both `share_all_vars` and `copy_all_vars` are set to true, then `share_all_vars` takes precedence.
---
--- In addition to the two settings above, it is possible to specify
@@ -4465,10 +3743,9 @@ ngx.header={}
--- variables are set after the sharing or copying of variables has been
--- evaluated, and provides a more efficient method of passing specific
--- values to a subrequest over encoding them as URL arguments and
---- unescaping them in the Nginx config file.
+--- unescaping them in the NGINX config file.
---
--- ```nginx
----
--- location /other {
--- content_by_lua_block {
--- ngx.say("dog = ", ngx.var.dog)
@@ -4490,15 +3767,12 @@ ngx.header={}
---
--- Accessing `/lua` will yield the output
---
----
--- dog = hello
--- cat = 32
---
----
---- The `ctx` option can be used to specify a custom Lua table to serve as the [ngx.ctx](#ngxctx) table for the subrequest.
+--- The `ctx` option can be used to specify a custom Lua table to serve as the `ngx.ctx` table for the subrequest.
---
--- ```nginx
----
--- location /sub {
--- content_by_lua_block {
--- ngx.ctx.foo = "bar";
@@ -4517,15 +3791,12 @@ ngx.header={}
---
--- Then request `GET /lua` gives
---
----
--- bar
--- nil
---
----
---- It is also possible to use this `ctx` option to share the same [ngx.ctx](#ngxctx) table between the current (parent) request and the subrequest:
+--- It is also possible to use this `ctx` option to share the same `ngx.ctx` table between the current (parent) request and the subrequest:
---
--- ```nginx
----
--- location /sub {
--- content_by_lua_block {
--- ngx.ctx.foo = "bar";
@@ -4541,38 +3812,194 @@ ngx.header={}
---
--- Request `GET /lua` yields the output
---
----
--- bar
---
----
---- Note that subrequests issued by [ngx.location.capture](#ngxlocationcapture) inherit all the
+--- Note that subrequests issued by `ngx.location.capture` inherit all the
--- request headers of the current request by default and that this may have unexpected side effects on the
--- subrequest responses. For example, when using the standard `ngx_proxy` module to serve
--- subrequests, an "Accept-Encoding: gzip" header in the main request may result
--- in gzipped responses that cannot be handled properly in Lua code. Original request headers should be ignored by setting
---- [proxy_pass_request_headers](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_request_headers) to `off` in subrequest locations.
+--- `proxy_pass_request_headers` to `off` in subrequest locations.
---
--- When the `body` option is not specified and the `always_forward_body` option is false (the default value), the `POST` and `PUT` subrequests will inherit the request bodies of the parent request (if any).
---
---- There is a hard-coded upper limit on the number of concurrent subrequests possible for every main request. In older versions of Nginx, the limit was `50` concurrent subrequests and in more recent versions, Nginx `1.1.x` onwards, this was increased to `200` concurrent subrequests. When this limit is exceeded, the following error message is added to the `error.log` file:
----
+--- There is a hard-coded upper limit on the number of concurrent subrequests possible for every main request. In older versions of NGINX, the limit was `50` concurrent subrequests and in more recent versions, NGINX `1.1.x` onwards, this was increased to `200` concurrent subrequests. When this limit is exceeded, the following error message is added to the `error.log` file:
---
--- [error] 13983#0: *1 subrequests cycle while processing "/uri"
---
+--- The limit can be manually modified if required by editing the definition of the `NGX_HTTP_MAX_SUBREQUESTS` macro in the `nginx/src/http/ngx_http_request.h` file in the NGINX source tree.
---
---- The limit can be manually modified if required by editing the definition of the `NGX_HTTP_MAX_SUBREQUESTS` macro in the `nginx/src/http/ngx_http_request.h` file in the Nginx source tree.
----
---- Please also refer to restrictions on capturing locations configured by [subrequest directives of other modules](#locations-configured-by-subrequest-directives-of-other-modules).
+--- Please also refer to restrictions on capturing locations configured by subrequest directives of other modules.
---
----@param uri string
----@param options string | table<string,string>
----@return any
+---@param uri ngx.location.capture.uri
+---@param options? ngx.location.capture.options
+---@return ngx.location.capture.response
function ngx.location.capture(uri, options) end
---- Parse the http time string (as returned by [ngx.http_time](#ngxhttp_time)) into seconds. Returns the seconds or `nil` if the input string is in bad forms.
+
+--- Just like `ngx.location.capture`, but supports multiple subrequests running in parallel.
+---
+--- This function issues several parallel subrequests specified by the input table and returns their results in the same order. For example,
+---
+--- ```lua
+--- local res1, res2, res3 = ngx.location.capture_multi{
+--- { "/foo", { args = "a=3&b=4" } },
+--- { "/bar" },
+--- { "/baz", { method = ngx.HTTP_POST, body = "hello" } },
+--- }
+---
+--- if res1.status == ngx.HTTP_OK then
+--- ...
+--- end
+---
+--- if res2.body == "BLAH" then
+--- ...
+--- end
+--- ```
+---
+--- This function will not return until all the subrequests terminate.
+--- The total latency is the longest latency of the individual subrequests rather than the sum.
+---
+--- Lua tables can be used for both requests and responses when the number of subrequests to be issued is not known in advance:
+---
+--- ```lua
+--- -- construct the requests table
+--- local reqs = {}
+--- table.insert(reqs, { "/mysql" })
+--- table.insert(reqs, { "/postgres" })
+--- table.insert(reqs, { "/redis" })
+--- table.insert(reqs, { "/memcached" })
+---
+--- -- issue all the requests at once and wait until they all return
+--- local resps = { ngx.location.capture_multi(reqs) }
+---
+--- -- loop over the responses table
+--- for i, resp in ipairs(resps) do
+--- -- process the response table "resp"
+--- end
+--- ```
+---
+--- The `ngx.location.capture` function is just a special form
+--- of this function. Logically speaking, the `ngx.location.capture` can be implemented like this
+---
+--- ```lua
+--- ngx.location.capture = function (uri, args)
+--- return ngx.location.capture_multi({ {uri, args} })
+--- end
+--- ```
+---
+--- Please also refer to restrictions on capturing locations configured by subrequest directives of other modules.
+---
+---@param args ngx.location.capture.arg[]
+---@return ngx.location.capture.response ...
+function ngx.location.capture_multi(args) end
+
+
+--- Set, add to, or clear the current request's `HEADER` response header that is to be sent.
+---
+--- Underscores (`_`) in the header names will be replaced by hyphens (`-`) by default. This transformation can be turned off via the `lua_transform_underscores_in_response_headers` directive.
+---
+--- The header names are matched case-insensitively.
+---
+--- ```lua
+--- -- equivalent to ngx.header["Content-Type"] = 'text/plain'
+--- ngx.header.content_type = 'text/plain';
+---
+--- ngx.header["X-My-Header"] = 'blah blah';
+--- ```
+---
+--- Multi-value headers can be set this way:
+---
+--- ```lua
+--- ngx.header['Set-Cookie'] = {'a=32; path=/', 'b=4; path=/'}
+--- ```
+---
+--- will yield
+---
+--- ```bash
+--- Set-Cookie: a=32; path=/
+--- Set-Cookie: b=4; path=/
+--- ```
+---
+--- in the response headers.
+---
+--- Only Lua tables are accepted (only the last element in the table will take effect for standard headers such as `Content-Type` that only accept a single value).
+---
+--- ```lua
+--- ngx.header.content_type = {'a', 'b'}
+--- ```
+---
+--- is equivalent to
+---
+--- ```lua
+--- ngx.header.content_type = 'b'
+--- ```
+---
+--- Setting a slot to `nil` effectively removes it from the response headers:
+---
+--- ```lua
+--- ngx.header["X-My-Header"] = nil;
+--- ```
+---
+--- The same applies to assigning an empty table:
+---
+--- ```lua
+--- ngx.header["X-My-Header"] = {};
+--- ```
+---
+--- Setting `ngx.header.HEADER` after sending out response headers (either explicitly with `ngx.send_headers` or implicitly with `ngx.print` and similar) will log an error message.
+---
+--- Reading `ngx.header.HEADER` will return the value of the response header named `HEADER`.
+---
+--- Underscores (`_`) in the header names will also be replaced by dashes (`-`) and the header names will be matched case-insensitively. If the response header is not present at all, `nil` will be returned.
+---
+--- This is particularly useful in the context of `header_filter_by_lua*`, for example:
---
--- ```nginx
+--- location /test {
+--- set $footer '';
+---
+--- proxy_pass http://some-backend;
+---
+--- header_filter_by_lua_block {
+--- if ngx.header["X-My-Header"] == "blah" then
+--- ngx.var.footer = "some value"
+--- end
+--- }
+---
+--- echo_after_body $footer;
+--- }
+--- ```
+---
+--- For multi-value headers, all of the values of the header will be collected in order and returned as a Lua table. For example, response headers
+---
+--- Foo: bar
+--- Foo: baz
+---
+--- will result in
+---
+--- ```lua
+--- {"bar", "baz"}
+--- ```
+---
+--- being returned when reading `ngx.header.Foo`.
---
+--- Note that `ngx.header` is not a normal Lua table and as such, it is not possible to iterate through it using the Lua `ipairs` function.
+---
+--- Note: `HEADER` and `VALUE` will be truncated if they
+--- contain the `\r` or `\n` characters. The truncated values
+--- will contain all characters up to (and excluding) the first occurrence of
+--- `\r` or `\n`.
+---
+--- For reading *request* headers, use the `ngx.req.get_headers` function instead.
+---
+---@type table<string, any>
+ngx.header = {}
+
+
+--- Parses the HTTP time string (as returned by `ngx.http_time`) into seconds. Returns the seconds, or `nil` if the input string is malformed.
+---
+--- ```lua
--- local time = ngx.parse_http_time("Thu, 18 Nov 2010 11:27:35 GMT")
--- if time == nil then
--- ...
@@ -4580,13 +4007,13 @@ function ngx.location.capture(uri, options) end
--- ```
---
---@param str string
----@return sec number
+---@return number?
function ngx.parse_http_time(str) end
---- Returns a formated string can be used as the http header time (for example, being used in `Last-Modified` header). The parameter `sec` is the time stamp in seconds (like those returned from [ngx.time](#ngxtime)).
----
---- ```nginx
+
+--- Returns a formatted string that can be used as the HTTP header time (for example, in the `Last-Modified` header). The parameter `sec` is the time stamp in seconds (like those returned from `ngx.time`).
---
+--- ```lua
--- ngx.say(ngx.http_time(1290079655))
--- -- yields "Thu, 18 Nov 2010 11:27:35 GMT"
--- ```
@@ -4595,76 +4022,58 @@ function ngx.parse_http_time(str) end
---@return string
function ngx.http_time(sec) end
-function ngx.get_now() end
-
---- Returns current date (in the format `yyyy-mm-dd`) from the Nginx cached time (no syscall involved unlike Lua's date library).
----
---- This is the local time.
----
----@return string
-function ngx.get_today() end
-
--- Sleeps for the specified seconds without blocking. One can specify time resolution up to 0.001 seconds (i.e., one millisecond).
---
---- Behind the scene, this method makes use of the Nginx timers.
----
---- Since the `0.7.20` release, The `0` time argument can also be specified.
+--- Behind the scenes, this method makes use of the NGINX timers.
---
---- This method was introduced in the `0.5.0rc30` release.
+--- The `0` time argument can also be specified.
---
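+--- For example, a minimal sketch of a retry loop with backoff (`connect_to_backend` is a hypothetical helper):
+---
+--- ```lua
+--- for attempt = 1, 3 do
+---     if connect_to_backend() then
+---         break
+---     end
+---     -- yields the current light thread; other requests keep being served
+---     ngx.sleep(0.5 * attempt)
+--- end
+--- ```
+---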
---@param seconds number
function ngx.sleep(seconds) end
---- Forcibly updates the Nginx current time cache. This call involves a syscall and thus has some overhead, so do not abuse it.
----
---- This API was first introduced in `v0.3.1rc32`.
+--- Forcibly updates the NGINX current time cache. This call involves a syscall and thus has some overhead, so do not abuse it.
---
function ngx.update_time() end
---- Returns a floating-point number for the elapsed time in seconds (including milliseconds as the decimal part) from the epoch for the current time stamp from the Nginx cached time (no syscall involved unlike Lua's date library).
----
---- You can forcibly update the Nginx time cache by calling [ngx.update_time](#ngxupdate_time) first.
+--- Returns a floating-point number for the elapsed time in seconds (including milliseconds as the decimal part) from the epoch for the current time stamp from the NGINX cached time (no syscall involved unlike Lua's date library).
---
---- This API was first introduced in `v0.3.1rc32`.
+--- You can forcibly update the NGINX time cache by calling `ngx.update_time` first.
---
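+--- For example, a minimal sketch of measuring elapsed time with the cached clock:
+---
+--- ```lua
+--- local t0 = ngx.now()
+--- ngx.sleep(0.25)
+--- ngx.update_time()  -- refresh the time cache before reading it again
+--- ngx.log(ngx.INFO, "elapsed: ", ngx.now() - t0, " seconds")
+--- ```
+---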
---@return number
function ngx.now() end
---- Returns the current time stamp (in the format `yyyy-mm-dd hh:mm:ss`) of the Nginx cached time (no syscall involved unlike Lua's [os.date](https://www.lua.org/manual/5.1/manual.html#pdf-os.date) function).
+--- Returns the current time stamp (in the format `yyyy-mm-dd hh:mm:ss`) of the NGINX cached time (no syscall involved unlike Lua's `os.date` function).
---
---- This is the local time.
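+--- This uses the local timezone.
+---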
---@return string
function ngx.localtime() end
---- Returns the current time stamp (in the format `yyyy-mm-dd hh:mm:ss`) of the Nginx cached time (no syscall involved unlike Lua's [os.date](https://www.lua.org/manual/5.1/manual.html#pdf-os.date) function).
+--- Returns the current time stamp (in the format `yyyy-mm-dd hh:mm:ss`) of the NGINX cached time (no syscall involved unlike Lua's `os.date` function).
---
---- This is the UTC time.
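+--- This is the UTC time.
+---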
+---@return string
function ngx.utctime() end
-function ngx.get_now_ts() end
-
---- Returns a formatted string can be used as the cookie expiration time. The parameter `sec` is the time stamp in seconds (like those returned from [ngx.time](#ngxtime)).
----
---- ```nginx
+--- Returns a formatted string that can be used as the cookie expiration time. The parameter `sec` is the time stamp in seconds (like those returned from `ngx.time`).
---
+--- ```lua
--- ngx.say(ngx.cookie_time(1290079655))
--- -- yields "Thu, 18-Nov-10 11:27:35 GMT"
--- ```
---
---@param sec number
----@return str string
+---@return string
function ngx.cookie_time(sec) end
---- Returns current date (in the format `yyyy-mm-dd`) from the Nginx cached time (no syscall involved unlike Lua's date library).
+--- Returns current date (in the format `yyyy-mm-dd`) from the NGINX cached time (no syscall involved unlike Lua's date library).
+---
+--- This uses the local timezone.
---
---- This is the local time.
---@return string
function ngx.today() end
----Returns the elapsed seconds from the epoch for the current time stamp from the Nginx cached time (no syscall involved unlike Lua's date library).
+--- Returns the elapsed seconds from the epoch for the current time stamp from the NGINX cached time (no syscall involved unlike Lua's date library).
---
----Updates of the Nginx time cache can be forced by calling [ngx.update_time](#ngxupdate_time) first.
+--- Updates of the NGINX time cache can be forced by calling `ngx.update_time` first.
---
---@return integer
function ngx.time() end
@@ -4673,21 +4082,20 @@ function ngx.time() end
---
--- Lua `nil` arguments are accepted and result in literal `"nil"` string while Lua booleans result in literal `"true"` or `"false"` string outputs. And the `ngx.null` constant will yield the `"null"` string output.
---
---- The `log_level` argument can take constants like `ngx.ERR` and `ngx.WARN`. Check out [Nginx log level constants](#nginx-log-level-constants) for details.
+--- The `level` argument can take constants like `ngx.ERR` and `ngx.WARN`.
---
---- There is a hard coded `2048` byte limitation on error message lengths in the Nginx core. This limit includes trailing newlines and leading time stamps. If the message size exceeds this limit, Nginx will truncate the message text accordingly. This limit can be manually modified by editing the `NGX_MAX_ERROR_STR` macro definition in the `src/core/ngx_log.h` file in the Nginx source tree.
+--- There is a hard coded `2048` byte limitation on error message lengths in the NGINX core. This limit includes trailing newlines and leading time stamps. If the message size exceeds this limit, NGINX will truncate the message text accordingly. This limit can be manually modified by editing the `NGX_MAX_ERROR_STR` macro definition in the `src/core/ngx_log.h` file in the NGINX source tree.
---
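+--- For example, a minimal sketch (the `key` variable here is hypothetical):
+---
+--- ```lua
+--- ngx.log(ngx.WARN, "cache miss for key ", key, ", falling back to the database")
+--- ```
+---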
----@param log_level ngx.log.level
+---@param level ngx.log.level
---@param ... string|number|'nil'|'ngx.null'
-function ngx.log(log_level,...) end
+function ngx.log(level, ...) end
---- Explicitly specify the end of the response output stream. In the case of HTTP 1.1 chunked encoded output, it will just trigger the Nginx core to send out the "last chunk".
+--- Explicitly specify the end of the response output stream. In the case of HTTP 1.1 chunked encoded output, it will just trigger the NGINX core to send out the "last chunk".
---
--- When you disable the HTTP 1.1 keep-alive feature for your downstream connections, you can rely on well-written HTTP clients to close the connection actively for you when you call this method. This trick can be used to do background jobs without making the HTTP clients wait on the connection, as in the following example:
---
--- ```nginx
----
--- location = /async {
--- keepalive_timeout 0;
--- content_by_lua_block {
@@ -4698,30 +4106,29 @@ function ngx.log(log_level,...) end
--- }
--- ```
---
---- But if you create subrequests to access other locations configured by Nginx upstream modules, then you should configure those upstream modules to ignore client connection abortions if they are not by default. For example, by default the standard [ngx_http_proxy_module](http://nginx.org/en/docs/http/ngx_http_proxy_module.html) will terminate both the subrequest and the main request as soon as the client closes the connection, so it is important to turn on the [proxy_ignore_client_abort](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort) directive in your location block configured by [ngx_http_proxy_module](http://nginx.org/en/docs/http/ngx_http_proxy_module.html):
+--- But if you create subrequests to access other locations configured by NGINX upstream modules, then you should configure those upstream modules to ignore client connection abortions if they do not do so by default. For example, by default the standard `ngx_http_proxy_module` will terminate both the subrequest and the main request as soon as the client closes the connection, so it is important to turn on the `proxy_ignore_client_abort` directive in your location block configured by `ngx_http_proxy_module`:
---
--- ```nginx
----
--- proxy_ignore_client_abort on;
--- ```
---
---- A better way to do background jobs is to use the [ngx.timer.at](#ngxtimerat) API.
+--- A better way to do background jobs is to use the `ngx.timer.at` API.
---
---- Since `v0.8.3` this function returns `1` on success, or returns `nil` and a string describing the error otherwise.
+--- Returns `1` on success, or returns `nil` and a string describing the error otherwise.
---
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function ngx.eof() end
--- Emits arguments concatenated to the HTTP client (as response body). If response headers have not been sent, this function will send headers out first and then output body data.
---
---- Since `v0.8.3` this function returns `1` on success, or returns `nil` and a string describing the error otherwise.
+--- Returns `1` on success, or returns `nil` and a string describing the error otherwise.
---
--- Lua `nil` values will output `"nil"` strings and Lua boolean values will output `"true"` and `"false"` literal strings respectively.
---
--- Nested arrays of strings are permitted and the elements in the arrays will be sent one by one:
---
--- ```lua
----
--- local table = {
--- "hello, ",
--- {"world: ", true, " or ", false,
@@ -4733,7 +4140,6 @@ function ngx.eof() end
--- will yield the output
---
--- ```bash
----
--- hello, world: true or false: nil
--- ```
---
@@ -4741,15 +4147,16 @@ function ngx.eof() end
---
--- The `ngx.null` constant will yield the `"null"` string output.
---
---- This is an asynchronous call and will return immediately without waiting for all the data to be written into the system send buffer. To run in synchronous mode, call `ngx.flush(true)` after calling `ngx.print`. This can be particularly useful for streaming output. See [ngx.flush](#ngxflush) for more details.
+--- This is an asynchronous call and will return immediately without waiting for all the data to be written into the system send buffer. To run in synchronous mode, call `ngx.flush(true)` after calling `ngx.print`. This can be particularly useful for streaming output. See `ngx.flush` for more details.
---
---- Please note that both `ngx.print` and [ngx.say](#ngxsay) will always invoke the whole Nginx output body filter chain, which is an expensive operation. So be careful when calling either of these two in a tight loop; buffer the data yourself in Lua and save the calls.
+--- Please note that both `ngx.print` and `ngx.say` will always invoke the whole NGINX output body filter chain, which is an expensive operation. So be careful when calling either of these two in a tight loop; buffer the data yourself in Lua and save the calls.
---
---@param ... string|string[]
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function ngx.print(...) end
---- Just as [ngx.print](#ngxprint) but also emit a trailing newline.
+--- Just as `ngx.print` but also emits a trailing newline.
---
---@param ... string|string[]
---@return boolean ok
@@ -4758,28 +4165,29 @@ function ngx.say(...) end
--- Explicitly send out the response headers.
---
---- Since `v0.8.3` this function returns `1` on success, or returns `nil` and a string describing the error otherwise.
+--- Returns `1` on success, or returns `nil` and a string describing the error otherwise.
---
---- Note that there is normally no need to manually send out response headers as ngx_lua will automatically send headers out
---- before content is output with [ngx.say](#ngxsay) or [ngx.print](#ngxprint) or when [content_by_lua*](#content_by_lua) exits normally.
+--- Note that there is normally no need to manually send out response headers as ngx_lua will automatically send headers out before content is output with `ngx.say` or `ngx.print` or when `content_by_lua*` exits normally.
---
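+--- For example, a minimal sketch of sending the headers early and checking for errors:
+---
+--- ```lua
+--- ngx.header["X-Request-Id"] = "1234"  -- hypothetical header value
+--- local ok, err = ngx.send_headers()
+--- if not ok then
+---     ngx.log(ngx.ERR, "failed to send headers: ", err)
+--- end
+--- ```
+---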
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function ngx.send_headers() end
--- Flushes response output to the client.
---
---- `ngx.flush` accepts an optional boolean `wait` argument (Default: `false`) first introduced in the `v0.3.1rc34` release. When called with the default argument, it issues an asynchronous call (Returns immediately without waiting for output data to be written into the system send buffer). Calling the function with the `wait` argument set to `true` switches to synchronous mode.
+--- `ngx.flush` accepts an optional boolean `wait` argument (default: `false`). When called with the default argument, it issues an asynchronous call (it returns immediately without waiting for output data to be written into the system send buffer). Calling the function with the `wait` argument set to `true` switches to synchronous mode.
---
---- In synchronous mode, the function will not return until all output data has been written into the system send buffer or until the [send_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout) setting has expired. Note that using the Lua coroutine mechanism means that this function does not block the Nginx event loop even in the synchronous mode.
+--- In synchronous mode, the function will not return until all output data has been written into the system send buffer or until the `send_timeout` setting has expired. Note that using the Lua coroutine mechanism means that this function does not block the NGINX event loop even in the synchronous mode.
---
---- When `ngx.flush(true)` is called immediately after [ngx.print](#ngxprint) or [ngx.say](#ngxsay), it causes the latter functions to run in synchronous mode. This can be particularly useful for streaming output.
+--- When `ngx.flush(true)` is called immediately after `ngx.print` or `ngx.say`, it causes the latter functions to run in synchronous mode. This can be particularly useful for streaming output.
---
---- Note that `ngx.flush` is not functional when in the HTTP 1.0 output buffering mode. See [HTTP 1.0 support](#http-10-support).
+--- Note that `ngx.flush` is not functional when in the HTTP 1.0 output buffering mode. See the "HTTP 1.0 support" section of the ngx_lua documentation.
---
---- Since `v0.8.3` this function returns `1` on success, or returns `nil` and a string describing the error otherwise.
+--- Returns `1` on success, or returns `nil` and a string describing the error otherwise.
---
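+--- For example, a minimal sketch of streaming output in synchronous mode:
+---
+--- ```lua
+--- for i = 1, 3 do
+---     ngx.say("chunk ", i)
+---     ngx.flush(true)  -- wait until this chunk reaches the system send buffer
+---     ngx.sleep(1)
+--- end
+--- ```
+---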
---@param wait boolean
----@return boolean,string@ok,err
+---@return boolean ok
+---@return string? error
function ngx.flush(wait) end
--- NGINX response methods
@@ -4788,7 +4196,6 @@ ngx.resp = {}
--- Returns a Lua table holding all the current response headers for the current request.
---
--- ```lua
----
--- local h, err = ngx.resp.get_headers()
---
--- if err == "truncated" then
@@ -4800,15 +4207,14 @@ ngx.resp = {}
--- end
--- ```
---
---- This function has the same signature as [ngx.req.get_headers](#ngxreqget_headers) except getting response headers instead of request headers.
+--- This function has the same signature as `ngx.req.get_headers` except getting response headers instead of request headers.
---
---- Note that a maximum of 100 response headers are parsed by default (including those with the same name) and that additional response headers are silently discarded to guard against potential denial of service attacks. Since `v0.10.13`, when the limit is exceeded, it will return a second value which is the string `"truncated"`.
----
---- This API was first introduced in the `v0.9.5` release.
+--- Note that a maximum of 100 response headers are parsed by default (including those with the same name) and that additional response headers are silently discarded to guard against potential denial of service attacks. When the limit is exceeded, it will return a second value which is the string `"truncated"`.
---
---@param max_headers number
----@param raw string
----@return table,string
+---@param raw boolean
+---@return table<string, string|string[]>
+---@return string? error
function ngx.resp.get_headers(max_headers, raw) end