                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                Things that could be nice to do in the future

 Things to do in project curl. Please tell us what you think, contribute and
 send us patches that improve things!

 Be aware that these are things that we could do, or have once been considered
 things we could do. If you want to work on any of these areas, please
 consider bringing it up for discussion first on the mailing list so that we
 all agree it is still a good idea for the project!

 All bugs documented in the KNOWN_BUGS document are subject to fixing!

 1. libcurl
 1.2 More data sharing
 1.3 struct lifreq
 1.4 signal-based resolver timeouts
 1.5 get rid of PATH_MAX
 1.6 Modified buffer size approach
 1.7 Detect when called from within callbacks
 1.8 CURLOPT_RESOLVE for any port number
 1.9 Cache negative name resolves
 1.10 auto-detect proxy
 1.11 minimize dependencies with dynamically loaded modules
 1.12 updated DNS server while running
 1.13 DNS-over-HTTPS
 1.14 Typesafe curl_easy_setopt()
 1.15 Monitor connections in the connection pool
 1.16 Try to URL encode given URL
 1.17 Add support for IRIs
 1.18 try next proxy if one doesn't work
 1.19 Timeout idle connections from the pool
 1.20 SRV and URI DNS records
 1.21 API for URL parsing/splitting
 1.23 Offer API to flush the connection pool
 1.24 TCP Fast Open for windows
 1.25 Expose tried IP addresses that failed

 2. libcurl - multi interface
 2.1 More non-blocking
 2.2 Better support for same name resolves
 2.3 Non-blocking curl_multi_remove_handle()
 2.4 Split connect and authentication process
 2.5 Edge-triggered sockets should work

 3. Documentation
 3.2 Provide cmake config-file

 4. FTP
 4.1 HOST
 4.2 Alter passive/active on failure and retry
 4.3 Earlier bad letter detection
 4.4 REST for large files
 4.5 ASCII support
 4.6 GSSAPI via Windows SSPI
 4.7 STAT for LIST without data connection
 4.8 Option to ignore private IP addresses in PASV response

 5. HTTP
 5.1 Better persistency for HTTP 1.0
 5.2 support FF3 sqlite cookie files
 5.3 Rearrange request header order
 5.4 HTTP Digest using SHA-256
 5.5 auth= in URLs
 5.6 Refuse "downgrade" redirects
 5.7 QUIC
 5.8 Leave secure cookies alone

 6. TELNET
 6.1 ditch stdin
 6.2 ditch telnet-specific select
 6.3 feature negotiation debug data

 7. SMTP
 7.1 Pipelining
 7.2 Enhanced capability support
 7.3 Add CURLOPT_MAIL_CLIENT option

 8. POP3
 8.1 Pipelining
 8.2 Enhanced capability support

 9. IMAP
 9.1 Enhanced capability support

 10. LDAP
 10.1 SASL based authentication mechanisms

 11. SMB
 11.1 File listing support
 11.2 Honor file timestamps
 11.3 Use NTLMv2
 11.4 Create remote directories

 12. New protocols
 12.1 RSYNC

 13. SSL
 13.1 Disable specific versions
 13.2 Provide mutex locking API
 13.3 Evaluate SSL patches
 13.4 Cache/share OpenSSL contexts
 13.5 Export session ids
 13.6 Provide callback for cert verification
 13.7 improve configure --with-ssl
 13.8 Support DANE
 13.10 Support SSLKEYLOGFILE
 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
 13.12 Support HSTS
 13.13 Support HPKP

 14. GnuTLS
 14.1 SSL engine stuff
 14.2 check connection

 15. WinSSL/SChannel
 15.1 Add support for client certificate authentication
 15.2 Add support for custom server certificate validation
 15.3 Add support for the --ciphers option

 16. SASL
 16.1 Other authentication mechanisms
 16.2 Add QOP support to GSSAPI authentication
 16.3 Support binary messages (i.e.: non-base64)

 17. SSH protocols
 17.1 Multiplexing
 17.2 SFTP performance
 17.3 Support better than MD5 hostkey hash
 17.4 Support CURLOPT_PREQUOTE

 18. Command line tool
 18.1 sync
 18.2 glob posts
 18.3 prevent file overwriting
 18.4 simultaneous parallel transfers
 18.6 warning when setting an option
 18.8 offer color-coded HTTP header output
 18.9 Choose the name of file in braces for complex URLs
 18.10 improve how curl works in a windows console window
 18.11 -w output to stderr
 18.12 keep running, read instructions from pipe/socket
 18.13 support metalink in http headers
 18.14 --fail without --location should treat 3xx as a failure
 18.15 --retry should resume
 18.16 send only part of --data
 18.17 consider file name from the redirected URL with -O ?

 19. Build
 19.1 roffit
 19.2 Enable PIE and RELRO by default

 20. Test suite
 20.1 SSL tunnel
 20.2 nicer lacking perl message
 20.3 more protocols supported
 20.4 more platforms supported
 20.5 Add support for concurrent connections
 20.6 Use the RFC6265 test suite

 21. Next SONAME bump
 21.1 http-style HEAD output for FTP
 21.2 combine error codes
 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

 22. Next major release
 22.1 cleanup return codes
 22.2 remove obsolete defines
 22.3 size_t
 22.4 remove several functions
 22.5 remove CURLOPT_FAILONERROR
 22.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
 22.7 remove progress meter from libcurl
 22.8 remove 'curl_httppost' from public

==============================================================================

1. libcurl

1.2 More data sharing

 curl_share_* functions already exist and work, and they can be extended to
 share more. For example, enable sharing of the ares channel and the
 connection cache.
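
 For reference, the share interface already works like this today for the DNS
 cache and cookies; sharing of the ares channel or the connection cache would
 plug into the same mechanism. A minimal single-threaded sketch (multi-
 threaded use would additionally need CURLSHOPT_LOCKFUNC/UNLOCKFUNC):

  #include <curl/curl.h>

  int main(void)
  {
    /* one share object, attached to any number of easy handles */
    CURLSH *share = curl_share_init();
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);

    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(easy, CURLOPT_SHARE, share);
    curl_easy_perform(easy);

    curl_easy_cleanup(easy);
    curl_share_cleanup(share);
    return 0;
  }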

1.3 struct lifreq

 Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
 SIOCGIFADDR on newer Solaris versions, as they claim the latter is obsolete,
 in order to properly support IPv6 interface addresses for network
 interfaces.

1.4 signal-based resolver timeouts

 libcurl built without an asynchronous resolver library uses alarm() to time
 out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
 signal handler back into the library with a sigsetjmp, which effectively
 causes libcurl to continue running within the signal handler. This is
 non-portable and could cause problems on some platforms. A discussion on the
 problem is available at https://curl.haxx.se/mail/lib-2008-09/0197.html

 Also, alarm() provides timeout resolution only to the nearest second. alarm
 ought to be replaced by setitimer on systems that support it.
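
 As a reminder of what that replacement buys us, here is a minimal standalone
 POSIX sketch (not curl code) that arms a 250 millisecond timeout with
 setitimer(), something alarm() simply cannot express:

  #include <signal.h>
  #include <string.h>
  #include <sys/time.h>

  static void on_alarm(int sig)
  {
    (void)sig; /* the resolver timeout fired */
  }

  static void arm_resolver_timeout_ms(long ms)
  {
    struct sigaction sa;
    struct itimerval it;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    memset(&it, 0, sizeof(it));
    it.it_value.tv_sec = ms / 1000;
    it.it_value.tv_usec = (ms % 1000) * 1000; /* sub-second precision */
    setitimer(ITIMER_REAL, &it, NULL);        /* one-shot: it_interval is 0 */
  }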

1.5 get rid of PATH_MAX

 Having code use and rely on PATH_MAX is not nice:
 https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

 Currently the SSH based code uses it a bit, but to remove PATH_MAX from
 there we need libssh2 to properly tell us when we pass in a too small
 buffer, and its current API (as of libssh2 1.2.7) doesn't.

1.6 Modified buffer size approach

 Current libcurl allocates a fixed 16K size buffer for download and an
 additional 16K for upload. They are always unconditionally part of the easy
 handle. If CRLF translations are requested, an additional 32K "scratch
 buffer" is allocated. A total of 64K transfer buffers in the worst case.

 First, while the handles are not actually in use these buffers could be
 freed so that lingering handles just kept in queues or whatever waste less
 memory.

 Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once
 since each needs to be individually acked and therefore libssh2 must be
 allowed to send (or receive) many separate ones in parallel to achieve high
 transfer speeds. A current libcurl build with a 16K buffer makes that
 impossible, but one with a 512K buffer will reach MUCH faster transfers. But
 allocating 512K unconditionally for all buffers just in case they would like
 to do fast SFTP transfers at some point is not a good solution either.

 Dynamically allocate buffer size depending on protocol in use in combination
 with freeing it after each individual transfer? Other suggestions?

1.7 Detect when called from within callbacks

 We should set a state variable before calling callbacks, so that we can
 subsequently add code within libcurl that returns an error if called from
 within a callback, for the cases where that isn't supported.

1.8 CURLOPT_RESOLVE for any port number

 This option allows applications to set a replacement IP address for a given
 host + port pair. Consider adding support for providing a replacement
 address for the host name on all port numbers.

 See https://github.com/curl/curl/issues/1264
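
 For reference, this is how CURLOPT_RESOLVE is used today with an explicit
 port; the entry only affects connections to port 443. The wildcard entry in
 the comment is purely a guess at what an "any port" syntax could look like
 and is not supported today:

  #include <curl/curl.h>

  static void resolve_example(void)
  {
    CURL *curl = curl_easy_init();
    struct curl_slist *host = NULL;

    /* existing behavior: pin example.com, but only for port 443 */
    host = curl_slist_append(host, "example.com:443:127.0.0.1");

    /* hypothetical "any port" entry - NOT supported today */
    /* host = curl_slist_append(host, "example.com:*:127.0.0.1"); */

    curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    curl_slist_free_all(host);
  }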

1.9 Cache negative name resolves

 A name resolve that has failed is likely to fail when made again within a
 short period of time. Currently we only cache positive responses.

1.10 auto-detect proxy

 libcurl could be made to detect the system proxy setup automatically and use
 that. On Windows, macOS and Linux desktops for example.

 The pull-request to use libproxy for this was deferred due to doubts on the
 reliability of the dependency and how to use it:
 https://github.com/curl/curl/pull/977

 libdetectproxy is a (C++) library for detecting the proxy on Windows
 https://github.com/paulharris/libdetectproxy

1.11 minimize dependencies with dynamically loaded modules

 We can create a system with loadable modules/plug-ins, where these modules
 would be the ones that link to 3rd party libs. That would allow us to avoid
 having to load ALL dependencies since only the ones needed for the protocols
 actually used by the application would have to be loaded. See
 https://github.com/curl/curl/issues/349

1.12 updated DNS server while running

 If /etc/resolv.conf gets updated while a program using libcurl is running,
 it may cause name resolves to fail unless res_init() is called. We should
 consider calling res_init() + retry once unconditionally on all name resolve
 failures to mitigate against this. Firefox works like that. Note that
 Windows doesn't have res_init() or an alternative.

 https://github.com/curl/curl/issues/2251

1.13 DNS-over-HTTPS

 By adding support for DNS-over-HTTPS curl could resolve host names using a
 totally separate name server than the standard system resolver, while at the
 same time doing so over a communication channel that enhances privacy and
 security.

 https://github.com/curl/curl/wiki/DNS-over-HTTPS

1.14 Typesafe curl_easy_setopt()

 One of the most common problems in applications using libcurl is the lack of
 type checking for curl_easy_setopt(), which happens because it accepts
 varargs and thus can take any type.

 One possible solution to this is to introduce a few different versions of
 the setopt call for the different kinds of data you can set.

  curl_easy_set_num() - sets a long value

  curl_easy_set_large() - sets a curl_off_t value

  curl_easy_set_ptr() - sets a pointer

  curl_easy_set_cb() - sets a callback PLUS its callback data
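
 A minimal sketch of what such typed setters could look like as thin wrappers
 over today's varargs call. The names are taken from the list above and none
 of them exist in libcurl yet:

  #include <curl/curl.h>

  /* hypothetical typed wrappers; the compiler now checks the value type */
  static CURLcode curl_easy_set_num(CURL *handle, CURLoption opt, long value)
  {
    return curl_easy_setopt(handle, opt, value);
  }

  static CURLcode curl_easy_set_large(CURL *handle, CURLoption opt,
                                      curl_off_t value)
  {
    return curl_easy_setopt(handle, opt, value);
  }

  static CURLcode curl_easy_set_ptr(CURL *handle, CURLoption opt, void *value)
  {
    return curl_easy_setopt(handle, opt, value);
  }

  /* usage: passing a string where a long is expected is now a diagnostic */
  /* curl_easy_set_num(handle, CURLOPT_TIMEOUT, 30L); */

 A curl_easy_set_cb() variant would additionally have to pair the function
 pointer option with its matching data-pointer option in one call.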

1.15 Monitor connections in the connection pool

 libcurl's connection cache or pool holds a number of open connections for
 the purpose of possible subsequent connection reuse. It may contain anything
 from a few up to a significant number of connections. Currently, libcurl
 leaves all connections as they are, and only when a connection is iterated
 over for matching or reuse purposes is it verified that it is still alive.

 Those connections may get closed by the server side for idleness or they may
 get an HTTP/2 ping from the peer to verify that they're still alive. By
 adding monitoring of the connections while in the pool, libcurl can detect
 dead connections (and close them) better and earlier, and it can handle
 HTTP/2 pings to keep such connections alive even when not actively doing
 transfers on them.

1.16 Try to URL encode given URL

 Given a URL that for example contains spaces, libcurl could have an option
 that would try somewhat harder than it does now and convert spaces to %20
 and perhaps URL encode byte values over 128 etc (basically do what the
 redirect following code already does).

 https://github.com/curl/curl/issues/514

1.17 Add support for IRIs

 IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
 support this, curl/libcurl would need to translate/encode the given input
 from the input string encoding into percent encoded output "over the wire".

 To make that work smoothly for curl users even on Windows, curl would
 probably need to be able to convert from several input encodings.

1.18 try next proxy if one doesn't work

 Allow an application to specify a list of proxies to try, and when failing
 to connect to the first, go on and try the next instead until the list is
 exhausted. Browsers support this feature at least when they specify proxies
 using PACs.

 https://github.com/curl/curl/issues/896

1.19 Timeout idle connections from the pool

 libcurl currently keeps connections in its connection pool for an indefinite
 period of time, until they either get reused, are noticed to have been
 closed by the server, or get pruned to make room for a new connection.

 To reduce overhead (especially for when we add monitoring of the connections
 in the pool), we should introduce a timeout so that connections that have
 been idle for N seconds get closed.

1.20 SRV and URI DNS records

 Offer support for resolving SRV and URI DNS records for libcurl to know
 which server to connect to for various protocols (including HTTP!).

1.21 API for URL parsing/splitting

 libcurl has always parsed URLs internally and never exposed any API or
 features to allow applications to do it. Still, many applications using
 libcurl need that ability. In polls to users, we've learned that many
 libcurl users would like to see and use such an API.
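
 Nothing has been designed yet, but as a conversation starter, one purely
 hypothetical shape for such an API could be an opaque handle with per-part
 getters and setters. None of the names below exist in libcurl today:

  #include <curl/curl.h>

  /* hypothetical API sketch - not part of libcurl */
  typedef struct curl_urlparts CURLURLPARTS;

  CURLURLPARTS *curl_url_split(const char *url);       /* parse a full URL */
  char *curl_url_get_part(CURLURLPARTS *p, int part);  /* scheme, host, ... */
  int curl_url_set_part(CURLURLPARTS *p, int part, const char *value);
  char *curl_url_join(CURLURLPARTS *p);                /* re-assemble */
  void curl_url_free(CURLURLPARTS *p);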

1.23 Offer API to flush the connection pool

 Sometimes applications want to flush all the existing connections kept
 alive. An API could allow a forced flush or just a forced loop that would
 properly close all connections that have been closed by the server already.

1.24 TCP Fast Open for windows

 libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
 Mac OS. Windows supports TCP Fast Open starting with Windows 10, version
 1607, and we should add support for it.

1.25 Expose tried IP addresses that failed

 When libcurl fails to connect to a host, it should be able to offer the
 application the list of IP addresses that were used in the attempt.

 https://github.com/curl/curl/issues/2126

2. libcurl - multi interface

2.1 More non-blocking

 Make sure we don't ever loop because of non-blocking sockets returning
 EWOULDBLOCK or similar. Blocking cases include:

 - Name resolves on non-windows unless c-ares or the threaded resolver is used
 - SOCKS proxy handshakes
 - file:// transfers
 - TELNET transfers
 - The "DONE" operation (post transfer protocol-specific actions) for the
   protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.

2.2 Better support for same name resolves

 If a name resolve has been initiated for name NN and a second easy handle
 wants to resolve that name as well, make it wait for the first resolve to
 end up in the cache instead of doing a second separate resolve. This is
 especially needed when adding many simultaneous handles using the same host
 name when the DNS resolver can get flooded.

2.3 Non-blocking curl_multi_remove_handle()

 The multi interface has a few API calls that assume a blocking behavior,
 like add_handle() and remove_handle(), which limits what we can do
 internally. The multi API needs to be moved even more into a single function
 that "drives" everything in a non-blocking manner and signals when something
 is done. A remove or add would then only ask for the action to get started,
 and then multi_perform() etc would still be called until the add/remove is
 completed.

2.4 Split connect and authentication process

 The multi interface treats the authentication process as part of the connect
 phase. As such any failures during authentication won't trigger the relevant
 QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

 The multi_socket API should work with edge-triggered socket events. One of
 the internal actions that needs to be improved for this to work perfectly is
 the 'maxloops' handling in transfer.c:readwrite_data().

3. Documentation

3.2 Provide cmake config-file

 A config-file package is a set of files provided by us to allow applications
 to write cmake scripts to find and use libcurl more easily. See
 https://github.com/curl/curl/issues/885

4. FTP

4.1 HOST

 HOST is a command for a client to tell which host name to use, to offer FTP
 servers name-based virtual hosting:

 https://tools.ietf.org/html/rfc7151

4.2 Alter passive/active on failure and retry

 When trying to connect passively to a server which only supports active
 connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
 connection. There could be a way to fall back to an active connection (and
 vice versa). https://curl.haxx.se/bug/feature.cgi?id=1754793

4.3 Earlier bad letter detection

 Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in
 the process to avoid doing a resolve and connect in vain.

4.4 REST for large files

 REST fix for servers not behaving well on >2GB requests. This should fail if
 the server doesn't set the pointer to the requested index. The tricky
 (impossible?) part is to figure out if the server did the right thing or
 not.

4.5 ASCII support

 FTP ASCII transfers do not follow RFC 959. They don't convert the data
 accordingly.

4.6 GSSAPI via Windows SSPI

 In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
 via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
 support for GSSAPI authentication via Windows SSPI.

4.7 STAT for LIST without data connection

 Some FTP servers allow STAT for listing directories instead of using LIST,
 and the response is then sent over the control connection instead of over
 the otherwise used data connection:
 http://www.nsftools.com/tips/RawFTP.htm#STAT

 This is not detailed in any FTP specification.

4.8 Option to ignore private IP addresses in PASV response

 Some servers respond with, and some other FTP client implementations can
 ignore, private (RFC 1918 style) IP addresses when received in PASV
 responses. Consider this for libcurl as well. See
 https://github.com/curl/curl/issues/1455

5. HTTP

5.1 Better persistency for HTTP 1.0

 "Better" support for persistent connections over HTTP 1.0
 https://curl.haxx.se/bug/feature.cgi?id=1089001

5.2 support FF3 sqlite cookie files

 Firefox 3 is changing from its former format to a sqlite database instead.
 We should consider how (lib)curl can/should support this.
 https://curl.haxx.se/bug/feature.cgi?id=1871388

5.3 Rearrange request header order

 Server implementors often make an effort to detect browsers and to reject
 clients they detect as not matching. One of the last details we cannot yet
 control in libcurl's HTTP requests, which also can be exploited to detect
 that libcurl is in fact used even when it tries to impersonate a browser, is
 the order of the request headers. I propose that we introduce a new option
 in which you give headers a value, and then when the HTTP request is built
 it sorts the headers based on that number. We could then have internally
 created headers use a default value so only headers that need to be moved
 have to be specified.

5.4 HTTP Digest using SHA-256

 RFC 7616 introduces an update to the HTTP Digest authentication
 specification, which amongst other things defines how new digest algorithms
 can be used instead of MD5, which is considered old and not recommended.

 See https://tools.ietf.org/html/rfc7616 and
 https://github.com/curl/curl/issues/1018
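
 For context, an application opts into Digest with the existing options shown
 below. Since the algorithm is carried in the server's WWW-Authenticate
 challenge, adding SHA-256 support should not require new options; a sketch
 of today's usage, which would stay the same:

  #include <curl/curl.h>

  static CURLcode fetch_with_digest(const char *url, const char *userpwd)
  {
    CURL *curl = curl_easy_init();
    CURLcode result;

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_USERPWD, userpwd);  /* "user:password" */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);

    result = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return result;
  }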

5.5 auth= in URLs

 Add the ability to specify the preferred authentication mechanism to use by
 using ;auth=<mech> in the login part of the URL.

 For example:

 http://test:pass;auth=NTLM@example.com would be equivalent to specifying
 --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.

 Additionally this should be implemented for proxy base URLs as well.

5.6 Refuse "downgrade" redirects

 See https://github.com/curl/curl/issues/226

 Consider a way to tell curl to refuse to "downgrade" protocol with a
 redirect and/or possibly a bit that refuses a redirect to change protocol
 completely.

5.7 QUIC

 The standardization process of QUIC has been taken to the IETF and can be
 followed on the IETF QUIC Mailing list
 (https://www.ietf.org/mailman/listinfo/quic). I'd like us to get on the
 bandwagon. Ideally, this would be done with a separate library/project to
 handle the binary/framing layer in a similar fashion to how HTTP/2 is
 implemented. This, to allow other projects to benefit from the work and to
 thus broaden the interest and chance of others to participate.

5.8 Leave secure cookies alone

 Non-secure origins (HTTP sites) should not be allowed to set or modify
 cookies with the 'secure' property:

 https://tools.ietf.org/html/draft-ietf-httpbis-cookie-alone-01

6. TELNET

6.1 ditch stdin

 Reading input (to send to the remote server) on stdin is a crappy solution
 for library purposes. We need to invent a good way for the application to be
 able to provide the data to send.

6.2 ditch telnet-specific select

 Make the telnet support's network select() loop go away and merge the code
 into the main transfer loop. Until this is done, the multi interface won't
 work for telnet.

6.3 feature negotiation debug data

 Add telnet feature negotiation data to the debug callback as header data.

7. SMTP

7.1 Pipelining

 Add support for pipelining emails.

7.2 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the EHLO command.

7.3 Add CURLOPT_MAIL_CLIENT option

 Rather than use the URL to specify the mail client string to present in the
 HELO and EHLO commands, libcurl should support a new CURLOPT specifically
 for specifying this data as the URL is non-standard and to be honest a bit
 of a hack ;-)

 Please see the following thread for more information:
 https://curl.haxx.se/mail/lib-2012-05/0178.html
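
 A sketch of how the proposed option could sit next to the existing SMTP
 options. CURLOPT_MAIL_CLIENT below is the hypothetical new option; everything
 else already exists:

  #include <curl/curl.h>

  static void smtp_example(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");
    curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "<sender@example.com>");

    /* today: the EHLO/HELO name rides in the URL path, e.g.
       "smtp://mail.example.com/client.example.com" */

    /* proposed: a dedicated option instead of the URL hack */
    /* curl_easy_setopt(curl, CURLOPT_MAIL_CLIENT, "client.example.com"); */
  }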

8. POP3

8.1 Pipelining

 Add support for pipelining commands.

8.2 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the CAPA command.

9. IMAP

9.1 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the CAPABILITY command.

10. LDAP

10.1 SASL based authentication mechanisms

 Currently the LDAP module only supports ldap_simple_bind_s() in order to
 bind to an LDAP server. However, this function sends username and password
 details using the simple authentication mechanism (as clear text). It should
 instead be possible to use ldap_bind_s(), specifying the security context
 information ourselves.

11. SMB

11.1 File listing support

 Add support for listing the contents of an SMB share. The output should
 probably be the same as/similar to FTP.

11.2 Honor file timestamps

 The timestamp of the transferred file should reflect that of the original
 file.

11.3 Use NTLMv2

 Currently the SMB authentication uses NTLMv1.

11.4 Create remote directories

 Support for creating remote directories when uploading a file to a directory
 that doesn't exist on the server, just like --ftp-create-dirs.

12. New protocols

12.1 RSYNC

 There's no RFC for the protocol or a URI/URL format. An implementation
 should most probably use an existing rsync library, such as librsync.

13. SSL

13.1 Disable specific versions

 Provide an option that allows for disabling specific SSL versions, such as
 SSLv2: https://curl.haxx.se/bug/feature.cgi?id=1767276

13.2 Provide mutex locking API

 Provide a libcurl API for setting mutex callbacks in the underlying SSL
 library, so that the same application code can use mutex-locking
 independently of OpenSSL or GnuTLS being used.

13.3 Evaluate SSL patches

 Evaluate/apply Gertjan van Wingerde's SSL patches:
 https://curl.haxx.se/mail/lib-2004-03/0087.html

13.4 Cache/share OpenSSL contexts

 "Look at SSL cafile - quick traces look to me like these are done on every
 request as well, when they should only be necessary once per SSL context (or
 once per handle)". The major improvement we can rather easily do is to make
 sure we don't create and kill a new SSL "context" for every request, but
 instead make one for every connection and re-use that SSL context in the
 same style connections are re-used. It will make us use slightly more
 memory, but it will let libcurl do fewer creations and deletions of SSL
 contexts.

 Technically, the "caching" is probably best implemented by getting added to
 the share interface so that easy handles that want to and can reuse the
 context specify that by sharing with the right properties set.

 https://github.com/curl/curl/issues/1110

13.5 Export session ids

 Add an interface to libcurl that enables "session IDs" to get
 exported/imported. Cris Bailiff said: "OpenSSL has functions which can
 serialise the current SSL state to a buffer of your choice, and
 recover/reset the state from such a buffer at a later date - this is used by
 mod_ssl for apache to implement an SSL session ID cache".

13.6 Provide callback for cert verification

 OpenSSL supports a callback for customised verification of the peer
 certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
 it be? There's so much that could be done if it were!
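
 Until a portable option exists, OpenSSL builds can already get at this
 through CURLOPT_SSL_CTX_FUNCTION. A minimal sketch, assuming an
 OpenSSL-backed libcurl, that installs a custom verify callback on the
 handed-over SSL_CTX:

  #include <curl/curl.h>
  #include <openssl/ssl.h>

  /* called by OpenSSL for every certificate in the chain */
  static int verify_cb(int preverify_ok, X509_STORE_CTX *store)
  {
    (void)store;
    return preverify_ok; /* custom checks would go here */
  }

  /* called by libcurl right before the TLS handshake */
  static CURLcode sslctx_cb(CURL *curl, void *sslctx, void *userptr)
  {
    (void)curl;
    (void)userptr;
    SSL_CTX_set_verify((SSL_CTX *)sslctx, SSL_VERIFY_PEER, verify_cb);
    return CURLE_OK;
  }

  static void setup(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);
  }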

13.7 improve configure --with-ssl

 Make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
 then NSS...

13.8 Support DANE

 DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
 keys and certs over DNS using DNSSEC as an alternative to the CA model.
 https://www.rfc-editor.org/rfc/rfc6698.txt

 An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
 (https://curl.haxx.se/mail/lib-2013-03/0075.html) but it was a too simple
 approach. See Daniel's comments:
 https://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
 correct library to base this development on.

 Björn Stenberg wrote a separate initial take on DANE that was never
 completed.

13.10 Support SSLKEYLOGFILE

 When this environment variable is set, Firefox and Chrome dump their master
 TLS keys to the file name it specifies. This allows tools like for example
 Wireshark to capture and decipher TLS traffic to/from those clients. libcurl
 could be made to support this more widely (presumably this already works
 when built with NSS). Peter Wu made an OpenSSL preload library to make this
 possible; it can be used as inspiration and guidance:
 https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.c

13.11 Support intermediate & root pinning for PINNEDPUBLICKEY

 CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
 certificates when comparing the pinned keys. Therefore it is not compatible
 with "HTTP Public Key Pinning", where intermediate and root certificates can
 be pinned as well. This is very useful as it prevents webadmins from
 "locking themselves out of their servers".

 Adding this feature would make curl's pinning 100% compatible with HPKP and
 allow more flexible pinning.
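
 Today a pin only ever matches the server's leaf key. The sketch below shows
 current usage plus, commented out, the kind of backup pin on a CA key that
 intermediate/root support would make meaningful; the hash values are
 placeholders:

  #include <curl/curl.h>

  static void pin_example(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* works today: sha256 pin that must match the server's leaf key */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//placeholderleafkeyhash=");

    /* with intermediate & root pinning, a second pin on the issuing CA key
       could satisfy the check too - hypothetical today: */
    /*
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//placeholderleafkeyhash=;"
                     "sha256//placeholdercakeyhash=");
    */
  }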

13.12 Support HSTS

 "HTTP Strict Transport Security" is a TOFU (trust on first use), time-based
 feature indicated by an HTTP header sent by the webserver. It is widely used
 in browsers and its purpose is to prevent insecure HTTP connections after a
 previous HTTPS connection. It protects against SSL-stripping attacks.

 Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
 RFC 6797: https://tools.ietf.org/html/rfc6797

13.13 Support HPKP

 "HTTP Public Key Pinning" is a TOFU (trust on first use), time-based feature
 indicated by an HTTP header sent by the webserver. Its purpose is to prevent
 man-in-the-middle attacks by trusted CAs by allowing webadmins to specify
 which CAs/certificates/public keys to trust when connecting to their
 websites.

 It can be built based on PINNEDPUBLICKEY.

 Wikipedia: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
 OWASP: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
 Doc: https://developer.mozilla.org/de/docs/Web/Security/Public_Key_Pinning
 RFC: https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21

14. GnuTLS

14.1 SSL engine stuff

 Is this even possible?

14.2 check connection

 Add a way to check if the connection seems to be alive, to correspond to the
 SSL_peek() way we use with OpenSSL.

15. WinSSL/SChannel

15.1 Add support for client certificate authentication

 WinSSL/SChannel currently makes use of the OS-level system and user
 certificate and private key stores. This does not allow the application or
 the user to supply a custom client certificate using curl or libcurl.

 Therefore support for the existing -E/--cert and --key options should be
 implemented by supplying a custom certificate to the SChannel APIs, see:
 - Getting a Certificate for Schannel
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.2 Add support for custom server certificate validation

 WinSSL/SChannel currently makes use of the OS-level system and user
 certificate trust store. This does not allow the application or user to
 customize the server certificate validation process using curl or libcurl.

 Therefore support for the existing --cacert or --capath options should be
 implemented by supplying a custom certificate to the SChannel APIs, see:
 - Getting a Certificate for Schannel
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.3 Add support for the --ciphers option

 The cipher suites used by WinSSL/SChannel are configured on an OS-level
 instead of an application-level. This does not allow the application or the
 user to customize the configured cipher suites using curl or libcurl.

 Therefore support for the existing --ciphers option should be implemented by
 mapping the OpenSSL/GnuTLS cipher suites to the SChannel APIs, see
 - Specifying Schannel Ciphers and Cipher Strengths
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx

16. SASL

16.1 Other authentication mechanisms

 Add support for other authentication mechanisms such as OLP, GSS-SPNEGO and
 others.

16.2 Add QOP support to GSSAPI authentication

 Currently the GSSAPI authentication only supports the default QOP of auth
 (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
 with integrity protection) and auth-conf (Authentication with integrity and
 privacy protection).

16.3 Support binary messages (i.e.: non-base64)

 Mandatory to support LDAP SASL authentication.

17. SSH protocols

17.1 Multiplexing

 SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
 multiple parallel transfers from the same host using the same connection,
 much in the same spirit as HTTP/2 does. libcurl however does not take
 advantage of that ability but will instead always create a new connection
 for new transfers even if an existing connection already exists to the host.

 To fix this, libcurl would have to detect an existing connection and
 "attach" the new transfer to the existing one.

17.2 SFTP performance

 libcurl's SFTP transfer performance is sub-par and can be improved, mostly
 by the approach mentioned in "1.6 Modified buffer size approach".

17.3 Support better than MD5 hostkey hash

 libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
 server's key. MD5 is generally being deprecated so we should implement
 support for stronger hashing algorithms. libssh2 itself is what provides
 this underlying functionality and it supports at least SHA-1 as an
 alternative. SHA-1 is also being deprecated these days so we should consider
 working with libssh2 to instead offer support for SHA-256 or similar.

17.4 Support CURLOPT_PREQUOTE

 The two other QUOTE options are supported for SFTP, but this was left out
 for unknown reasons!

18. Command line tool

18.1 sync

 "curl --sync http://example.com/feed[1-100].rss" or
 "curl --sync http://example.net/{index,calendar,history}.html"

 Downloads a range or set of URLs using the remote name, but only if the
 remote file is newer than the local file. A Last-Modified HTTP date header
 should also be used to set the mod date on the downloaded file.

18.2 glob posts

 Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
 This is easily scripted though.

18.3 prevent file overwriting

 Add an option that prevents curl from overwriting existing local files. When
 used, and there already is an existing file with the target file name
 (either -O or -o), a number should be appended (and increased if already
 existing). So that index.html becomes first index.html.1 and then
 index.html.2 etc.

18.4 simultaneous parallel transfers

 The client could be told to use maximum N simultaneous parallel transfers
 and then just make sure that happens. It should of course not make more than
 one connection to the same remote host. This would require the client to use
 the multi interface. https://curl.haxx.se/bug/feature.cgi?id=1558595

 Using the multi interface would also allow properly using parallel transfers
 with HTTP/2 and supporting HTTP/2 server push from the command line.
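
 The libcurl side of this already exists. A bare-bones sketch of the
 multi-interface loop the tool would have to adopt (two fixed URLs stand in
 for whatever the command line provides):

  #include <curl/curl.h>

  int main(void)
  {
    const char *urls[] = { "https://example.com/a", "https://example.com/b" };
    CURLM *multi = curl_multi_init();
    CURL *easy[2];
    int i, running = 0;

    for(i = 0; i < 2; i++) {
      easy[i] = curl_easy_init();
      curl_easy_setopt(easy[i], CURLOPT_URL, urls[i]);
      curl_multi_add_handle(multi, easy[i]);
    }

    /* drive all transfers in parallel from a single thread */
    do {
      curl_multi_perform(multi, &running);
      if(running)
        curl_multi_wait(multi, NULL, 0, 1000, NULL);
    } while(running);

    for(i = 0; i < 2; i++) {
      curl_multi_remove_handle(multi, easy[i]);
      curl_easy_cleanup(easy[i]);
    }
    curl_multi_cleanup(multi);
    return 0;
  }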

18.6 warning when setting an option

 Display a warning when libcurl returns an error when setting an option.
 This can be useful to tell when support for a particular feature hasn't been
 compiled into the library.

18.8 offer color-coded HTTP header output

 By offering different color output on the header name and the header
 contents, they could be made more readable and thus help users working on
 HTTP services.

18.9 Choose the name of file in braces for complex URLs

 When using braces to download a list of URLs and you use complicated names
 in the list of alternatives, it could be handy to allow curl to use other
 names when saving.

 Consider a way to offer that. Possibly like
 {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
 colon is the output name.

 See https://github.com/curl/curl/issues/221

18.10 improve how curl works in a windows console window

 If you pull the scrollbar when transferring with curl in a Windows console
 window, the transfer is interrupted and can get disconnected. This can
 probably be improved. See https://github.com/curl/curl/issues/322

18.11 -w output to stderr

 -w is quite useful, but not to those of us who use curl without -o or -O
 (such as for scripting through a higher level language). It would be nice to
 have an option that is exactly like -w but sends it to stderr instead.
 Proposed name: --write-stderr. See https://github.com/curl/curl/issues/613

18.12 keep running, read instructions from pipe/socket

 Provide an option that makes curl not exit after the last URL (or even work
 without a given URL), and then make it read instructions passed on a pipe or
 over a socket, so that a second, subsequent curl invocation can talk to the
 still running instance and ask for transfers to be done, and thus maintain
 its connection pool, DNS cache and more.

18.13 support metalink in http headers

 Curl has support for downloading a metalink xml file, processing it, and
 then downloading the target of the metalink. This is done via the --metalink
 option. It would be nice if metalink also supported downloading via metalink
 information that is stored in HTTP headers (RFC 6249). Theoretically this
 could also be supported with the --metalink option.

 See https://tools.ietf.org/html/rfc6249

 See also https://lists.gnu.org/archive/html/bug-wget/2015-06/msg00034.html
 for an implementation of this in wget.

18.14 --fail without --location should treat 3xx as a failure

 To allow a command line like this to detect a redirect and consider it a
 failure:

 curl -v --fail -O https://example.com/curl-7.48.0.tar.gz

 ... --fail must treat 3xx responses as failures too. The least problematic
 way to implement this is probably to add that new logic in the command line
 tool only and not in the underlying CURLOPT_FAILONERROR logic.

18.15 --retry should resume

 When --retry is used and curl actually retries a transfer, it should use the
 already transferred data and do a resumed transfer for the rest (when
 possible) so that it doesn't have to transfer the same data again that was
 already transferred before the retry.

 See https://github.com/curl/curl/issues/1084

18.16 send only part of --data

 When the user only wants to send a small piece of the data provided with
 --data or --data-binary, like when that data is a huge file, consider a way
 to specify that curl should only send a piece of that. One suggested syntax
 would be: "--data-binary @largefile.zip!1073741823-2147483647".

 See https://github.com/curl/curl/issues/1200

18.17 consider file name from the redirected URL with -O ?

 When a user gives a URL and uses -O, and curl follows a redirect to a new
 URL, the file name is not extracted and used from the newly redirected-to
 URL even if the new URL may have a much more sensible file name.

 This is clearly documented and helps for security since there's no surprise
 to users about which file name might get overwritten. But maybe a new option
 could allow for this, or maybe -J should imply such a treatment as well, as
 -J already allows for the server to decide what file name to use so it
 already provides the "may overwrite any file" risk.

 This is extra tricky if the original URL has no file name part at all since
 then the current code path will error out with an error message, and we
 can't *know* already at that point if curl will be redirected to a URL that
 has a file name...

 See https://github.com/curl/curl/issues/1241

19. Build

19.1 roffit

 Consider extending 'roffit' to produce decent ASCII output, and use that
 instead of (g)nroff when building src/tool_hugehelp.c

19.2 Enable PIE and RELRO by default

 Especially when having programs that execute curl via the command line, PIE
 renders the exploitation of memory corruption vulnerabilities a lot more
 difficult. This can be attributed to the additional information leaks being
 required to conduct a successful attack. RELRO, on the other hand, marks
 different binary sections like the GOT as read-only and thus kills a handful
 of techniques that come in handy when attackers are able to arbitrarily
 overwrite memory. A few tests showed that enabling these features had close
 to no impact, neither on the performance nor on the general functionality of
 curl.

20. Test suite

20.1 SSL tunnel

 Make our own version of stunnel for simple port forwarding to enable HTTPS
 and FTP-SSL tests without the stunnel dependency, and it could allow us to
 provide test tools built with either OpenSSL or GnuTLS.

20.2 nicer lacking perl message

 If perl wasn't found by the configure script, don't attempt to run the tests
 but explain nicely why they don't run.

20.3 more protocols supported

 Extend the test suite to include more protocols. The telnet tests could just
 do FTP or HTTP operations (for which we have test servers).

20.4 more platforms supported

 Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
 fork()s and it should become even more portable.

20.5 Add support for concurrent connections

 Tests 836, 882 and 938 were designed to verify that separate connections
 aren't used when using different login credentials in protocols that
 shouldn't re-use a connection under such circumstances.

 Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
 connections. The read while() loop seems to loop until it receives a
 disconnect from the client, where it then enters the waiting for connections
 loop. When the client opens a second connection to the server, the first
 connection hasn't been dropped (unless it has been forced - which we
 shouldn't do in these tests) and thus the wait for connections loop is never
 entered to receive the second connection.

20.6 Use the RFC6265 test suite

 A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
 https://github.com/abarth/http-state/tree/master/tests

 It'd be really awesome if someone would write a script/setup that would run
 curl with that test suite and detect deviances. Ideally, that would even be
 incorporated into our regular test suite.

21. Next SONAME bump

21.1 http-style HEAD output for FTP

 #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
 from being output in NOBODY requests over FTP.

21.2 combine error codes

 Combine some of the error codes to remove duplicates. The original numbering
 should not be changed, and the old identifiers would be macroed to the new
 ones in a CURL_NO_OLDIES section to help with backward compatibility.

 Candidates for removal and their replacements:

 CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND

 CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND

 CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR

 CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT

 CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT

 CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL

 CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND

 CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
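
 A sketch of what the compatibility section in curl/curl.h could look like
 for the first two candidates above; applications that define CURL_NO_OLDIES
 would not see the old names at all:

  /* sketch of backward-compat aliases for removed error codes */
  #ifndef CURL_NO_OLDIES
  #define CURLE_FILE_COULDNT_READ_FILE CURLE_REMOTE_FILE_NOT_FOUND
  #define CURLE_FTP_COULDNT_RETR_FILE  CURLE_REMOTE_FILE_NOT_FOUND
  /* ...and so on for the other candidates listed above */
  #endif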

21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

 The current prototype only provides 'purpose' that tells what the
 connection/socket is for, but not any protocol or similar. It makes it hard
 for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
 similar.

22. Next major release

22.1 cleanup return codes

 curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
 CURLMcode. These should be changed to be the same.

22.2 remove obsolete defines

 Remove obsolete defines from curl/curl.h

22.3 size_t

 Make several functions use size_t instead of int in their APIs.

22.4 remove several functions

 Remove the following functions from the public API:

 curl_getenv

 curl_mprintf (and variations)

 curl_strequal

 curl_strnequal

 They will instead become curlx_ alternatives. That keeps the curl tool
 capable of using them, by building them from source.

 These functions have no purpose anymore:

 curl_multi_socket

 curl_multi_socket_all

22.5 remove CURLOPT_FAILONERROR

 Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
 internally. Let the app judge success or not for itself.

22.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE

 Remove support for a global DNS cache. Anything global is silly, and we
 already offer the share interface for the same functionality but done
 "right".

22.7 remove progress meter from libcurl

 The internally provided progress meter output doesn't belong in the library.
 Basically no application wants it (apart from curl) but instead applications
 can and should do their own progress meters using the progress callback.

 The progress callback should then be bumped as well to get proper 64-bit
 variable types passed to it instead of doubles so that big files work
 correctly.
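
 For illustration, the 64-bit flavor of the callback already exists as
 CURLOPT_XFERINFOFUNCTION; a minimal application-side progress meter looks
 roughly like this, and it is this kind of code that would fully replace the
 built-in meter:

  #include <stdio.h>
  #include <curl/curl.h>

  /* curl_off_t amounts instead of the old double-based progress callback */
  static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                         curl_off_t ultotal, curl_off_t ulnow)
  {
    (void)clientp;
    (void)ultotal;
    (void)ulnow;
    fprintf(stderr, "\rdownloaded %" CURL_FORMAT_CURL_OFF_T
            " of %" CURL_FORMAT_CURL_OFF_T " bytes", dlnow, dltotal);
    return 0; /* a non-zero return would abort the transfer */
  }

  static void enable_progress(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); /* turn callbacks on */
  }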

22.8 remove 'curl_httppost' from public

 curl_formadd() was made to fill in a public struct, but the fact that the
 struct is public is never really used by applications for their own
 advantage but instead often restricts how the form functions can or can't be
 modified.

 Changing them to return a private handle will benefit the implementation and
 allow us much greater freedoms while still maintaining a solid API and ABI.
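
 The newer MIME API (curl_mime_init and friends, in builds recent enough to
 have it) already follows that handle-based pattern and hints at what a
 replacement could look like; a small form post sketch:

  #include <curl/curl.h>

  static void post_form(CURL *curl)
  {
    /* everything lives behind opaque handles, no public structs involved */
    curl_mime *form = curl_mime_init(curl);
    curl_mimepart *field = curl_mime_addpart(form);

    curl_mime_name(field, "name");
    curl_mime_data(field, "daniel", CURL_ZERO_TERMINATED);

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);
    curl_easy_perform(curl);

    curl_mime_free(form);
  }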