                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                              \___|\___/|_| \_\_____|

                Things that could be nice to do in the future

 Things to do in project curl. Please tell us what you think, contribute and
 send us patches that improve things!

 Be aware that these are things that we could do, or have once been considered
 things we could do. If you want to work on any of these areas, please
 consider bringing it up for discussions first on the mailing list so that we
 all agree it is still a good idea for the project!

 All bugs documented in the KNOWN_BUGS document are subject for fixing!

 1. libcurl
 1.1 TFO support on Windows
 1.2 Consult %APPDATA% also for .netrc
 1.3 struct lifreq
 1.4 alt-svc sharing
 1.5 get rid of PATH_MAX
 1.6 native IDN support on macOS
 1.7 Support HTTP/2 for HTTP(S) proxies
 1.8 CURLOPT_RESOLVE for any port number
 1.9 Cache negative name resolves
 1.10 auto-detect proxy
 1.11 minimize dependencies with dynamically loaded modules
 1.12 updated DNS server while running
 1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
 1.14 Typesafe curl_easy_setopt()
 1.15 Monitor connections in the connection pool
 1.16 Try to URL encode given URL
 1.17 Add support for IRIs
 1.18 try next proxy if one doesn't work
 1.20 SRV and URI DNS records
 1.22 CURLINFO_PAUSE_STATE
 1.23 Offer API to flush the connection pool
 1.24 TCP Fast Open for windows
 1.25 Expose tried IP addresses that failed
 1.27 hardcode the "localhost" addresses
 1.28 FD_CLOEXEC
 1.29 Upgrade to websockets
 1.30 config file parsing

 2. libcurl - multi interface
 2.1 More non-blocking
 2.2 Better support for same name resolves
 2.3 Non-blocking curl_multi_remove_handle()
 2.4 Split connect and authentication process
 2.5 Edge-triggered sockets should work
 2.6 multi upkeep
 2.7 Virtual external sockets
 2.8 dynamically decide to use socketpair

 3. Documentation
 3.2 Provide cmake config-file

 4. FTP
 4.1 HOST
 4.2 Alter passive/active on failure and retry
 4.3 Earlier bad letter detection
 4.5 ASCII support
 4.6 GSSAPI via Windows SSPI
 4.7 STAT for LIST without data connection
 4.8 Option to ignore private IP addresses in PASV response

 5. HTTP
 5.1 Better persistency for HTTP 1.0
 5.2 Set custom client ip when using haproxy protocol
 5.3 Rearrange request header order
 5.4 Allow SAN names in HTTP/2 server push
 5.5 auth= in URLs

 6. TELNET
 6.1 ditch stdin
 6.2 ditch telnet-specific select
 6.3 feature negotiation debug data

 7. SMTP
 7.2 Enhanced capability support
 7.3 Add CURLOPT_MAIL_CLIENT option

 8. POP3
 8.2 Enhanced capability support

 9. IMAP
 9.1 Enhanced capability support

 10. LDAP
 10.1 SASL based authentication mechanisms
 10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
 10.3 Paged searches on LDAP server

 11. SMB
 11.1 File listing support
 11.2 Honor file timestamps
 11.3 Use NTLMv2
 11.4 Create remote directories

 12. New protocols

 13. SSL
 13.1 TLS-PSK with OpenSSL
 13.2 Provide mutex locking API
 13.3 Support in-memory certs/ca certs/keys
 13.4 Cache/share OpenSSL contexts
 13.5 Export session ids
 13.6 Provide callback for cert verification
 13.7 improve configure --with-ssl
 13.8 Support DANE
 13.9 TLS record padding
 13.10 Support Authority Information Access certificate extension (AIA)
 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
 13.12 Support HSTS
 13.13 Make sure we forbid TLS 1.3 post-handshake authentication
 13.14 Support the clienthello extension

 14. GnuTLS
 14.2 check connection

 15. Schannel
 15.1 Extend support for client certificate authentication
 15.2 Extend support for the --ciphers option
 15.3 Add option to disable client certificate auto-send
 15.4 Add option to allow abrupt server closure

 16. SASL
 16.1 Other authentication mechanisms
 16.2 Add QOP support to GSSAPI authentication
 16.3 Support binary messages (i.e.: non-base64)

 17. SSH protocols
 17.1 Multiplexing
 17.2 Handle growing SFTP files
 17.3 Support better than MD5 hostkey hash
 17.4 Support CURLOPT_PREQUOTE
 17.5 SSH over HTTPS proxy with more backends

 18. Command line tool
 18.1 sync
 18.2 glob posts
 18.3 prevent file overwriting
 18.4 --proxycommand
 18.5 UTF-8 filenames in Content-Disposition
 18.6 Option to make -Z merge line-based outputs on stdout
 18.7 at least N milliseconds between requests
 18.8 Consider convenience options for JSON and XML?
 18.9 Choose the name of file in braces for complex URLs
 18.10 improve how curl works in a windows console window
 18.11 Windows: set attribute 'archive' for completed downloads
 18.12 keep running, read instructions from pipe/socket
 18.13 Ratelimit or wait between serial requests
 18.14 --dry-run
 18.15 --retry should resume
 18.16 send only part of --data
 18.17 consider file name from the redirected URL with -O ?
 18.18 retry on network is unreachable
 18.19 expand ~/ in config files
 18.20 host name sections in config files
 18.21 retry on the redirected-to URL
 18.23 Set the modification date on an uploaded file
 18.24 Use multiple parallel transfers for a single download

 19. Build
 19.1 roffit
 19.2 Enable PIE and RELRO by default
 19.3 Don't use GNU libtool on OpenBSD

 20. Test suite
 20.1 SSL tunnel
 20.2 nicer lacking perl message
 20.3 more protocols supported
 20.4 more platforms supported
 20.5 Add support for concurrent connections
 20.6 Use the RFC6265 test suite
 20.7 Support LD_PRELOAD on macOS
 20.8 Run web-platform-tests url tests
 20.9 Use "random" ports for the test servers

 21. Next SONAME bump
 21.1 http-style HEAD output for FTP
 21.2 combine error codes
 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

 22. Next major release
 22.1 cleanup return codes
 22.2 remove obsolete defines
 22.3 size_t
 22.4 remove several functions
 22.5 remove CURLOPT_FAILONERROR
 22.7 remove progress meter from libcurl
 22.8 remove 'curl_httppost' from public

==============================================================================

1. libcurl

1.1 TFO support on Windows

 TCP Fast Open is supported on several platforms but not on Windows. Work on
 this was once started but never finished.

 See https://github.com/curl/curl/pull/3378
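
 As a rough sketch (not existing curl code), enabling it on a Windows 10
 version 1607+ socket could look like the following, assuming the SDK headers
 expose the TCP_FASTOPEN socket option:

   #include <winsock2.h>
   #include <ws2tcpip.h>

   static int enable_tfo(SOCKET s)
   {
     DWORD enabled = 1;
     /* TCP_FASTOPEN is a per-socket option in newer Windows SDKs */
     return setsockopt(s, IPPROTO_TCP, TCP_FASTOPEN,
                       (const char *)&enabled, sizeof(enabled));
   }

 The remaining work would presumably be wiring something like this into the
 existing CURLOPT_TCP_FASTOPEN handling for the other platforms.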

1.2 Consult %APPDATA% also for .netrc

 %APPDATA%\.netrc is not considered when running on Windows. Shouldn't it be?

 See https://github.com/curl/curl/issues/4016

1.3 struct lifreq

 Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
 SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete,
 to support IPv6 interface addresses for network interfaces properly.

1.4 alt-svc sharing

 The share interface could benefit from making it possible to share the
 alt-svc cache between easy handles.

 See https://github.com/curl/curl/issues/4476

1.5 get rid of PATH_MAX

 Having code use and rely on PATH_MAX is not nice:
 https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

 Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
 there we need libssh2 to properly tell us when we pass in a too small buffer
 and its current API (as of libssh2 1.2.7) doesn't.

1.6 native IDN support on macOS

 On recent macOS versions, the getaddrinfo() function itself has built-in IDN
 support. By setting the AI_CANONNAME flag, the function will return the
 encoded name in the ai_canonname struct field in the returned information.
 This could be used by curl on macOS when built without a separate IDN library
 and an IDN host name is used in a URL.

 See initial work in https://github.com/curl/curl/pull/5371

1.7 Support HTTP/2 for HTTP(S) proxies

 Support for doing HTTP/2 to HTTP and HTTPS proxies is still missing.

 See https://github.com/curl/curl/issues/3570

1.8 CURLOPT_RESOLVE for any port number

 This option allows applications to set a replacement IP address for a given
 host + port pair. Consider adding support for providing a replacement address
 for the host name on all port numbers.

 See https://github.com/curl/curl/issues/1264

1.9 Cache negative name resolves

 A name resolve that has failed is likely to fail when made again within a
 short period of time. Currently we only cache positive responses.

1.10 auto-detect proxy

 libcurl could be made to detect the system proxy setup automatically and use
 that, on Windows, macOS and Linux desktops for example.

 The pull-request to use libproxy for this was deferred due to doubts on the
 reliability of the dependency and how to use it:
 https://github.com/curl/curl/pull/977

 libdetectproxy is a (C++) library for detecting the proxy on Windows:
 https://github.com/paulharris/libdetectproxy

1.11 minimize dependencies with dynamically loaded modules

 We can create a system with loadable modules/plug-ins, where these modules
 would be the ones that link to 3rd party libs. That would allow us to avoid
 having to load ALL dependencies, since only the modules needed for the
 protocols actually used by the application would have to be loaded. See
 https://github.com/curl/curl/issues/349

1.12 updated DNS server while running

 If /etc/resolv.conf gets updated while a program using libcurl is running, it
 may cause name resolves to fail unless res_init() is called. We should
 consider calling res_init() + retry once unconditionally on all name resolve
 failures to mitigate against this. Firefox works like that. Note that Windows
 doesn't have res_init() or an alternative.

 https://github.com/curl/curl/issues/2251
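
 A minimal sketch of the "retry once" idea, outside of libcurl and assuming a
 POSIX resolver (so not applicable to Windows):

   #include <netdb.h>
   #include <resolv.h>

   static struct hostent *resolve_with_retry(const char *host)
   {
     struct hostent *h = gethostbyname(host);
     if(!h) {
       res_init(); /* re-read /etc/resolv.conf in case it changed */
       h = gethostbyname(host);
     }
     return h;
   }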

1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION

 curl will create most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
 close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
 does not use those functions and instead opens and closes the sockets
 itself. This means that when curl passes the c-ares socket to the
 CURLMOPT_SOCKETFUNCTION it isn't owned by the application like other sockets.

 See https://github.com/curl/curl/issues/2734

1.14 Typesafe curl_easy_setopt()

 One of the most common problems in libcurl using applications is the lack of
 type checks for curl_easy_setopt() which happens because it accepts varargs
 and thus can take any type.

 One possible solution to this is to introduce a few different typed versions
 of the setopt call for the different kinds of data you can set.

  curl_easy_set_num() - sets a long value

  curl_easy_set_large() - sets a curl_off_t value

  curl_easy_set_ptr() - sets a pointer

  curl_easy_set_cb() - sets a callback PLUS its callback data

1.15 Monitor connections in the connection pool

 libcurl's connection cache or pool holds a number of open connections for the
 purpose of possible subsequent connection reuse. It may contain anything from
 a few up to a significant number of connections. Currently, libcurl leaves
 the connections alone and only verifies that a connection is still alive when
 it is iterated over for matching or reuse purposes.

 Those connections may get closed by the server side for idleness or they may
 get a HTTP/2 ping from the peer to verify that they're still alive. By adding
 monitoring of the connections while in the pool, libcurl can detect dead
 connections (and close them) better and earlier, and it can handle HTTP/2
 pings to keep such connections alive even when not actively doing transfers
 on them.

1.16 Try to URL encode given URL

 Given a URL that for example contains spaces, libcurl could have an option
 that would try somewhat harder than it does now and convert spaces to %20 and
 perhaps URL encode byte values over 128 etc (basically do what the redirect
 following code already does).

 https://github.com/curl/curl/issues/514

1.17 Add support for IRIs

 IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
 support this, curl/libcurl would need to translate/encode the given input
 from the input string encoding into percent encoded output "over the wire".

 To make that work smoothly for curl users even on Windows, curl would
 probably need to be able to convert from several input encodings.

1.18 try next proxy if one doesn't work

 Allow an application to specify a list of proxies to try and, when the
 connection to the first one fails, go on and try the next until the list is
 exhausted. Browsers support this feature at least when they specify proxies
 using PACs.

 https://github.com/curl/curl/issues/896

1.20 SRV and URI DNS records

 Offer support for resolving SRV and URI DNS records for libcurl to know which
 server to connect to for various protocols (including HTTP!).

1.22 CURLINFO_PAUSE_STATE

 Return information about the transfer's current pause state, in both
 directions.

 https://github.com/curl/curl/issues/2588
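
 A hypothetical sketch of how an application could use it. The
 CURLINFO_PAUSE_STATE name is only the name of this wish-list entry and does
 not exist in any released libcurl; CURLPAUSE_RECV and CURLPAUSE_SEND are the
 existing bits used by curl_easy_pause() and 'curl' is an easy handle:

   long state = 0;
   /* hypothetical info option, not part of the current API */
   curl_easy_getinfo(curl, CURLINFO_PAUSE_STATE, &state);
   if(state & CURLPAUSE_RECV)
     printf("receiving is paused\n");
   if(state & CURLPAUSE_SEND)
     printf("sending is paused\n");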

1.23 Offer API to flush the connection pool

 Sometimes applications want to flush all the existing connections kept alive.
 An API could allow a forced flush or just a forced loop that would properly
 close all connections that have been closed by the server already.

1.24 TCP Fast Open for windows

 libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
 macOS. Windows supports TCP Fast Open starting with Windows 10, version 1607
 and we should add support for it.

1.25 Expose tried IP addresses that failed

 When libcurl fails to connect to a host, it should be able to offer the
 application the list of IP addresses that were used in the attempt.

 https://github.com/curl/curl/issues/2126

1.27 hardcode the "localhost" addresses

 There's this new spec getting adopted that says "localhost" should always and
 unconditionally be a local address and not get resolved by a DNS server. A
 fine way for curl to fix this would be to simply hard-code the response to
 127.0.0.1 and/or ::1 (depending on which IP versions are requested). This is
 what the browsers probably will do with this hostname.

 https://bugzilla.mozilla.org/show_bug.cgi?id=1220810

 https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02

1.28 FD_CLOEXEC

 Setting the close-on-exec flag on a file descriptor causes the file
 descriptor to be automatically (and atomically) closed when any of the
 exec-family functions succeed. Should libcurl set it by default on the
 sockets it creates?

 https://github.com/curl/curl/issues/2252

1.29 Upgrade to websockets

 libcurl could offer a smoother path to get to a websocket connection.
 See https://github.com/curl/curl/issues/3523

 Michael Kaufmann suggestion here:
 https://curl.haxx.se/video/curlup-2017/2017-03-19_05_Michael_Kaufmann_Websocket_support_for_curl.mp4

1.30 config file parsing

 Consider providing an API, possibly in a separate companion library, for
 parsing a config file like curl's -K/--config option to allow applications to
 get the same ability to read curl options from files.

 See https://github.com/curl/curl/issues/3698

2. libcurl - multi interface

2.1 More non-blocking

 Make sure we don't ever loop because of non-blocking sockets returning
 EWOULDBLOCK or similar. Blocking cases include:

 - Name resolves on non-windows unless c-ares or the threaded resolver is used.

 - The threaded resolver may block on cleanup:
   https://github.com/curl/curl/issues/4852

 - file:// transfers

 - TELNET transfers

 - GSSAPI authentication for FTP transfers

 - The "DONE" operation (post transfer protocol-specific actions) for the
   protocols SFTP, SMTP, FTP. Fixing multi_done() for this is a worthy task.

 - curl_multi_remove_handle for any of the above. See section 2.3.

2.2 Better support for same name resolves

 If a name resolve has been initiated for name NN and a second easy handle
 wants to resolve that name as well, make it wait for the first resolve to end
 up in the cache instead of doing a second separate resolve. This is
 especially needed when adding many simultaneous handles using the same host
 name, when the DNS resolver can get flooded.
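
 For reference, this is the application pattern that triggers the problem,
 using the existing API; depending on the resolver backend, each added handle
 may today start its own resolve of the very same name:

   #include <curl/curl.h>

   CURLM *multi = curl_multi_init();
   for(int i = 0; i < 100; i++) {
     CURL *e = curl_easy_init();
     curl_easy_setopt(e, CURLOPT_URL, "https://example.com/file");
     curl_multi_add_handle(multi, e); /* 100 handles, one single host name */
   }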

2.3 Non-blocking curl_multi_remove_handle()

 The multi interface has a few API calls that assume a blocking behavior, like
 add_handle() and remove_handle(), which limits what we can do internally. The
 multi API needs to be moved even more into a single function that "drives"
 everything in a non-blocking manner and signals when something is done. A
 remove or add would then only ask for the action to get started and then
 curl_multi_perform() etc would still be called until the add/remove is
 completed.

2.4 Split connect and authentication process

 The multi interface treats the authentication process as part of the connect
 phase. As such, any failures during authentication won't trigger the relevant
 QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

 The multi_socket API should work with edge-triggered socket events. One of
 the internal actions that need to be improved for this to work perfectly is
 the 'maxloops' handling in transfer.c:readwrite_data().

2.6 multi upkeep

 In libcurl 7.62.0 we introduced curl_easy_upkeep. It unfortunately only works
 on easy handles. We should introduce a version of that for the multi handle,
 and also consider doing "upkeep" automatically on connections in the
 connection pool when the multi handle is in use.

 See https://github.com/curl/curl/issues/3199

2.7 Virtual external sockets

 libcurl performs operations on the given file descriptor that presume it is
 a socket and an application cannot replace them at the moment. Allowing an
 application to fully replace those would allow a larger degree of freedom and
 flexibility.

 See https://github.com/curl/curl/issues/5835

2.8 dynamically decide to use socketpair

 For users who don't use curl_multi_wait() or don't care for
 curl_multi_wakeup(), we could introduce a way to make libcurl NOT
 create a socketpair in the multi handle.

 See https://github.com/curl/curl/issues/4829

3. Documentation

3.2 Provide cmake config-file

 A config-file package is a set of files provided by us to allow applications
 to write cmake scripts to find and use libcurl more easily. See
 https://github.com/curl/curl/issues/885

4. FTP

4.1 HOST

 HOST is a command for a client to tell the server which host name to use,
 allowing FTP servers to offer name-based virtual hosting:

 https://tools.ietf.org/html/rfc7151

4.2 Alter passive/active on failure and retry

 When trying to connect passively to a server which only supports active
 connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
 connection. There could be a way to fall back to an active connection (and
 vice versa). https://curl.haxx.se/bug/feature.cgi?id=1754793

4.3 Earlier bad letter detection

 Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in the
 process to avoid doing a resolve and connect in vain.

4.5 ASCII support

 FTP ASCII transfers do not follow RFC 959. They don't convert the data
 accordingly.

4.6 GSSAPI via Windows SSPI

 In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
 via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
 support for GSSAPI authentication via Windows SSPI.

4.7 STAT for LIST without data connection

 Some FTP servers allow STAT for listing directories instead of using LIST,
 and the response is then sent over the control connection instead of over the
 otherwise used data connection: https://www.nsftools.com/tips/RawFTP.htm#STAT

 This is not detailed in any FTP specification.

4.8 Option to ignore private IP addresses in PASV response

 Some servers respond with private (RFC 1918 style) IP addresses in PASV
 responses, and some other FTP client implementations can ignore such
 addresses. To consider for libcurl as well. See
 https://github.com/curl/curl/issues/1455

5. HTTP

5.1 Better persistency for HTTP 1.0

 "Better" support for persistent connections over HTTP 1.0
 https://curl.haxx.se/bug/feature.cgi?id=1089001

5.2 Set custom client ip when using haproxy protocol

 This would allow testing servers with different client ip addresses (without
 using the X-Forwarded-For header).

 https://github.com/curl/curl/issues/5125

5.3 Rearrange request header order

 Server implementors often make an effort to detect browsers and to reject
 clients they can detect to not match. One of the last details we cannot yet
 control in libcurl's HTTP requests, which also can be exploited to detect
 that libcurl is in fact used even when it tries to impersonate a browser, is
 the order of the request headers. I propose that we introduce a new option in
 which you give headers a numeric value, and then when the HTTP request is
 built it sorts the headers based on that number. We could then have
 internally created headers use a default value so only headers that need to
 be moved have to be specified.

5.4 Allow SAN names in HTTP/2 server push

 curl only allows HTTP/2 push promise if the provided :authority header value
 exactly matches the host name given in the URL. It could be extended to allow
 any name that would match the Subject Alternative Names in the server's TLS
 certificate.

 See https://github.com/curl/curl/pull/3581

5.5 auth= in URLs

 Add the ability to specify the preferred authentication mechanism to use by
 using ;auth=<mech> in the login part of the URL.

 For example:

 http://test:pass;auth=NTLM@example.com would be equivalent to specifying
 --user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.

 Additionally this should be implemented for proxy base URLs as well.

6. TELNET

6.1 ditch stdin

 Reading input (to send to the remote server) on stdin is a crappy solution
 for library purposes. We need to invent a good way for the application to be
 able to provide the data to send.

6.2 ditch telnet-specific select

 Make the telnet support's network select() loop go away and merge the code
 into the main transfer loop. Until this is done, the multi interface won't
 work for telnet.

6.3 feature negotiation debug data

 Add telnet feature negotiation data to the debug callback as header data.

7. SMTP

7.2 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the EHLO command.
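
 Until a proper API for this exists, the closest an application can get is
 probably to watch the protocol dialogue via the debug callback. This is only
 a sketch of that workaround ('curl' is an easy handle), not a capability API:

   static int trace_cb(CURL *handle, curl_infotype type, char *data,
                       size_t size, void *userp)
   {
     /* incoming protocol lines, e.g. the "250-..." EHLO response */
     if(type == CURLINFO_HEADER_IN)
       fwrite(data, 1, size, stderr);
     return 0;
   }

   curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, trace_cb);
   curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);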

7.3 Add CURLOPT_MAIL_CLIENT option

 Rather than use the URL to specify the mail client string to present in the
 HELO and EHLO commands, libcurl should support a new CURLOPT specifically for
 specifying this data as the URL is non-standard and to be honest a bit of a
 hack ;-)

 Please see the following thread for more information:
 https://curl.haxx.se/mail/lib-2012-05/0178.html

8. POP3

8.2 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the CAPA command.

9. IMAP

9.1 Enhanced capability support

 Add the ability, for an application that uses libcurl, to obtain the list of
 capabilities returned from the CAPABILITY command.

10. LDAP

10.1 SASL based authentication mechanisms

 Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
 to an LDAP server. However, this function sends username and password details
 using the simple authentication mechanism (as clear text). It should instead
 be possible to use ldap_bind_s(), specifying the security context information
 ourselves.

10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS

 CURLOPT_SSL_CTX_FUNCTION works perfectly for HTTPS and email protocols, but
 it has no effect for LDAPS connections.

 https://github.com/curl/curl/issues/4108

10.3 Paged searches on LDAP server

 https://github.com/curl/curl/issues/4452

11. SMB

11.1 File listing support

 Add support for listing the contents of an SMB share. The output should
 probably be the same as/similar to FTP.

11.2 Honor file timestamps

 The timestamp of the transferred file should reflect that of the original
 file.

11.3 Use NTLMv2

 Currently the SMB authentication uses NTLMv1.

11.4 Create remote directories

 Support for creating remote directories when uploading a file to a directory
 that doesn't exist on the server, just like --ftp-create-dirs.

12. New protocols

13. SSL

13.1 TLS-PSK with OpenSSL

 Transport Layer Security pre-shared key ciphersuites (TLS-PSK) is a set of
 cryptographic protocols that provide secure communication based on pre-shared
 keys (PSKs). These pre-shared keys are symmetric keys shared in advance among
 the communicating parties.

 https://github.com/curl/curl/issues/5081

13.2 Provide mutex locking API

 Provide a libcurl API for setting mutex callbacks in the underlying SSL
 library, so that the same application code can use mutex-locking
 independently of OpenSSL or GnuTLS being used.

13.3 Support in-memory certs/ca certs/keys

 You can specify the private and public keys for SSH/SSL as file paths. Some
 programs want to avoid using files and instead just pass them as in-memory
 data blobs. There's probably a challenge to make this work across the
 plethora of different TLS and SSH backends that curl supports.
 https://github.com/curl/curl/issues/2310

13.4 Cache/share OpenSSL contexts

 "Look at SSL cafile - quick traces look to me like these are done on every
 request as well, when they should only be necessary once per SSL context (or
 once per handle)". The major improvement we can rather easily do is to make
 sure we don't create and kill a new SSL "context" for every request, but
 instead make one for every connection and re-use that SSL context in the same
 style connections are re-used. It will make us use slightly more memory but
 it will make libcurl do fewer creations and deletions of SSL contexts.

 Technically, the "caching" is probably best implemented by getting added to
 the share interface so that easy handles that want to and can reuse the
 context specify that by sharing with the right properties set.

 https://github.com/curl/curl/issues/1110
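
 The share interface already works this way for SSL session IDs, which is
 roughly the model such a context cache could follow ('curl' is an easy
 handle):

   CURLSH *share = curl_share_init();
   curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_SSL_SESSION);

   /* every easy handle using this share object reuses cached SSL sessions */
   curl_easy_setopt(curl, CURLOPT_SHARE, share);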

13.5 Export session ids

 Add an interface to libcurl that enables "session IDs" to get
 exported/imported. Cris Bailiff said: "OpenSSL has functions which can
 serialise the current SSL state to a buffer of your choice, and recover/reset
 the state from such a buffer at a later date - this is used by mod_ssl for
 apache to implement an SSL session ID cache".

13.6 Provide callback for cert verification

 OpenSSL supports a callback for customised verification of the peer
 certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
 it be? There's so much that could be done if it were!

13.7 improve configure --with-ssl

 make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
 then NSS...

13.8 Support DANE

 DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
 keys and certs over DNS using DNSSEC as an alternative to the CA model.
 https://www.rfc-editor.org/rfc/rfc6698.txt

 An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
 (https://curl.haxx.se/mail/lib-2013-03/0075.html) but the approach was too
 simple. See Daniel's comments:
 https://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
 correct library to base this development on.

 Björn Stenberg wrote a separate initial take on DANE that was never
 completed.

13.9 TLS record padding

 TLS (1.3) offers optional record padding and OpenSSL provides an API for it.
 It could make sense for libcurl to offer this ability to applications to make
 traffic patterns harder to figure out by network traffic observers.

 See https://github.com/curl/curl/issues/5398

13.10 Support Authority Information Access certificate extension (AIA)

 AIA can provide various things like CRLs but more importantly information
 about intermediate CA certificates that can allow validation paths to be
 fulfilled when the HTTPS server doesn't itself provide them.

 Since AIA is about downloading certs on demand to complete a TLS handshake,
 it is probably a bit tricky to get done right.

 See https://github.com/curl/curl/issues/2793

13.11 Support intermediate & root pinning for PINNEDPUBLICKEY

 CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
 certificates when comparing the pinned keys. Therefore it is not compatible
 with "HTTP Public Key Pinning" as that also allows intermediate and root
 certificates to be pinned. This is very useful as it prevents webadmins from
 "locking themselves out of their servers".

 Adding this feature would make curl's pinning 100% compatible with HPKP and
 allow more flexible pinning.
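
 Today the option pins the leaf (server) key only, as in this sketch; the
 proposal is that a pin like this could also be allowed to match an
 intermediate or root key in the presented chain:

   curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
   /* base64 of the SHA-256 hash of the server's public key (SPKI, DER) */
   curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                    "sha256//YhKJKSzoTt2b5FP18fvpHo7fJYqQCjAa3HWY3tvRMwE=");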

13.12 Support HSTS

 "HTTP Strict Transport Security" is a TOFU (trust on first use), time-based
 feature indicated by an HTTP header sent by the web server. It is widely used
 in browsers and its purpose is to prevent insecure HTTP connections after a
 previous HTTPS connection. It protects against SSL-stripping attacks.

 Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
 RFC 6797: https://tools.ietf.org/html/rfc6797

13.13 Make sure we forbid TLS 1.3 post-handshake authentication

 RFC 8740 explains how using HTTP/2 must forbid the use of TLS 1.3
 post-handshake authentication. We should make sure to live up to that.

 See https://github.com/curl/curl/issues/5396

13.14 Support the clienthello extension

 Certain stupid networks and middle boxes have a problem with SSL handshake
 packets that are within a certain size range because of how that sets some
 bits that previously (in older TLS versions) were not set. The clienthello
 padding extension adds padding to avoid that size range.

 https://tools.ietf.org/html/rfc7685
 https://github.com/curl/curl/issues/2299

14. GnuTLS

14.2 check connection

 Add a way to check if the connection seems to be alive, to correspond to the
 SSL_peek() way we use with OpenSSL.

15. Schannel

15.1 Extend support for client certificate authentication

 The existing support for the -E/--cert and --key options could be
 extended by supplying a custom certificate and key in PEM format, see:
 - Getting a Certificate for Schannel
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.2 Extend support for the --ciphers option

 The existing support for the --ciphers option could be extended
 by mapping the OpenSSL/GnuTLS cipher suites to the Schannel APIs, see
 - Specifying Schannel Ciphers and Cipher Strengths
   https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx

15.3 Add option to disable client certificate auto-send

 Microsoft says "By default, Schannel will, with no notification to the
 client, attempt to locate a client certificate and send it to the server."
 That could be considered a privacy violation and unexpected.

 Some Windows users have come to expect that default behavior, and changing
 the default to make it consistent with other SSL backends would be a breaking
 change. An option should be added that can be used to disable the default
 Schannel auto-send behavior.

 https://github.com/curl/curl/issues/2262

15.4 Add option to allow abrupt server closure

 libcurl with Schannel will error without a known termination point from the
 server (such as length of transfer, or SSL "close notify" alert) to protect
 against a truncation attack. Really old servers may neglect to send any
 termination point. An option could be added to ignore such abrupt closures.

 https://github.com/curl/curl/issues/4427

16. SASL

16.1 Other authentication mechanisms

 Add support for other authentication mechanisms such as OLP,
 GSS-SPNEGO and others.

16.2 Add QOP support to GSSAPI authentication

 Currently the GSSAPI authentication only supports the default QOP of auth
 (Authentication), whilst Kerberos V5 supports both auth-int (Authentication
 with integrity protection) and auth-conf (Authentication with integrity and
 privacy protection).
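
 In GSS-API terms the difference between the levels shows up in the message
 protection call. A rough sketch, assuming a GSS-API library and where 'ctx'
 and 'in' are placeholders for an already established security context and a
 message buffer to protect:

   #include <gssapi/gssapi.h>

   OM_uint32 minor;
   int conf_state = 0;
   gss_buffer_desc out = GSS_C_EMPTY_BUFFER;

   /* conf_req_flag 0 selects auth-int (integrity only) while 1 selects
      auth-conf (integrity plus confidentiality) */
   gss_wrap(&minor, ctx, 1, GSS_C_QOP_DEFAULT, &in, &conf_state, &out);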

16.3 Support binary messages (i.e.: non-base64)

 Mandatory to support LDAP SASL authentication.

17. SSH protocols

17.1 Multiplexing

 SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
 multiple parallel transfers from the same host using the same connection,
 much in the same spirit as HTTP/2 does. libcurl however does not take
 advantage of that ability but will instead always create a new connection for
 new transfers even if an existing connection already exists to the host.

 To fix this, libcurl would have to detect an existing connection and "attach"
 the new transfer to the existing one.

17.2 Handle growing SFTP files

 The SFTP code in libcurl checks the file size *before* a transfer starts and
 then proceeds to transfer exactly that amount of data. If the remote file
 grows while the transfer is in progress libcurl won't notice and will not
 adapt. The OpenSSH SFTP command line tool does, and libcurl could also just
 attempt to download more to see if there is more to get...

 https://github.com/curl/curl/issues/4344

17.3 Support better than MD5 hostkey hash

 libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
 server's key. MD5 is generally being deprecated so we should implement
 support for stronger hashing algorithms. libssh2 itself is what provides this
 underlying functionality and it supports at least SHA-1 as an alternative.
 SHA-1 is also being deprecated these days so we should consider working with
 libssh2 to instead offer support for SHA-256 or similar.

17.4 Support CURLOPT_PREQUOTE

 The two other QUOTE options are supported for SFTP, but this was left out for
 unknown reasons!

17.5 SSH over HTTPS proxy with more backends

 The SSH based protocols SFTP and SCP didn't work over HTTPS proxy at
 all until PR https://github.com/curl/curl/pull/6021 brought the
 functionality with the libssh2 backend. Presumably, this support
 can/could be added for the other backends as well.

18. Command line tool

18.1 sync

 "curl --sync http://example.com/feed[1-100].rss" or
 "curl --sync http://example.net/{index,calendar,history}.html"

 Downloads a range or set of URLs using the remote name, but only if the
 remote file is newer than the local file. A Last-Modified HTTP date header
 should also be used to set the mod date on the downloaded file.

18.2 glob posts

 Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
 This is easily scripted though.

18.3 prevent file overwriting

 Add an option that prevents curl from overwriting existing local files. When
 used, and there already is an existing file with the target file name
 (either -O or -o), a number should be appended (and increased if already
 existing). So that index.html becomes first index.html.1 and then
 index.html.2 etc.

18.4 --proxycommand

 Allow the user to make curl run a command and use its stdio to make requests
 and not do any network connection by itself. Example:

 curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
   http://some/otherwise/unavailable/service.php

 See https://github.com/curl/curl/issues/4941

18.5 UTF-8 filenames in Content-Disposition

 RFC 6266 documents how UTF-8 names can be passed to a client in the
 Content-Disposition header, and curl does not support this.

 https://github.com/curl/curl/issues/1888

18.6 Option to make -Z merge line-based outputs on stdout

 When a user requests multiple line-based files using -Z and sends them to
 stdout, curl will not "merge" them into complete lines but may very well
 output partial lines from several sources interleaved.

 https://github.com/curl/curl/issues/5175

18.7 at least N milliseconds between requests

 Allow curl command lines to issue a lot of requests against services that
 limit users to no more than N requests/second or similar. Could be
 implemented with an option asking that at least a certain time has elapsed
 since the previous request before the next one will be performed. Example:

 $ curl "https://example.com/api?input=[1-1000]" -d yadayada --after 500

 See https://github.com/curl/curl/issues/3920

18.8 Consider convenience options for JSON and XML?

 Could we add `--xml` or `--json` to add headers needed to call rest API:

 `--xml` adds -H 'Content-Type: application/xml' -H "Accept: application/xml" and
 `--json` adds -H 'Content-Type: application/json' -H "Accept: application/json"

 Setting Content-Type when doing a GET or any other method without a body
 would be a bit strange I think - so maybe only add CT for requests with body?
 Maybe plain `--xml` and `--json` are a bit too brief and generic. Maybe
 `--http-json` etc?

 See https://github.com/curl/curl/issues/5203

18.9 Choose the name of file in braces for complex URLs

 When using braces to download a list of URLs and you use complicated names
 in the list of alternatives, it could be handy to allow curl to use other
 names when saving.

 Consider a way to offer that. Possibly like
 {partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
 colon is the output name.

 See https://github.com/curl/curl/issues/221

18.10 improve how curl works in a windows console window

 If you pull the scrollbar when transferring with curl in a Windows console
 window, the transfer is interrupted and can get disconnected. This can
 probably be improved. See https://github.com/curl/curl/issues/322

18.11 Windows: set attribute 'archive' for completed downloads

 The archive bit (FILE_ATTRIBUTE_ARCHIVE, 0x20) separates files that shall be
 backed up from those that are either not ready or have not changed.

 Downloads in progress are neither ready to be backed up, nor should they be
 opened by a different process. Only after a download has been completed is it
 sensible to include it in any complete snapshot or backup of the system.

 See https://github.com/curl/curl/issues/3354
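
 Setting the bit once a download has finished would be a small Win32 call. A
 rough sketch, where 'filename' stands in for the completed download:

   #include <windows.h>
   #include <stdio.h>

   /* mark the completed download as eligible for backup */
   if(!SetFileAttributesA(filename,
                          GetFileAttributesA(filename) |
                          FILE_ATTRIBUTE_ARCHIVE))
     fprintf(stderr, "could not set the archive attribute\n");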

18.12 keep running, read instructions from pipe/socket

 Provide an option that makes curl not exit after the last URL (or even work
 without a given URL), and then make it read instructions passed on a pipe or
 over a socket to make further instructions so that a second subsequent curl
 invoke can talk to the still running instance and ask for transfers to get
 done, and thus maintain its connection pool, DNS cache and more.

18.13 Ratelimit or wait between serial requests

 Consider a command line option that can make curl do multiple serial requests
 slowly, potentially with a (random) wait between transfers. There's also a
 proposed set of standard HTTP headers to let servers let the client adapt to
 its rate limits:
 https://www.ietf.org/id/draft-polli-ratelimit-headers-02.html

 See https://github.com/curl/curl/issues/5406

18.14 --dry-run

 A command line option that makes curl show exactly what it would do and send
 if it would run for real.

 See https://github.com/curl/curl/issues/5426

18.15 --retry should resume

 When --retry is used and curl actually retries a transfer, it should use the
 already transferred data and do a resumed transfer for the rest (when
 possible) so that it doesn't have to transfer the same data again that was
 already transferred before the retry.

 See https://github.com/curl/curl/issues/1084

18.16 send only part of --data

 When the user only wants to send a small piece of the data provided with
 --data or --data-binary, like when that data is a huge file, consider a way
 to specify that curl should only send a piece of that. One suggested syntax
 would be: "--data-binary @largefile.zip!1073741823-2147483647".

 See https://github.com/curl/curl/issues/1200

18.17 consider file name from the redirected URL with -O ?

 When a user gives a URL and uses -O, and curl follows a redirect to a new
 URL, the file name is not extracted and used from the newly redirected-to URL
 even if the new URL may have a much more sensible file name.

 This is clearly documented and helps for security since there's no surprise
 to users which file name that might get overwritten. But maybe a new option
 could allow for this or maybe -J should imply such a treatment as well as -J
 already allows for the server to decide what file name to use so it already
 provides the "may overwrite any file" risk.

 This is extra tricky if the original URL has no file name part at all since
 then the current code path will error out with an error message, and we can't
 *know* already at that point if curl will be redirected to a URL that has a
 file name...

 See https://github.com/curl/curl/issues/1241

18.18 retry on network is unreachable

 The --retry option retries transfers on "transient failures". We later added
 --retry-connrefused to also retry for "connection refused" errors.

 Suggestions have been brought to also allow retry on "network is unreachable"
 errors and while totally reasonable, maybe we should consider a way to make
 this more configurable than to add a new option for every new error people
 want to retry for?

 https://github.com/curl/curl/issues/1603

18.19 expand ~/ in config files

 For example .curlrc could benefit from being able to do this.

 See https://github.com/curl/curl/issues/2317

18.20 host name sections in config files

 Config files would be more powerful if they could set different
 configurations depending on used URLs, host name or possibly origin. Then a
 default .curlrc could use a specific user-agent only when doing requests
 against a certain site.

18.21 retry on the redirected-to URL

 When curl is told to --retry a failed transfer and follows redirects, it
 might get a HTTP 429 response from the redirected-to URL and not the original
 one, which then could make curl decide to rather retry the transfer on that
 URL only instead of the original operation to the original URL.

 Perhaps extra emphasized if the original transfer is a large POST that
 redirects to a separate GET, and that GET is what gets the 429.

 See https://github.com/curl/curl/issues/5462

18.23 Set the modification date on an uploaded file

 For SFTP and possibly FTP, curl could offer an option to set the
 modification time for the uploaded file.

 See https://github.com/curl/curl/issues/5768

18.24 Use multiple parallel transfers for a single download

 To enhance transfer speed, downloading a single URL can be split up into
 multiple separate range downloads that get combined into a single final
 result.

 An ideal implementation would not use a specified number of parallel
 transfers, but curl could:
 - First start getting the full file as transfer A
 - If after N seconds have passed and the transfer is expected to continue for
   M seconds or more, add a new transfer (B) that asks for the second half of
   A's content (and stop A at the middle).
 - If splitting up the work improves the transfer rate, it could then be done
   again. Then again, etc up to a limit.

 This way, if transfer B fails (because Range: isn't supported) it will let
 transfer A remain the single one. N and M could be set to some sensible
 defaults.

 See https://github.com/curl/curl/issues/5774

19. Build

19.1 roffit

 Consider extending 'roffit' to produce decent ASCII output, and use that
 instead of (g)nroff when building src/tool_hugehelp.c

19.2 Enable PIE and RELRO by default

 Especially when having programs that execute curl via the command line, PIE
 renders the exploitation of memory corruption vulnerabilities a lot more
 difficult. This can be attributed to the additional information leaks being
 required to conduct a successful attack. RELRO, on the other hand, marks
 different binary sections like the GOT as read-only and thus kills a handful
 of techniques that come in handy when attackers are able to arbitrarily
 overwrite memory. A few tests showed that enabling these features had close
 to no impact, neither on the performance nor on the general functionality of
 curl.

19.3 Don't use GNU libtool on OpenBSD

 When compiling curl on OpenBSD with "--enable-debug" it will give linking
 errors when you use GNU libtool. This can be fixed by using the libtool
 provided by OpenBSD itself. However for this the user always needs to invoke
 make with "LIBTOOL=/usr/bin/libtool".
 It would be nice if the script could have some magic to detect if this system
 is an OpenBSD host and then use the OpenBSD libtool instead.

 See https://github.com/curl/curl/issues/5862

20. Test suite

20.1 SSL tunnel

 Make our own version of stunnel for simple port forwarding to enable HTTPS
 and FTP-SSL tests without the stunnel dependency, and it could allow us to
 provide test tools built with either OpenSSL or GnuTLS.

20.2 nicer lacking perl message

 If perl wasn't found by the configure script, don't attempt to run the tests
 but print a friendly message explaining why they cannot run.

20.3 more protocols supported

 Extend the test suite to include more protocols. The telnet tests could just
 do FTP or HTTP operations (for which we have test servers).

20.4 more platforms supported

 Make the test suite work on more platforms. OpenBSD and macOS. Remove
 fork()s and it should become even more portable.

20.5 Add support for concurrent connections

 Tests 836, 882 and 938 were designed to verify that separate connections
 aren't used when using different login credentials in protocols that
 shouldn't re-use a connection under such circumstances.

 Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
 connections. The read while() loop seems to loop until it receives a
 disconnect from the client, where it then enters the waiting for connections
 loop. When the client opens a second connection to the server, the first
 connection hasn't been dropped (unless it has been forced - which we
 shouldn't do in these tests) and thus the wait for connections loop is never
 entered to receive the second connection.

20.6 Use the RFC6265 test suite

 A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
 https://github.com/abarth/http-state/tree/master/tests

 It'd be really awesome if someone would write a script/setup that would run
 curl with that test suite and detect deviances. Ideally, that would even be
 incorporated into our regular test suite.

20.7 Support LD_PRELOAD on macOS

 LD_PRELOAD doesn't work on macOS, but there are tests which require it to run
 properly. Look into making the preload support in runtests.pl portable such
 that it uses DYLD_INSERT_LIBRARIES on macOS.

20.8 Run web-platform-tests url tests

 Run web-platform-tests url tests and compare results with browsers on wpt.fyi

 It would help us find issues to fix and help us document where our parser
 differs from the WHATWG URL spec parsers.

 See https://github.com/curl/curl/issues/4477

20.9 Use "random" ports for the test servers

 Instead of insisting on using fixed port numbers for the tests (even though
 they can be changed with a switch), consider letting each server pick a
 random available one at start-up, store that info in a file and let the test
 suite use that.

 We could then remove the "check that it is our server that's running" check
 and we would immediately detect when we write tests wrongly to use hard-coded
 port numbers.
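
 The usual trick for this is to bind to port zero and then ask the OS which
 port it actually picked, roughly like this (a sketch, not current test
 server code):

   #include <stdio.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <arpa/inet.h>

   struct sockaddr_in addr, used;
   socklen_t len = sizeof(used);
   int sock = socket(AF_INET, SOCK_STREAM, 0);

   memset(&addr, 0, sizeof(addr));
   addr.sin_family = AF_INET;
   addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
   addr.sin_port = 0;                     /* 0 = let the OS pick a free port */
   bind(sock, (struct sockaddr *)&addr, sizeof(addr));
   getsockname(sock, (struct sockaddr *)&used, &len);
   printf("server listens on port %d\n", ntohs(used.sin_port));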

21. Next SONAME bump

21.1 http-style HEAD output for FTP

 #undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
 from being output in NOBODY requests over FTP.

21.2 combine error codes

 Combine some of the error codes to remove duplicates. The original
 numbering should not be changed, and the old identifiers would be
 macroed to the new ones in a CURL_NO_OLDIES section to help with
 backward compatibility.

 Candidates for removal and their replacements:

  CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND

  CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND

  CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR

  CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT

  CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT

  CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL

  CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND

  CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED

21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

 The current prototype only provides 'purpose' that tells what the
 connection/socket is for, but not any protocol or similar. It makes it hard
 for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
 similar.

22. Next major release

22.1 cleanup return codes

 curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
 CURLMcode. These should be changed to be the same.

22.2 remove obsolete defines

 remove obsolete defines from curl/curl.h

22.3 size_t

 make several functions use size_t instead of int in their APIs

22.4 remove several functions

 remove the following functions from the public API:

  curl_getenv

  curl_mprintf (and variations)

  curl_strequal

  curl_strnequal

 They will instead become curlx_ alternatives. That makes the curl app
 still capable of using them, by building with them from source.

 These functions have no purpose anymore:

  curl_multi_socket

  curl_multi_socket_all

22.5 remove CURLOPT_FAILONERROR

 Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
 internally. Let the app judge success or not for itself.

22.7 remove progress meter from libcurl

 The internally provided progress meter output doesn't belong in the library.
 Basically no application wants it (apart from curl) but instead applications
 can and should do their own progress meters using the progress callback.

 The progress callback should then be bumped as well to get proper 64bit
 variable types passed to it instead of doubles so that big files work
 correctly.

22.8 remove 'curl_httppost' from public

 curl_formadd() was made to fill in a public struct, but the fact that the
 struct is public is never really used by applications for their own advantage
 but instead often restricts how the form functions can or can't be modified.

 Changing them to return a private handle will benefit the implementation and
 allow us much greater freedoms while still maintaining a solid API and ABI.
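
 The curl_mime API (added in 7.56.0) already follows that handle-based model
 and is the natural replacement to point users to ('curl' is an easy handle):

   curl_mime *mime = curl_mime_init(curl);   /* private handle, no public
                                                struct exposed */
   curl_mimepart *part = curl_mime_addpart(mime);
   curl_mime_name(part, "name");
   curl_mime_data(part, "daniel", CURL_ZERO_TERMINATED);
   curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);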