Python 3.6 urllib TypeError: can't concat bytes to str
The data argument is expected to be a bytes-like object, so you need to encode the JSON string before passing it:

urllib.request.urlopen(api_url, data=bytes(json.dumps(headers), encoding="utf-8"))
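A minimal, self-contained sketch of the same fix (the URL and payload are illustrative; the request is built but not sent):

```python
import json
import urllib.request

# Hypothetical endpoint and payload, just to show the types involved
api_url = "http://example.com/api"
payload = {"key": "value"}

# json.dumps returns str; urlopen's data argument must be bytes
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(api_url, data=body)  # construction only, no network I/O
print(type(body).__name__)   # bytes
print(req.get_method())      # POST -- supplying data makes the request a POST
```

`str.encode("utf-8")` and `bytes(..., encoding="utf-8")` are equivalent here; the point is that urlopen never receives a plain str.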
response.read() returns an instance of bytes, while StringIO is an in-memory stream for text only. Use BytesIO instead. From What's New in Python 3.0, under "Text Vs. Data Instead Of Unicode Vs. 8-bit": The StringIO and cStringIO modules are gone. Instead, import the io module and use io.StringIO or io.BytesIO for text and data respectively.
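A short sketch of the distinction, using a hard-coded bytes value to stand in for response.read():

```python
import io

# Simulate what urllib's response.read() returns: raw bytes
raw = b"%PDF-1.4 fake header"

buf = io.BytesIO(raw)     # BytesIO accepts bytes
print(buf.read(8))        # b'%PDF-1.4'

# io.StringIO is text-only and rejects bytes outright
try:
    io.StringIO(raw)
except TypeError:
    print("StringIO rejected bytes")
```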
For both Python 3 and Python 2, this works:

try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen  # Python 2

req = Request('http://api.company.com/items/details?country=US&language=en')
req.add_header('apikey', 'xxx')
content = urlopen(req).read()
print(content)
Error code 10060 means the socket cannot connect to the remote peer. It might be a network problem, but more often it is a configuration issue on your side, such as a proxy setting. You could try to connect to the same host with other tools (such as ncat) and/or from another PC within the same local network to find out where …
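A quick reachability probe can narrow this down from Python itself, much like testing with ncat. This is a sketch; the helper name and timeout are my own:

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False
```

If this returns False for the host your urllib code targets, the problem is connectivity (firewall, proxy, routing), not urllib.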
import urllib.request as req

proxy = req.ProxyHandler({'http': r'http://username:password@url:port'})
auth = req.HTTPBasicAuthHandler()
opener = req.build_opener(proxy, auth, req.HTTPHandler)
req.install_opener(opener)

conn = req.urlopen('http://google.com')
return_str = conn.read()
Here is an example that works (note it uses Python 2's urllib2):

import urllib2

def main():
    download_file("http://mensenhandel.nl/files/pdftest2.pdf")

def download_file(download_url):
    response = urllib2.urlopen(download_url)
    file = open("document.pdf", 'wb')
    file.write(response.read())
    file.close()
    print("Completed")

if __name__ == "__main__":
    main()
Check out urllib.urlretrieve's complete code:

def urlretrieve(url, filename=None, reporthook=None, data=None):
    global _urlopener
    if not _urlopener:
        _urlopener = FancyURLopener()
    return _urlopener.retrieve(url, filename, reporthook, data)

In other words, you can use urllib.FancyURLopener (it's part of the public urllib API). You can override http_error_default to detect 404s:

class MyURLopener(urllib.FancyURLopener):
    def http_error_default(self, url, fp, errcode, errmsg, headers):
        # handle …
I suspect this is implementation-dependent. That said, for CPython: from socket.create_connection,

If no timeout is supplied, the global default timeout setting returned by :func:`getdefaulttimeout` is used.

From socketmodule.c:

static PyObject *
socket_getdefaulttimeout(PyObject *self)
{
    if (defaulttimeout < 0.0) {
        Py_INCREF(Py_None);
        return Py_None;
    }
    else
        return PyFloat_FromDouble(defaulttimeout);
}

Earlier in the same file, static double defaulttimeout …
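You can observe this from Python: in a fresh interpreter the default is unset, so getdefaulttimeout() returns None (the C code above returns Py_None when defaulttimeout is negative). A small sketch:

```python
import socket

# In a fresh interpreter this is typically None (no global timeout)
print(socket.getdefaulttimeout())

# Set a global default; new sockets created via create_connection inherit it
socket.setdefaulttimeout(10.0)
print(socket.getdefaulttimeout())   # 10.0

socket.setdefaulttimeout(None)      # restore "no timeout" behavior
```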
If you need to write code that is compatible with both Python 2 and Python 3, you can use the following import:

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse  # Python 2
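Either way, the imported name behaves identically afterwards; the example URL below is illustrative:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

parts = urlparse("http://api.company.com/items/details?country=US&language=en")
print(parts.netloc)   # api.company.com
print(parts.path)     # /items/details
print(parts.query)    # country=US&language=en
```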
Thanks to you guys, I finally figured out how it works. Here is my code:

request = urllib.request.Request('http://mysite/admin/index.cgi?index=127')
base64string = base64.b64encode(bytes('%s:%s' % ('login', 'password'), 'ascii'))
request.add_header("Authorization", "Basic %s" % base64string.decode('utf-8'))
result = urllib.request.urlopen(request)
resulttext = result.read()

After all, there is one more difference with urllib: the resulttext variable in my case had the type of …