import pandas as pd
from sagemaker import get_execution_role

# Resolve the IAM role attached to this SageMaker notebook instance
role = get_execution_role()

bucket = 'my-bucket'
data_key = 'train.csv'
data_location = f's3://{bucket}/{data_key}'

# pandas can read s3:// URLs directly (this requires the s3fs package)
df = pd.read_csv(data_location)