You can use cumcount to avoid a dummy column:
>>> df["Occ_Number"] = df.groupby("Name").cumcount() + 1
>>> df
  Name  Occ_Number
0  abc           1
1  def           1
2  ghi           1
3  abc           2
4  abc           3
5  def           2
6  jkl           1
7  jkl           2
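For reference, a minimal self-contained sketch (the sample Name values are assumed to match the output above):

```python
import pandas as pd

# Assumed sample data: names repeat in the order shown in the output above
df = pd.DataFrame({"Name": ["abc", "def", "ghi", "abc", "abc", "def", "jkl", "jkl"]})

# cumcount() numbers each row within its group starting at 0,
# so adding 1 gives a 1-based occurrence counter per name
df["Occ_Number"] = df.groupby("Name").cumcount() + 1
print(df)
```

cumcount keeps the original row order, so each row's Occ_Number tells you how many times its name has appeared so far.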