I have a file that contains flow statistics from a switch. It is a big file with a lot of values, and I would like to print out the statistics based on user input.
I'm having a hard time figuring out which approach is more efficient, a dictionary or a list. I'm looking for a way to split each line into keys and values.

element = {"cookie": " ", "duration": " ", "table": " ", "n_packets": " ", "n_bytes": " ",
           "idle_timeout": " ", "idle_age": " ", "priority": " ", "arp": " ", "in_port": " ",
           "vlan_tci": " ", "dl_src": " ", "dl_dst": " ", "arp_spa": " ", "arp_tpa": " ",
           "arp_op": " ", "actions": " "}

c = []    # cookie list
dur = []  # duration list

with open("flow.txt") as f:   # open the file
  for line in f:              # loop over the lines
    if "cookie" in line:      # if a cookie exists on this line, append its value to c
      c.append(line.split("cookie=")[1].split(",")[0])
    if "duration" in line:    # if a duration exists on this line, append its value to dur
      dur.append(line.split("duration=")[1].split(",")[0])

Your question appears to be: "which is most efficient, a dictionary or a list?" To answer that you would have to try both and benchmark the results. I hate to write this, but where I work, if the report doesn't take more than a minute to run, we rarely bother rewriting it for speed.
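If you do want to benchmark, the standard-library `timeit` module makes it easy. Here is a minimal sketch; the sample data and the two storage strategies are illustrative, not taken from your file:

```python
import timeit

# Fabricated sample of (name, value) pairs standing in for parsed flow lines
sample = [("cookie", "0x0"), ("duration", "61.538s")] * 1000

def use_dict():
    # one dict holding a list per field name
    stats = {"cookie": [], "duration": []}
    for name, value in sample:
        stats[name].append(value)
    return stats

def use_lists():
    # one separate list per field name
    cookies, durations = [], []
    for name, value in sample:
        if name == "cookie":
            cookies.append(value)
        else:
            durations.append(value)
    return cookies, durations

# Run each version 100 times and print the elapsed seconds
print("dict :", timeit.timeit(use_dict, number=100))
print("lists:", timeit.timeit(use_lists, number=100))
```

Whichever wins, the difference will almost certainly be dwarfed by the file I/O, which is why I would pick whichever structure makes the lookup code clearer.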

It seems to me you need to scan the input file, line by line, looking for (name, value) pairs. The file is very regular. To find the value for any named item, I'd just use a regular expression to scan for something that looks like "name=", e.g. "cookie=" or "duration=". From the character just after the "=", extract everything up to the next comma or end of line, and that is the corresponding value. The regular expression can hand you the item name without the equals sign and the value without the trailing comma.
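That idea can be sketched in a few lines with the `re` module. The sample line below mimics typical `ovs-ofctl dump-flows` output; adjust it to match your actual file:

```python
import re

# Example input line (fabricated, in the style of switch flow dumps)
line = ("cookie=0x0, duration=61.538s, table=0, n_packets=1, "
        "n_bytes=98, idle_age=1, priority=1,in_port=2 actions=output:1")

# name=value, where the value runs up to the next comma or whitespace;
# group 1 captures the name, group 2 captures the value
pair = re.compile(r"(\w+)=([^,\s]+)")

stats = dict(pair.findall(line))
print(stats["cookie"])    # -> 0x0
print(stats["duration"])  # -> 61.538s
```

With the pairs in a dict like this, answering a user's query is just a key lookup, so you get the "split my keys and values" behavior you asked about for free.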

Regular expressions are explicitly designed to scan a string and pick out patterns in it. The patterns can be quite complex; I've seen a regex that can verify whether an email address is syntactically correct according to the formal standard. Your situation is a very simple one.