Could one be preferred over the other in terms of performance? Both options have to traverse the whole array somehow to find out which strings contain an 'a'. Or does it not matter much, and is it just a syntax thing?
Or are there better ways to do this?
All your opinions are greatly appreciated. Here's the code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;

namespace ConsoleApplication10
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] fruits = { "prune", "apple", "pear", "banana", "cherry", "orange", "blueberry" };

            Console.WriteLine(" --- Option1 use of LINQ");
            Option1(fruits);
            Console.WriteLine(" --- Option2 use of List");
            Option2(fruits);
            Console.ReadKey();
        }

        static void Option1(string[] F)
        {
            IEnumerable<string> query = F.Where(fruit => fruit.Contains("a"));
            foreach (string f in query)
            {
                Console.WriteLine(f);
            }
        }

        static void Option2(string[] F)
        {
            List<string> query = new List<string>();
            for (int i = 0; i < F.Length; i++)
            {
                if (F[i].Contains("a"))
                {
                    query.Add(F[i]);
                }
            }
            foreach (string f in query)
            {
                Console.WriteLine(f);
            }
        }
    }
}

Option 1 would be preferred in this case. In option 2 you're allocating a whole new data structure to hold the filtered results, and you're also iterating the original array in its entirety once, followed by the filtered results once.

In option 1 you have what essentially boils down to a lazily evaluated yielding iterator. So it doesn't allocate a new collection to hold the results, and it only traverses the original array once (filtering out items that fail your predicate as it goes).

I'd expect option 1 to be faster as well as less wasteful of resources. But the compiler may optimize things such that the resulting difference is negligible.
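To make "lazily evaluated yielding iterator" concrete, here is a rough sketch of what a deferred `Where` boils down to, written with `yield return`. The `WhereLazy` name is made up for illustration; this is a simplified stand-in, not the actual implementation of `Enumerable.Where`:

```csharp
using System;
using System.Collections.Generic;

class LazyWhereDemo
{
    // Simplified stand-in for Enumerable.Where: nothing runs until
    // the caller starts iterating the returned sequence.
    public static IEnumerable<string> WhereLazy(IEnumerable<string> source, Func<string, bool> predicate)
    {
        foreach (string item in source)
        {
            if (predicate(item))
                yield return item; // hand back one match at a time
        }
    }

    static void Main()
    {
        string[] fruits = { "prune", "apple", "pear" };

        // No filtering happens on this line -- the query is just a recipe.
        IEnumerable<string> query = WhereLazy(fruits, f => f.Contains("a"));

        // The work happens here, one element per loop step,
        // with no intermediate list of results.
        foreach (string f in query)
            Console.WriteLine(f); // prints "apple" then "pear"
    }
}
```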

Comments
Great!

Thanks for your answer decepticon. I've heard about lazy evaluation, but don't know much about it. I'll give it a go with our good friend Google! And of course in this trivial example speed of execution would not matter much, but it might make a difference with a list of say 10000 items.
Thanks again!
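For what it's worth, you can get a rough feel for the difference yourself with `Stopwatch`. This is not a rigorous benchmark (no warm-up, single run, and the 10000-item data set is made up for this sketch), so treat the numbers as illustrative only:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

class TimingSketch
{
    static void Main()
    {
        // 10000 strings, half containing an 'a', as suggested above.
        string[] items = Enumerable.Range(0, 10000)
                                   .Select(i => i % 2 == 0 ? "apple" : "prune")
                                   .ToArray();

        // Option 1 style: deferred LINQ query, executed by Count().
        Stopwatch sw = Stopwatch.StartNew();
        int linqCount = items.Where(s => s.Contains("a")).Count();
        sw.Stop();
        Console.WriteLine($"LINQ: {linqCount} matches in {sw.ElapsedTicks} ticks");

        // Option 2 style: manual loop into a new list.
        sw.Restart();
        List<string> list = new List<string>();
        for (int i = 0; i < items.Length; i++)
        {
            if (items[i].Contains("a"))
                list.Add(items[i]);
        }
        sw.Stop();
        Console.WriteLine($"Loop: {list.Count} matches in {sw.ElapsedTicks} ticks");
    }
}
```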

Edited 3 Years Ago by ddanbe: typo

Just thought I'd throw my 2p in.

Be careful with lazy evaluation, you can easily cripple performance by re-iterating queries you already executed.

IEnumerable<int> query = myIntList.Where(i => i > 0);

long result = 0;
// Executes the query
foreach (int i in query)
{
    result += i;
}

// Executes the query again
foreach (int i in query)
{
    result += (i * 2);
}

It's good for things like this though...

IEnumerable<int> queryGreaterThanZero = myIntList.Where(i => i > 0); // Not Executed
IEnumerable<int> queryFilterFive = queryGreaterThanZero.Where(i => i != 5); // Still not executed
IEnumerable<int> queryFinal = queryFilterFive.Where(i => i < 10); // Guess what... ;)

// Executes all queries once only
foreach(int i in queryFinal)
{
    Console.WriteLine(i);
}

// Be careful doing this...

// Count() executes the whole query here (Any() would at least stop at the first match)
if (queryFinal.Count() > 0)
{
    // Uh oh, executes again
    foreach(int i in queryFinal)
    {
        Console.WriteLine(i);
    }
}
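If you know you'll enumerate the results more than once, materialising the query a single time with `ToList()` (or `ToArray()`) avoids re-running it. A minimal sketch, reusing the `myIntList` idea from above with made-up sample values:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MaterialiseDemo
{
    static void Main()
    {
        List<int> myIntList = new List<int> { -1, 3, 5, 7, 12 };

        // ToList() executes the query exactly once and caches the results.
        List<int> results = myIntList.Where(i => i > 0 && i != 5 && i < 10).ToList();

        // Count is now a cheap property read, not a re-execution.
        if (results.Count > 0)
        {
            foreach (int i in results)
                Console.WriteLine(i); // prints 3 then 7
        }
    }
}
```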

Deceptikon is spot on that no intermediate collection of results is created, though. That is the single biggest advantage, especially when working with humongous datasets.

Comments
Thanks for valuable feedback!