Hello all. I'm in the final year of my B.Tech. in computer science programme and I have to do a final-year project. The thing is, I want the project to be unique/innovative. I brainstormed a few ideas:

  1. An online music player that stores your local computer files in the cloud. It sorts them by album, artist, stars, favorites, etc. You can listen to your songs anywhere, on any device, once they are synced. But this is already implemented by Amazon as cloudplayer.com. Is there any extra or more innovative feature that I could implement?
  2. Then I thought of building an OS from scratch, or maybe reusing a few existing components. But is that feasible in 7-8 months for a group of two people?
  3. A search engine that uses big data to query and rank results using MapReduce. Is it good enough? Is it feasible?

I'd really like to know your thoughts on these, or maybe you can suggest some other project or idea that I could implement.
Thanks in advance.


So, I'm in my final year of my B.Tech. in computer science programme

You're in your final year, but you haven't found something you might be interested in?

The problem with the 3 things you listed is that they don't have much to do with computer science at all. They sound more like "software engineering"/"software development"/"programming" projects, which is quite a bit different from computer science. Is that OK with you?

Also, they seem to be "big" projects, but they're not interesting in a computational sense.

For projects more closely related to computer science:

  1. How about a GPU-implemented neural network for phones? The problem with phones is that most don't have OpenCL/CUDA availability, which means you'll probably want to use OpenGL shaders. I believe this would be the first of its kind, since most libraries I see are based on either CUDA, OpenCL, or the CPU. It's also somewhat useful, and could be interesting for a game.

  2. It's not anything new, but how about making a real-time operating system that focuses on energy efficiency? It's a little more interesting than a plain old OS. Yes, it should be very possible for 2 people to write it in 7 months.

  3. Using MapReduce for a search engine seems like a bad idea (without some serious optimizations). If you wanted to (I'm presuming you're good at linear algebra), you might be able to apply MapReduce to something like Google's PageRank algorithm. I guess technically PageRank is a MapReduce, but the idea is to use linear algebra so it can be optimized for, and run on, the GPU.

  4. How about solving a murder? You can use an FFT to quickly compute a cross-correlation. So, when a gun is fired (and you only have an audio recording of the incident), you can match the sound of the gun against a database of gun sounds to determine what kind of gun it was. This can also be extended to other forms of signal processing, like images (possibly matching faces), and things like that. It can be made faster with the help of a GPU.

  5. If you're into music, then how about making more of a playlist-sharing site? Using statistics and matching algorithms, you can suggest songs that the user might like but hasn't heard yet.
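For idea 4, the FFT-based matching step could be sketched roughly like this in Python (a toy sketch with NumPy; the names `xcorr_peak` and `best_match` are invented here, and a real forensic system would also need onset detection and noise handling):

```python
import numpy as np

def xcorr_peak(recording, template):
    """Peak cross-correlation between two signals, computed via FFT
    (O(n log n) instead of the naive O(n^2) sliding dot product)."""
    n = len(recording) + len(template) - 1
    size = 1 << (n - 1).bit_length()           # round up to a power of two
    R = np.fft.rfft(recording, size)
    T = np.fft.rfft(template, size)
    corr = np.fft.irfft(R * np.conj(T), size)  # correlation theorem
    return float(np.max(np.abs(corr)))

def best_match(recording, database):
    """Return the database entry whose waveform correlates most strongly."""
    unit = lambda x: x / (np.linalg.norm(x) + 1e-12)  # amplitude-normalise
    rec = unit(np.asarray(recording, dtype=float))
    return max(database, key=lambda name: xcorr_peak(rec, unit(database[name])))
```

Normalising both signals first means a louder template can't win on amplitude alone; the FFT trick is what makes scanning a large sound database tractable.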


Hiroshe has some great suggestions. Myself, of those I would suggest the low-power real-time OS (having worked with embedded real-time systems for 30+ years). Look at Thoth (the original research micro-kernel OS that QNX came from), QNX, and other such systems. FWIW, a micro-kernel approach with message-passing between the kernel and drivers, applications, etc. provides the best (in my opinion) possibilities to minimize power consumption in a system.

In any case, for any of these options, research is required. That's why it's called "computer science". :-)

Thanks, both of you. Actually my programme's complete name is computer science and engineering. And so, even

software engineering"/"software development"/"programming

ideas are welcome.
Well, I'm interested in web development and have brainstormed another idea; tell me what you think of it:
A website which allows users to add websites/web pages for review. Other registered users in the community will rate the pages on a scale of 1-10 and may even write a review. Later, I could also create a Firefox add-on which shows a website's rating in Google search results.

A website which allows users to add websites/web pages for review. Other registered users in the community will rate the pages on a scale of 1-10 and may even write a review. Later, I could also create a Firefox add-on which shows a website's rating in Google search results.

I don't really know what your standards are, but essentially that's just a forum/review site. It's not computationally interesting at all. I would expect a high school web developer to be able to do that easily. Is your prof OK with that?

It's not interesting computationally

Well, there are a few things that are not clearly visible from the problem statement I gave above. A casual user's rating is less valuable than that of a regular or dedicated user, which I intend to handle using a credibility point system. A rating from a user with high credibility points is worth more than one from a lower-credibility user. Credibility will be gained through the user's activities on the site.
Also, some companies could spam the site to inflate their website's rating, which can be mitigated using the above system, or maybe I can think of a better idea later, so that's a point of challenge as well.
Plus, an add-on for common browsers is the main portion of this idea, as the ratings will give users an idea of whether a site is good right in the search results.
We can also categorize websites, so users can use the site to discover good websites in a particular field.

Maybe we can discuss some ideas on how to make this idea more computationally interesting. I think it's a very simple idea, but it has not been implemented yet. And it would help make the web a better place.

I think it's a very simple idea, but it has not been implemented yet.

Plugin and reviews: https://www.mywot.com/

Rankings and other statistics: http://www.alexa.com/

A user with high credibility points, rating the web site is more valuable than a lower credibility user.

This isn't anything new, and I would expect it of any ranking service. It's just using statistics to get meaningful summaries of the information. If you can design an interesting ranking algorithm (one that does not introduce bias!), then you might be able to build a project around the algorithm itself.

It's easy to introduce bias, though. For example, if you make an algorithm that gives users who score closer to the average more reputation (which makes intuitive sense), then users will choose scores closer to the average to gain reputation, introducing a bias.
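That bias is easy to see in a toy simulation (everything here is invented for illustration: the `pull` parameter models how strongly a strategic user chases reputation by blending their honest opinion with the running average):

```python
import random

def simulate(n_users=5000, pull=0.5, seed=1):
    """Users hold honest opinions uniform in [0, 10].  If rating near the
    current average earns reputation, a strategic user reports
    (1 - pull) * honest + pull * running_average."""
    random.seed(seed)
    honest, reported = [], []
    avg, total = 5.0, 0.0                    # starting average, running sum
    for _ in range(n_users):
        t = random.uniform(0, 10)
        r = (1 - pull) * t + pull * avg      # strategic, rep-chasing report
        honest.append(t)
        reported.append(r)
        total += r
        avg = total / len(reported)          # average drifts as votes arrive

    def sd(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    return sd(honest), sd(reported)
```

With `pull = 0.5` the reported standard deviation comes out at roughly half the honest one: the incentive compresses everyone towards the norm, so the published scores understate how much people actually disagree.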

If you make an interesting algorithm (and the project will be based on the algorithm, so advertise it as an "effective rating algorithm implementation"), then it would be something interesting.

Thanks for your algorithm advice; I really like it, and I'm thinking about it now.
Plus, the two links you gave are completely different from my idea. My website will ask users to rate websites based on performance, content, credibility, and whatever else they can think of; maybe I'll ask them for an overall rating, or for ratings in various fields like performance and content.
WOT is simply about malware or websites that are bad security-wise; when people rate/review, they'll certainly cover that. Plus, you also see questions like "is XYZ.com a scam?" in forums, so users will rate such websites badly if they really are.
Alexa ranks websites based on traffic data, while what I intend to do is rank websites based on actual user experience. A website with high traffic may not have content that everybody likes.
Say there is a tutorial on a popular topic on a well-known tutorial website, but most people don't like this particular tutorial because it isn't clear or is hard to understand. Users won't rate it highly, so when searching for a tutorial on that topic you'll know that, even though it shows up high in Google's results, people on the web don't really like what it has to say.

@Hiroshe: Here is an initial idea I have thought of.
First of all, a user gains/loses reputation points when receiving up/down votes on a review he/she writes.
Secondly, the idea: I aimed at kind of simulating upvotes on the ratings themselves, i.e. people agreeing with your rating. So I propose: say a user gives a 5/10 rating to a website; if any other user in the future gives the same rating, the first user gains rep, and the rep gained also depends on the reputation of the user who rated the same.
What do you say? Does it have a flaw, or is it too simple? Also, I would like to know whether it is too inefficient to implement, because each time a user rates I may have to update the reputations of multiple users.

WOT has fuller reviews, but it is focused on safety. Alexa is an example of how statistics can be used.

First of all, a user gains/loses reputation points when receiving up/down votes on a review he/she writes.

That's standard. We have a similar system at DaniWeb.

Secondly, the idea: I aimed at kind of simulating upvotes on the ratings themselves, i.e. people agreeing with your rating.

Well, when a user likes a review, they don't necessarily like the website; they like the review. If the review is good but rates the website as bad, then if anything an upvote for the review should count as a downvote for the website. Also, sometimes reviewers will like other reviewers' posts, which creates a bias towards reviewers who trade votes. So be careful.

Say a user gives 5/10 rating to a website then if any other user in the future also gives the same rating to the website then you gain reps

This is what I just cited not to do. It gives users an incentive to write reviews that agree with the average (so the user gets more rep), which is a bias towards the norm. You need to make sure that there is NO incentive to write a review with a specific rating, in order to avoid biases. Users should be given reputation points for writing a good, well-thought-out review, regardless of what the actual score was.

The system you're talking about is tending towards a weighted rating system, which is similar to what we use at DaniWeb. It's not anything new.

How about you look into Google's PageRank algorithm (which I linked in another post) and derive an algorithm from it for a weighted review system? You'll need to replace "links" with something a little more inventive (what do you think you should use?), and you can use it to weight the reviews themselves so you can compute a nice weighted sum for the final score. You could also incorporate an Elo system for user ratings instead of something static, for a bit more accuracy (and to make it harder to cheat). This also gives you something interesting to talk about (assuming you'll need to do a write-up): it can be optimized using the GPU or even FPGAs, which is interesting in itself.

I'll look into your suggestion and try out a completely new approach instead of the weighted approach. But I have to say, though:
you've got me a bit wrong.

Well, when a user likes a review, they don't necessarily like the website.

Yeah, and I am not saying that at all. When a user upvotes/downvotes a review, the user who wrote that review gains/loses reputation because other users liked or disliked his review, whether the review itself was positive or negative.

This is what I just cited not to do. It gives users an incentive to make reviews that agree with the average (so then the users gets more reps) which is a bias towards the norm.

There is no incentive here for dishonest ratings. In fact, there is an incentive to rate honestly. Consider how the average rating comes about. Say the first user rates it 9/10; he doesn't know whether others will rate it the same in the future, so his best bet is to rate honestly and expect others who rate honestly to give it the same rating.
Then comes the second user; the same problem arises for him too, so he also has to rate honestly, and so on.
This system fails only if every user assumes that every other user is malicious and will rate at exactly the average. So it does have flaws, and I'll design a better new algorithm now. But I just wanted to make myself clear, that's all.

There is no incentive here for dishonest ratings. In fact, there is an incentive to rate honestly. Consider how the average rating comes about. Say the first user rates it 9/10; he doesn't know whether others will rate it the same in the future, so his best bet is to rate honestly and expect others who rate honestly to give it the same rating. Then comes the second user; the same problem arises for him too, so he also has to rate honestly, and so on.

OK, let's say that the average is 8/10, but you didn't like the website. If you rate it lower, then you'll get less reputation even if your review is great. The existence of that incentive breaks the statistics (any statistician would agree that for accurate statistics, there can be no incentive). From a statistics standpoint, this is flawed (it's one of the things that's stressed in first-year statistics).

Also, if you do this, you need to consider other statistics like standard deviation, and sometimes even grouping. If a website has a high standard deviation, then it would be inaccurate to give more reputation to people who rate close to the average, since the average is not representative. For example, in an extreme case, consider a website that has 50% 0/10 ratings and 50% 10/10 ratings. The average is 5/10, but it's not representative, and nobody gets the extra reputation. (In mathematics, extreme cases are a good way to see how things like this behave in general.)
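That extreme case takes only a few lines to check with the standard library:

```python
import statistics

ratings = [0] * 50 + [10] * 50        # polarised: half hate it, half love it
mean = statistics.mean(ratings)       # 5.0, yet nobody actually rated it a 5
spread = statistics.pstdev(ratings)   # 5.0, the maximum possible on a 0-10 scale
```

Under a "reward closeness to the average" rule, an honest 0 or 10 here sits maximally far from the mean, so the very users who captured the site's polarising nature would be the ones punished.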

Another point on this: even though people disagree about ratings, that doesn't mean that a review itself is good or bad. You can have two conflicting reviews and have both of them be excellent reviews at the same time. True, if more people agree with one review, then that is how the rating of the website will be represented. However, a user shouldn't lose reputation if his review is still a great review (just from a different standpoint). People often like reading reviews that conflict with each other, because it highlights both the good and the bad of the website.

This would work if the rating were blind (i.e. reviewers cannot see the current average rating, so the incentive is no longer there). In order to do that, you'd need to collect all of the ratings at the same time and not reveal the final result until afterwards. That does not work for websites, since they're always changing.

Most schools have a statistics department available as a resource for students if you want to verify that your statistics are sound.

Another idea might be a personalised score: if you like a review, then (instead of giving reputation to a user) the personal score that you see weighs that person's reviews higher in the scoring process. It also slightly up-weights people who agree with that person and their opinions. That way users are "bubbled into" a more agreeable score according to their preferences.

The problem (benefit?) with that is it will require some user interaction to determine what they like to see. Of course, this creates a bit of a bubble (by design), but sometimes that can be useful for determining whether you'll like the content or not.

You can use a PageRank-like algorithm to do this quickly and accurately.
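In its simplest form, the personalised weighting might look like this (a toy sketch; the name `personalised_score` and the flat 2x boost are invented here, and a fuller system would propagate trust transitively, PageRank-style):

```python
def personalised_score(my_helpful_marks, reviews):
    """reviews: list of (reviewer, rating) pairs for one website.
    Reviewers this user has previously marked helpful count double,
    so the score the user sees leans towards voices they trust."""
    weights = [2.0 if who in my_helpful_marks else 1.0 for who, _ in reviews]
    total = sum(w * rating for w, (_, rating) in zip(weights, reviews))
    return total / sum(weights)
```

For reviews `[("alice", 9), ("bob", 3)]` and a user who trusts alice, the score is (2*9 + 3) / 3 = 7.0 rather than the plain average of 6.0; each user sees their own number, so there is no shared average to game.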

Thanks, yeah, I get your point. But I like this project:
"An effective rating algorithm and its implementation for rating web content."
What do you say? Shall I seal it and finalize it? I'll develop an algorithm and then build and host the website. An add-on can be made later.

If you need one other suggestion which would cover both computer science and software engineering, and involves an actual open research problem, you might look into the question of compiling and linking generics (such as C++ templates) in an object format. It is a difficult issue because instantiating a generic involves generating new code based on the generic function or class at compile time - the compiler would have to be able to extract the generic from the object file without necessarily having the source code available in its original form. The advantages would be significant, however, as it would allow library writers to remove the template implementations from the headers and keep them in the source files, where they belong.

I would recommend looking at the ELF format to begin with, as it is a completely open standard and would be easier to extend, IMAO. If you succeed in this, consider doing the same with the PE format (convincing Microsoft to adopt your extensions is left as an exercise for the reader).

Thanks, guys.
