Extension of the difficulty rating

Announcements, comments, ideas, feedback, and "How do I... ?" questions
ZeHgS
Posts: 4
Joined: Sat Jul 28, 2018 4:36 am

Extension of the difficulty rating

Post by ZeHgS » Mon Aug 13, 2018 4:19 am

Hi guys!

What do you think of an extension of the difficulty rating in the style of Elo ratings, but adjusted for the non-competitive nature of the problems? The existing difficulty ratings could serve as a basis, and the total rating would be a sum over the problems solved, weighted by how difficult each one is. There could then be a leaderboard and a ranking based on problems solved rather than on fastest solvers.
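Purely to illustrate what I mean, here is a minimal sketch of an Elo-style update in which solving a problem counts as a "win" against it. The mapping from difficulty percentage to a problem rating and the K-factor are made-up placeholders, not anything Project Euler actually uses:

[code]
# Python, illustrative only: an Elo-style rating where each solved problem is a "win".
def expected(player_rating, problem_rating):
    """Standard Elo expected score of the player against the problem."""
    return 1.0 / (1.0 + 10 ** ((problem_rating - player_rating) / 400.0))

def solve(player_rating, difficulty_percent, k=32):
    """Update the rating after a solve; harder problems move it more."""
    problem_rating = 1000 + 10 * difficulty_percent   # invented mapping
    return player_rating + k * (1.0 - expected(player_rating, problem_rating))

rating = 1200
for difficulty in (5, 35, 70):   # difficulty percentages of three solved problems
    rating = solve(rating, difficulty)
print(round(rating))
[/code]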

Also, the "Performance" column here https://projecteuler.net/eulerians refers to how quickly the problem was solved after it was posted, right? Not how efficient the algorithm was? If so, are there any plans to implement something that would automatically check algorithm efficiency as well?

Animus
Administrator
Posts: 1616
Joined: Sat Aug 16, 2014 12:23 pm

Re: Extension of the difficulty rating

Post by Animus » Mon Aug 13, 2018 7:46 am

Hi ZeHgS,

let me answer your last question first: we have no plans to measure the efficiency of individual solutions.
There are many competitive programming sites that already offer this kind of benchmarking.

However, quite apart from not wanting Project Euler to become another look-alike, such a benchmarking system has several drawbacks that bother me:
First, you have to restrict which programming languages and environments can be used, and even among the allowed languages the comparison is biased (for example, would you still prefer Python to C++ if you came in last every time?).
Moreover, debugging becomes harder when your program runs fine in your own environment but just doesn't get the green check mark in the benchmarking system, which shifts the focus away from simply implementing a decent algorithm.
And finally, until benchmarking is over you can't give users access to the solution forum, in order to prevent them from simply reusing the ideas discussed there to improve their results.

All of this makes benchmarking more helpful in measuring existing skills than in actually developing them, which is Project Euler's foremost goal.

That being said, while we don't want to measure runtimes, we still have the one-minute rule, which many participants take pride in adhering to. Especially for hard problems this means implementing a sophisticated algorithm, and good algorithms presented in the problems' forums are regularly rewarded with kudos, so there is definitely some acknowledgement for efficient algorithms after all.

ZeHgS
Posts: 4
Joined: Sat Jul 28, 2018 4:36 am

Re: Extension of the difficulty rating

Post by ZeHgS » Mon Aug 13, 2018 6:37 pm

Thanks a lot for your answer!

What about some other way of ranking members based on the number of problems they have solved and their difficulty? I ask because, honestly, the "Eulerians" page doesn't mean much to me personally and, in my opinion, isn't relevant to people who don't want to compete to be among the first to solve a problem.

In other words, I wish there were a better way of measuring my progress against other people who just want to solve as many problems as possible. Even though this is not a competitive environment, I miss having an Elo-like rating to gauge my improvement.

hk
Administrator
Posts: 10403
Joined: Sun Mar 26, 2006 9:34 am
Location: Haren, Netherlands

Re: Extension of the difficulty rating

Post by hk » Mon Aug 13, 2018 7:34 pm

Once you have solved 25 or more problems you are listed at one of the levels 1 through 24:
Level 1: solved 25-49
Level 2: solved 50-74
...
Level 23: solved 575-599
Level 24: solved 600+
You can access those levels by means of the Levels tab in Statistics.
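For illustration only (this is not the site's actual code), the bands above amount to something like:

[code]
# Python sketch of the level bands listed above, assuming they keep stepping by 25.
def level(solved):
    if solved < 25:
        return 0              # not yet listed at any level
    return min(solved // 25, 24)

assert level(49) == 1 and level(50) == 2 and level(600) == 24
[/code]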

You can track your progress in those levels.

The Eulerians list is meant for those who compete to be among the fastest solvers of the most recent problems.
