I recently applied to a job from an HN monthly post and was told that the company is looking for more experienced people. That made me think about metrics for measuring experience, which in turn raised questions about self-assessment. I realised I might lack that skill.
Therefore, I'm turning to the HN community to see how others here approach it.
Note: I'm mostly referring to technical self-assessment, but I'm curious about self-assessment in other domains too (specifically in those that have a more fluid definition of "quality").
Imagine this:
P: Problems that can be quickly solved by algorithms on modern hardware. NP: Problems where solutions can be quickly verified for truth by humans (or algorithms).
The real-life P = NP question: Can every text generated quickly by an algorithm (news summary, scientific claim, legal doc) be quickly verified for truthfulness by a human or automated system?
How would this approach change our current methods for verifying the accuracy of generated content in journalism, academia, and law?
What are the potential limitations or challenges in framing P = NP this way? What better models do you have?
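One concrete way to see the asymmetry the analogy leans on is subset-sum, a classic NP-complete problem: finding a solution can require exponential search, while checking a proposed solution (a "certificate") is fast. A minimal sketch (the function names and the example numbers are mine, just for illustration):

```python
import itertools

def solve_subset_sum(nums, target):
    """Brute-force search over all subsets: exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for combo in itertools.combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_subset_sum(nums, target, certificate):
    """Check a proposed solution: roughly linear in the input size."""
    counts = {}
    for n in nums:
        counts[n] = counts.get(n, 0) + 1
    for c in certificate:          # certificate must only use available numbers
        if counts.get(c, 0) == 0:
            return False
        counts[c] -= 1
    return sum(certificate) == target

nums = [3, 34, 4, 12, 5, 2]
cert = solve_subset_sum(nums, 9)       # slow to find...
print(cert)
print(verify_subset_sum(nums, 9, cert))  # ...fast to check: True
```

The real-life version of your question is whether generated text admits anything like such a cheap certificate of truthfulness, which is far from obvious.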
What's your advice on leading a project where everyone is higher in seniority than me?
Thanks!
I'm looking for research ideas for my master's thesis, and I'm quite bad at determining what's feasible within 6-10 months' worth of work (approx. 15h/week).
Currently I'm thinking about either measuring bias/variance in LLMs or looking into time-series applications of transformer-based models, but I'm still in the exploration phase and I'm very much open to any thoughts/remarks/suggestions :DD
Thanks & wishing y'all a nice end of February & nicer beginning of March ^^