Email: me@dipankar.name
I built SolScript, a compiler that lets you write smart contracts in Solidity syntax and deploy them to Solana.
The problem: Solana has mass dev interest (17k+ active developers in 2025), but the Rust learning curve remains a 3-6 month barrier. Anchor helps, but you still need to grok ownership, lifetimes, and borrowing. Meanwhile, there are 30k+ Solidity developers who already know how to write smart contracts.
SolScript bridges that gap. You write this:
    contract Token {
        event Transfer(address indexed from, address indexed to, uint256 amount);

        mapping(address => uint256) public balanceOf;

        function transfer(address to, uint256 amount) public {
            balanceOf[msg.sender] -= amount;
            balanceOf[to] += amount;
            emit Transfer(msg.sender, to, amount);
        }
    }
And it compiles to a native Solana program with automatic PDA derivation, account validation, and full Anchor compatibility.

How it works:
- Parser turns Solidity-like source into an AST
- Type checker validates and annotates
- Two codegen backends: (1) Anchor/Rust output compiled with cargo build-sbf, or (2) direct LLVM-to-BPF compilation
- Mappings become PDAs automatically; account structs are derived from your type system
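To make the mapping-to-PDA step concrete, here is a minimal sketch of how a mapping access like `balanceOf[msg.sender]` could be lowered to PDA seeds: the mapping's name plus the serialized key. The function name and seed layout are illustrative assumptions, not SolScript's actual internals.

```rust
/// Hypothetical lowering: build the seed list for one mapping entry.
/// A Solana PDA is derived from a list of byte seeds, so a mapping
/// entry can be addressed by [mapping_name, key_bytes].
fn mapping_seeds(mapping_name: &str, key: &[u8; 32]) -> Vec<Vec<u8>> {
    vec![mapping_name.as_bytes().to_vec(), key.to_vec()]
}

fn main() {
    let key = [7u8; 32]; // stand-in for a user's 32-byte pubkey
    let seeds = mapping_seeds("balanceOf", &key);
    assert_eq!(seeds[0], b"balanceOf".to_vec());
    assert_eq!(seeds[1].len(), 32);
}
```

In a real program these seeds would be fed to Solana's program-derived-address routine (e.g. `Pubkey::find_program_address`) to locate the account holding that entry's value.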
What's supported:
- State variables, structs, arrays, nested mappings
- Events and custom errors
- Modifiers (inlined)
- Cross-program invocation (CPI)
- SPL Token operations
- msg.sender and block.timestamp equivalents
Current limitations:
- No msg.value for incoming SOL (use wrapped SOL or explicit transfers)
- No Token-2022 support yet (planned for v0.4)
- Modifiers are inlined, so keep them small
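The "keep modifiers small" point follows from inlining: a modifier's body is copied into every function that uses it rather than emitted once and called. A hypothetical illustration (the `Token`/`set_paused` names are made up for this sketch, not generated SolScript output):

```rust
// Conceptual model of modifier inlining: the body of an
// `onlyOwner`-style modifier is duplicated into each guarded
// function, so a large modifier bloats every function it guards.

struct Token {
    owner: u64,
    paused: bool,
}

impl Token {
    fn set_paused(&mut self, caller: u64, paused: bool) -> Result<(), &'static str> {
        // inlined body of the `onlyOwner` modifier
        if caller != self.owner {
            return Err("not owner");
        }
        self.paused = paused;
        Ok(())
    }
}

fn main() {
    let mut t = Token { owner: 1, paused: false };
    assert!(t.set_paused(1, true).is_ok());
    assert_eq!(t.set_paused(2, false), Err("not owner"));
}
```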
The output is standard Anchor/Rust code. You can eject anytime and continue in pure Rust. It's a launchpad, not a lock-in.
Written in Rust. Ships with a VS Code extension (LSP, syntax highlighting, go-to-definition, autocomplete).
Install: cargo install solscript-cli
Repo: https://github.com/cryptuon/solscript
I'd love feedback on the language design, the compilation approach, or use cases I haven't thought of. Happy to answer questions about the internals.
Current research tools often struggle with accurately representing source material. They frequently introduce factual errors, misattribute quotes, present opinions as facts, and fail to provide sufficient context.
Many AI assistants also add their own interpretations or editorializing that wasn't present in the original sources.
Unzoi takes a different approach by:
- Extracting only information that directly answers the query
- Maintaining critical context to prevent misunderstandings
- Clearly differentiating between facts and opinions from sources
- Avoiding the introduction of unsourced claims or commentary
- Preserving the integrity of quotes without alteration
The tool is particularly useful for researching complex topics where accuracy is essential and misrepresentation could be harmful.
Some example queries:
- Video games with highest learning curves for new players: https://www.unzoi.com/query/video-games-have-highest-learnin...
- Setting up model context protocol servers with PostgreSQL: https://www.unzoi.com/query/set-up-model-context-protocol-se...
- Eligibility criteria for assisted dying in the UK: https://www.unzoi.com/query/eligibility-criteria-assisted-dy...
I'd appreciate feedback from the HN community on both the concept and implementation.
How do you currently handle these challenges in your research workflows?