I’ll chip in, ignore if you want…
Altering the PoW without knowing how solvers are implemented in hardware is foolish. So where is Equihash weak from a hardware guy's viewpoint?
1. Memory footprint.
As shown by most solvers, you don't have to solve over the entire space. The SW guys use buckets and solve within each bucket; the HW guys generate hashes quickly enough that they can just throw away hashes, leaving a small subset (a single bucket, if you will) to solve. If you have external DDR memory then you can stream and partition the hashes and later run the buckets into the sorting stage.
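To make the "throw away hashes" idea concrete, here's a toy sketch (my own illustration, not anyone's actual solver): regenerate the hashes cheaply and retain only those falling in the bucket currently being solved. The `hash20`/`stream_bucket` names and the 20-bit/8-bit split are made up for the example; BLAKE2b just stands in for Equihash's hash.

```python
# Toy model of bucketed solving: hash fast, discard everything outside
# the current bucket, so only ~1/2**BUCKET_BITS of the space is held
# in memory at once. All names and sizes here are illustrative.
from hashlib import blake2b

BUCKET_BITS = 8          # 256 buckets; HW might stream these from DDR

def hash20(nonce: int, index: int) -> int:
    """Toy 20-bit hash of item `index` under `nonce` (BLAKE2b stand-in)."""
    h = blake2b(index.to_bytes(4, "little"), digest_size=4,
                key=nonce.to_bytes(8, "little"))
    return int.from_bytes(h.digest(), "little") & 0xFFFFF  # keep 20 bits

def bucket_of(value: int) -> int:
    """Bucket = top BUCKET_BITS of the hash value."""
    return value >> (20 - BUCKET_BITS)

def stream_bucket(nonce: int, n_items: int, bucket: int):
    """Regenerate all hashes but keep only the current bucket's items.

    Trades hashing throughput for memory footprint: at any moment only
    (value, index) pairs for one bucket need to sit in front of the sorter.
    """
    for i in range(n_items):
        v = hash20(nonce, i)
        if bucket_of(v) == bucket:
            yield (v, i)   # value plus index bits, as fed to the sort stage
```

Solving then proceeds one bucket at a time; collisions can only occur between items sharing a bucket prefix, so nothing is lost by the partition.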
2. Memory bandwidth.
Memory operations are expensive. In hardware you don't need them (traditional memory accesses, anyway); you can sort using dataflow techniques. In a small low-grade FPGA you can sort a few thousand 20-bit numbers with index bits in ~41 cycles, and you can even XOR them at the output while you wait for the index information to flow through the sorter. All you need to do is load in 6 kbytes of data and read the matching index information a few hundred ns later.
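For readers who haven't seen a dataflow sorter: the classic example is a bitonic sorting network, where the compare-exchange schedule is fixed and data-independent, so each stage maps to one pipeline stage in an FPGA with no RAM accesses at all. Below is a software model of that network (my illustration; I'm not claiming this is the exact structure or cycle count described above), sorting (value, index) pairs as they'd be fed to the collision stage:

```python
# Software model of a bitonic sorting network: a fixed, data-independent
# pattern of compare-exchange operations. In hardware each (k, j) step
# becomes a pipeline stage of parallel comparators -- no memory reads.
def bitonic_sort(items):
    """Sort (value, index) pairs ascending. len(items) must be a power of two."""
    a = list(items)
    n = len(a)
    assert n & (n - 1) == 0, "network size must be a power of two"
    k = 2
    while k <= n:           # size of the bitonic runs being merged
        j = k // 2
        while j > 0:        # compare distance within this merge step
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    # direction alternates with runs of length k
                    if ((i & k) == 0 and a[i] > a[partner]) or \
                       ((i & k) != 0 and a[i] < a[partner]):
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```

Because the comparator wiring never depends on the data, the latency is constant, which is what makes "read the matching index information a few hundred ns later" possible.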
This is the easy stuff, there are plenty of other ways you can increase the effective hash rate in hardware which just don’t make sense in software.
The best way to address hardware miners, IMO, is to update the protocol to make n,k variable on a per-round basis as a type of difficulty parameter. This would force hardware to become "generic" by removing a lot of the nice static optimisation tricks, thus defeating hardware (at least for a while longer). If you've looked at the ePic thread you'll soon realise they want money to address the n,k issue (i.e. a good hint at how to defeat hardware).
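To illustrate why variable n,k hurts static designs, here's a back-of-envelope sketch (my own, assuming the commonly quoted figure of roughly 2^(n/(k+1)+1) list entries for Equihash(n,k); treat the constants as approximate): the working-set size swings by orders of magnitude as n,k move, so buffer widths, sorter sizes, and memory partitioning can no longer be hard-wired.

```python
# Back-of-envelope working-set estimate for Equihash(n, k).
# Assumes ~2**(n/(k+1) + 1) list entries, each holding an n-bit hash
# plus its index bits -- approximate figures, not a solver.
def equihash_list_length(n: int, k: int) -> int:
    assert n % (k + 1) == 0, "n must be divisible by k+1"
    return 2 ** (n // (k + 1) + 1)

def working_set_bytes(n: int, k: int) -> int:
    """Rough bytes needed for the initial list of (hash, index) entries."""
    index_bits = n // (k + 1) + 1          # enough to index the list
    entry_bits = n + index_bits
    return equihash_list_length(n, k) * entry_bits // 8

# e.g. compare the Zcash parameters (200, 9) against (144, 5):
# the list length alone jumps from 2**21 to 2**25 entries.
```

A fixed-function chip sized for one (n, k) point either wastes silicon or simply can't run the others, which is exactly the "generic hardware" penalty being argued for.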