threshold-mod10

Trivial case: computes Hamming weight (HW) mod 10 for 8-bit inputs. Since the maximum HW of an 8-bit input is 8 < 10, the output is simply HW.

Circuit

  xβ‚€ x₁ xβ‚‚ x₃ xβ‚„ xβ‚… x₆ x₇
   β”‚  β”‚  β”‚  β”‚  β”‚  β”‚  β”‚  β”‚
   β”‚  β”‚  β”‚  β”‚  β”‚  β”‚  β”‚  β”‚
   w: 1  1  1  1  1  1  1  1
   β””β”€β”€β”΄β”€β”€β”΄β”€β”€β”΄β”€β”€β”Όβ”€β”€β”΄β”€β”€β”΄β”€β”€β”΄β”€β”€β”˜
               β–Ό
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ b:  0   β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
               β–Ό
       HW (= HW mod 10)

Why Trivial?

For mod m where m exceeds the number of inputs, the weighted sum can never reach m, so no modular wraparound ever occurs:

  • 8 inputs β†’ max HW = 8
  • 8 mod 10 = 8 (no wraparound)
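
The claim above can be checked exhaustively in plain Python (a minimal sketch; `hamming_weight` is a helper introduced here, not part of the repo):

```python
def hamming_weight(x: int) -> int:
    """Count the set bits of an integer."""
    return bin(x).count("1")

# Every 8-bit value has HW ≀ 8 < 10, so the modulus never fires.
assert all(hamming_weight(x) % 10 == hamming_weight(x) for x in range(256))
```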

Parameters

Weights [1, 1, 1, 1, 1, 1, 1, 1]
Bias 0
Total 9 parameters

Usage

from safetensors.torch import load_file
import torch

w = load_file('model.safetensors')

def mod10(bits):  # identical to plain HW for 8-bit inputs
    inputs = torch.tensor([float(b) for b in bits])
    return int((inputs * w['weight']).sum() + w['bias'])
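
Because the listed parameters are all-ones weights and a zero bias, the model can also be reproduced without loading the checkpoint. A minimal pure-Python sketch (assuming the Parameters section above; `mod10_from_params` is a hypothetical name, not a repo function):

```python
# Parameters as documented in the model card: weights = [1]*8, bias = 0.
WEIGHTS = [1] * 8
BIAS = 0

def mod10_from_params(bits):
    # Dot product with all-ones weights plus zero bias = Hamming weight.
    return sum(w * int(b) for w, b in zip(WEIGHTS, bits)) + BIAS

print(mod10_from_params("10110011"))  # β†’ 5
```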

Files

threshold-mod10/
β”œβ”€β”€ model.safetensors
β”œβ”€β”€ model.py
β”œβ”€β”€ config.json
└── README.md

License

MIT
