@mdo, in your code you have:

```python
rr = torchsort.soft_rank(pred, regularization_strength=regularization_strength)
# change pred to a uniform distribution
pred = (rr - .5) / rr.shape[1]
```
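To illustrate what I mean (this is my own snippet, not from your code): applying the same transform to hard 0-based vs 1-based ranks shows which convention lands cleanly inside (0, 1):

```python
import torch

# Illustrative only: hard ranks under the two conventions, for n = 4.
ranks_one_based = torch.tensor([[1., 2., 3., 4.]])
ranks_zero_based = torch.tensor([[0., 1., 2., 3.]])
n = ranks_one_based.shape[1]

print((ranks_one_based - .5) / n)   # 0.125, 0.375, 0.625, 0.875 -- all inside (0, 1)
print((ranks_zero_based - .5) / n)  # -0.125, 0.125, 0.375, 0.625 -- includes a negative value
```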
However, this assumes that `rr` contains ranks running from 0 to size-1. After installing torchsort and trying a few inputs, I was surprised to see that `soft_rank` returns a ranking that does not necessarily start at 0.
Check the following tests:

```python
import pytest
import torch
from torchsort import soft_rank, soft_sort


def test_less_than_one_numbers():
    z = torch.tensor([[0.4385, 0.4385, 0.4385, 0.5649]])
    ranked = soft_rank(z)
    print(ranked)
    assert ranked.min() == 0


def test_bigger_than_one_numbers():
    z = torch.tensor([[5000, 10, 20, 34]])
    ranked = soft_rank(z)
    print(ranked)
    assert ranked.min() == 0

    ranked = soft_rank(torch.tensor([[5000, 5000, 10, 20, 5000, 34, 10, 20, 34]]))
    print(ranked)
    assert ranked.min() == 0


def test_mix_big_small_numbers():
    z = torch.tensor([[5000, 10, 0.01, 0.4385, 0.5649, 20, 34]])
    ranked = soft_rank(z)
    print(ranked)
    assert ranked.min() == 0
```
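For comparison (again my own snippet, independent of torchsort): hard ranks computed with a double `argsort` are 0-based, which is the convention the assertions above expect:

```python
import torch

z = torch.tensor([[5000., 10., 20., 34.]])
# Double argsort gives hard (non-differentiable) ranks; these start at 0.
hard_ranks = z.argsort(dim=1).argsort(dim=1)
print(hard_ranks)      # tensor([[3, 0, 1, 2]])
print(hard_ranks + 1)  # the same ranks shifted to a 1-based convention
```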
This makes the correlation unreliable, I think. Can you tell me exactly which library you used for torchsort? I'm using torchsort · PyPI for these tests.
Also, any help understanding `pred = (rr - .5) / rr.shape[1]` and all of the above is greatly appreciated.
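In case it helps the discussion: one way I found to make the uniform transform independent of where the ranks start (a sketch of mine, not the original code) is to shift by the per-row minimum first:

```python
import torch

def to_uniform(rr: torch.Tensor) -> torch.Tensor:
    """Map (soft) ranks to (0, 1) whether they are 0-based or 1-based.

    Sketch only: shifts each row so its smallest rank becomes 0, then applies
    the midpoint transform, matching (rr - 0.5) / n for 1-based ranks.
    """
    n = rr.shape[1]
    rr0 = rr - rr.min(dim=1, keepdim=True).values  # now starts at 0
    return (rr0 + 0.5) / n

one_based = torch.tensor([[4., 1., 2., 3.]])
zero_based = torch.tensor([[3., 0., 1., 2.]])
print(to_uniform(one_based))
print(to_uniform(zero_based))  # same result: the rank offset no longer matters
```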