ShiYu
67b630e67f
Merge pull request #243 from ElhamDevelopmentStudio/fix/batch-dimension-training
fix: preserve batch dimension in tokenizer and predictor training
2026-04-13 20:38:49 +08:00
ShiYu
edee1e5bef
Merge pull request #234 from Nikhil1869/master
Fix data leakage in normalization window (#227)
2026-04-13 20:36:38 +08:00
ShiYu
22140cdebd
Merge pull request #232 from kuishou68/fix/issue-231-sample-from-logits-topk-bug
fix: use torch.topk instead of calling the top_k int parameter as a function in sample_from_logits
2026-04-13 20:36:00 +08:00
ShiYu
05b75b6384
Merge pull request #224 from randyy179/codex/fix-webui-python312-deps
Fixes #223: remove incompatible numpy pin from webui requirements
2026-04-13 20:35:21 +08:00
ShiYu
29491b0f11
Merge pull request #139 from billconan2017/my-feature-branch
Modify content
2026-04-13 20:35:03 +08:00
Elhamullah Hossaini
8ca282123c
fix: preserve batch dimension in tokenizer and predictor training
2026-04-13 13:53:18 +04:30
Nikhil1869
79d6d403d8
Fix data leakage in normalization window (#227)
2026-04-11 12:01:04 +05:30
cocoon
fde8f60a0d
fix: use torch.topk instead of calling the top_k parameter as a function in sample_from_logits
In sample_from_logits(), the branch for sample_logits=False incorrectly
called 'top_k' as if it were a function (top_k(probs, k=1, dim=-1)),
but 'top_k' is an integer parameter in this scope. This raises:
TypeError: 'int' object is not callable
Fix: replace it with torch.topk(probs, k=1, dim=-1), the correct
PyTorch API for greedy (argmax) token selection.
Also fix a docstring typo in decode_s2(): 'torch.torch.Tensor' -> 'torch.Tensor'.
Closes #231
2026-04-09 16:03:01 +00:00
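The bug described in the commit above can be illustrated with a minimal sketch (the function and variable names here are illustrative, not the repository's actual sample_from_logits code):

```python
import torch

def greedy_pick(probs: torch.Tensor) -> torch.Tensor:
    # In the buggy branch, the int parameter `top_k` was called like a
    # function, raising: TypeError: 'int' object is not callable.
    # torch.topk is the correct API for greedy (argmax) selection:
    values, indices = torch.topk(probs, k=1, dim=-1)
    return indices

print(greedy_pick(torch.tensor([[0.1, 0.7, 0.2]])))  # tensor([[1]])
```

torch.topk returns a (values, indices) named tuple; with k=1 along the last dimension the indices tensor is equivalent to an argmax, which is what the greedy path needs.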
Shanren Yang
8c08af6e5f
fix: relax numpy requirement in webui dependencies
2026-03-09 20:41:36 -07:00
Shanren Yang
a7d0d235fa
fix: remove incompatible numpy pin from webui requirements
2026-03-09 20:27:39 -07:00
ShiYu
d5ffd46ab0
Merge pull request #211 from alexliao/fix-getting-started
Auto-detect device for easier getting started
2026-01-02 22:16:31 +08:00
Alex Liao
369bc0a70e
Auto-detect device for easier getting started
2025-12-20 11:52:32 -08:00
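A minimal sketch of the device auto-detection pattern this change describes (an assumed implementation, not necessarily the exact code merged):

```python
import torch

def auto_device() -> str:
    # Pick the best available backend: CUDA first, then Apple MPS, else CPU.
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(auto_device())
```

Auto-detecting the device lets the getting-started example run unchanged on GPU and CPU-only machines.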
ShiYu
391bfab26e
Update news
2025-11-10 09:26:19 +08:00
ShiYu
1abb87ee7c
Merge pull request #182 from phaoer/feature/cn-markets
Add examples from China’s A-share market
2025-11-05 17:48:37 +08:00
ShiYu
ab5ef4b5ce
Merge pull request #174 from AnMakc/master
Improve inference throughput
2025-11-05 17:48:13 +08:00
phaoer
a46b680413
feat: China A-share market
2025-11-04 17:51:54 +08:00
Maxim
b62f780de2
Refactor auto_regressive_inference to reduce memory allocations and CPU-GPU syncs.
2025-10-29 17:37:05 -03:00
ShiYu
788bc6f760
Merge pull request #173 from AnMakc/regression_test
Add regression tests
2025-10-29 23:20:55 +08:00
Maxim
64569135b2
Remove unnecessary CUDA cache clearing; no memory leak is reproducible now.
2025-10-29 12:04:43 -03:00
Maxim
21eb36afc7
Do not collect tokenizer metrics during inference
2025-10-29 12:04:43 -03:00
Maxim
dccfa764fc
Use the PyTorch SDPA implementation
2025-10-29 12:04:43 -03:00
Maxim
94f3212f4c
Parametrize ctx len for the MSE regression test.
2025-10-29 11:46:56 -03:00
Maxim
a2d8a27e4e
Add regression tests
2025-10-29 10:45:23 -03:00
ShiYu
eeb3168f71
Merge pull request #167 from SoYuCry/master
fix: define missing split_token in HierarchicalEmbedding
2025-10-26 20:30:18 +08:00
YuCry
a7e294cc56
fix: define missing split_token in HierarchicalEmbedding
2025-10-25 23:37:30 +08:00
ShiYu
a5f5aba12d
Merge pull request #152 from RahulPatel2727/master
Update README.md
2025-10-19 16:25:33 +08:00
Rahul Patel
b4e24f3e1b
Update README.md
2025-10-18 10:25:56 +05:30
ShiYu
082ab7ef62
Merge pull request #138 from Luciferbobo/master
add CSV-based finetuning pipeline for Kronos models
2025-10-12 17:06:30 +08:00
billconan2017
cf9744a46b
Modify content
2025-10-10 16:32:17 +08:00
zhangboyu1
7f658d9672
update config & readme
2025-10-09 18:14:04 +08:00
zhangboyu1
166b4162fb
add example data
2025-10-09 17:36:56 +08:00
zhangboyu1
a8df339586
update readme
2025-10-09 16:35:59 +08:00
zhangboyu1
38b5176cb7
update readme & figs
2025-10-09 16:24:35 +08:00
BOBO
2f7d3484cf
Update README.md
2025-10-09 16:04:52 +08:00
BOBO
a50b425863
Update README.md
2025-10-09 16:00:01 +08:00
zhangboyu1
814a5edb42
add vis figs
2025-10-09 15:53:50 +08:00
BOBO
c5eeb50a99
Update README.md
2025-10-09 15:50:25 +08:00
zhangboyu1
84f74ae341
add custom data finetune
2025-10-09 15:48:39 +08:00
ShiYu
083294bd84
Merge pull request #93 from ArtificialZeng/patch-1
Fix wrong expressions and typos
2025-09-16 10:36:13 +08:00
ShiYu
87157161d4
Bug fix
2025-09-16 10:35:14 +08:00
ShiYu
ac69e16750
Bug fix
2025-09-16 10:32:03 +08:00
Dr. Artificial曾小健
c05233c065
Fix wrong expressions and typos
2025-09-15 16:05:04 +08:00
ShiYu
f82acd69bd
Refactor qlib_test.py
2025-09-10 21:37:57 +08:00
ShiYu
764913b7d0
Refactor dataset and backtesting logic in qlib_test.py
Refactor QlibTestDataset and QlibBacktest classes for improved structure and readability. Update inference logic and main execution flow.
2025-09-10 21:36:27 +08:00
ShiYu
a4c14cb094
Merge pull request #63 from dowithless/patch-1
doc: add links to translated README versions
2025-09-03 21:57:07 +08:00
neo
ed9581b8f4
doc: add links to translated README versions
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, Russian, and Chinese.
2025-09-03 17:53:03 +08:00
ShiYu
2d1a1ae809
Merge pull request #59 from pengxiao-song/dev_empty_cuda_cache
fix: add torch.cuda.empty_cache() during autoregressive inference
2025-09-02 10:36:56 +08:00
Pengxiao Song
e027051b38
fix: add torch.cuda.empty_cache() during autoregressive inference
Without releasing cached GPU memory, usage will keep growing during autoregressive prediction, leading to significant memory increase or OOM. Calling torch.cuda.empty_cache() prevents this accumulation.
2025-09-02 10:26:27 +08:00
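The pattern this commit describes can be sketched as follows (the loop body and names are illustrative, not the repository's actual inference code):

```python
import torch

def autoregressive_predict(model, tokens: torch.Tensor, steps: int) -> torch.Tensor:
    # Append one predicted step at a time.
    for _ in range(steps):
        with torch.no_grad():
            next_token = model(tokens)[..., -1:]
        tokens = torch.cat([tokens, next_token], dim=-1)
        if torch.cuda.is_available():
            # Release cached allocator blocks so GPU memory usage
            # does not keep growing across iterations.
            torch.cuda.empty_cache()
    return tokens
```

Note that torch.cuda.empty_cache() releases cached blocks back to the driver at the cost of re-allocation overhead; a later commit (64569135b2) removed this call once the leak was no longer reproducible.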
ShiYu
939986adb1
Merge pull request #53 from QuantML-C/master
add batch prediction
2025-09-01 21:27:10 +08:00
quant
38a643b761
update kronos model code
2025-09-01 21:22:27 +08:00