---
library_name: mlx
tags:
- translation
- mlx
- mlx-my-repo
language:
- zh
- en
- fr
- pt
- es
- ja
- tr
- ru
- ar
- ko
- th
- it
- de
- vi
- ms
- id
- tl
- hi
- pl
- cs
- nl
- km
- my
- fa
- gu
- ur
- te
- mr
- he
- bn
- ta
- uk
- bo
- kk
- mn
- ug
pipeline_tag: text-generation
base_model: shawizir/Hunyuan-MT-7B-mlx-4bit
---

# introvoyz041/Hunyuan-MT-7B-mlx-4bit-mlx-4Bit

The model [introvoyz041/Hunyuan-MT-7B-mlx-4bit-mlx-4Bit](https://huggingface.co/introvoyz041/Hunyuan-MT-7B-mlx-4bit-mlx-4Bit) was converted to MLX format from [shawizir/Hunyuan-MT-7B-mlx-4bit](https://huggingface.co/shawizir/Hunyuan-MT-7B-mlx-4bit) using mlx-lm version **0.28.3**.

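If you want to reproduce a conversion like this one yourself, mlx-lm ships a conversion utility. The snippet below is a minimal sketch, assuming the `mlx_lm.convert` Python entry point and its `hf_path` / `mlx_path` / `quantize` / `q_bits` arguments in your installed version; argument names can shift between releases, so check the CLI help of your mlx-lm install for the exact options.

```python
# Hedged sketch: convert a Hugging Face checkpoint to MLX with 4-bit quantization.
# Exact keyword names may vary across mlx-lm versions.
from mlx_lm import convert

convert(
    hf_path="shawizir/Hunyuan-MT-7B-mlx-4bit",   # source repo on the Hub
    mlx_path="Hunyuan-MT-7B-mlx-4bit-mlx-4Bit",  # local output directory
    quantize=True,                               # quantize weights during conversion
    q_bits=4,                                    # 4-bit weights
)
```
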
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub (or a local path).
model, tokenizer = load("introvoyz041/Hunyuan-MT-7B-mlx-4bit-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
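
Since Hunyuan-MT is a translation model, the user message is typically phrased as a translation instruction. The snippet below is a small sketch built on the same `load` / `generate` API shown above; the instruction wording and the `max_tokens` value are illustrative assumptions, not a prompt format prescribed by the upstream model card.

```python
from mlx_lm import load, generate

model, tokenizer = load("introvoyz041/Hunyuan-MT-7B-mlx-4bit-mlx-4Bit")

# Illustrative translation-style request; adjust to whatever prompt format
# the upstream Hunyuan-MT card recommends for your language pair.
messages = [
    {"role": "user", "content": "Translate the following text into French:\n\nhello, how are you?"}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```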