question:write another 100

answer:101. Reflexology practitioners 102. Dance therapists 103. Self-help book authors 104. Mindful eating coaches 105. Grounding technique experts 106. Hydrotherapy providers 107. Japanese Zen garden enthusiasts 108. Sleep-inducing food experts 109. Mindful movement trainers 110. Workplace wellness consultants 111. Biofeedback therapists 112. Music therapists 113. Digital detox advocates 114. Sensory deprivation tank centers 115. Personal growth speakers 116. Anti-anxiety wearable technology developers 117. Light therapy providers 118. Positive psychology researchers 119. Affirmation card creators 120. Meridian therapy specialists 121. Shiatsu massage practitioners 122. Anxiety-relief toy retailers 123. Sleepwear and loungewear brands 124. Candle-making hobbyists 125. Watsu (water-based shiatsu) practitioners 126. Tranquil photography accounts 127. Serene movie and TV show enthusiasts 128. Emotional Freedom Technique (EFT) practitioners 129. Mindfulness coloring book creators 130. Adult playground advocates 131. Pet therapy organizations 132. Mood-supporting nutrition counselors 133. Grounding (Earthing) advocates 134. Origami enthusiasts 135. Memory foam pillow and mattress manufacturers 136. Hatha yoga instructors 137. Wellness subscription box curators 138. DIY spa treatment recipes 139. Expressive writing guides 140. Frustration-release activities (paintball, smashing objects, etc.) 141. Sunset and sunrise chasers 142. Serotonin-boosting food bloggers 143. Contemplative prayer practitioners 144. Mindful language-learning instructors 145. Positivity-promoting social experimenters 146. Miniature zen garden DIY-ers 147. Nature documentary creators 148. Soothing ASMR channels 149. Positive thinking coaches 150. Immersive relaxation event organizers 151. Weighted blanket manufacturers 152. Healthy comfort food chefs 153. Calming pet video accounts 154. Cultivating kindness instructors 155. Uplifting short film creators 156. Sensory room designers 157. Campfire enthusiasts 158. Rain sounds and ambient noise creators 159. Lymphatic drainage massage therapists 160. Silent retreat centers 161. Heartfulness meditation practitioners 162. Pilates instructors promoting relaxation 163. Nostalgia-evoking accounts 164. Rest and relaxation resort reviewers 165. Therapeutic art class organizers 166. Forest bathing guides 167. Natural anxiety remedies bloggers 168. Luxury bedding retailers 169. Wellness-focused interior designers 170. Breathing app developers 171. Inspirational Instagram accounts 172. Stress ball and fidget toy makers 173. Chair massage providers 174. Soft and comforting textiles manufacturers 175. Handwriting and calligraphy artists 176. Hammock and hanging chair retailers 177. Focus and relaxation-promoting supplements 178. Non-violent video game developers 179. Singing bowl practitioners 180. Mono-tasking enthusiasts 181. Empathy-promoting projects 182. Mindful journaling guides 183. Lectures and workshops on relaxation techniques 184. Destination getaways with relaxation themes 185. Body scan meditation instructors 186. Box breathing advocates 187. Laughing meditation promoters 188. Ethical stress relief product retailers 189. Relaxing nature soundscape creators 190. Guided nature walks organizers 191. Unplugging and tech-free relaxation advocates 192. Fostering empathy experts 193. Birdwatching and bird-sound enthusiasts 194. Complementary medicine practitioners 195. Support groups for work-related stress 196. Frequent mental health awareness guest speakers 197.
Guided stretching video creators 198. Relaxing board game creators 199. International wellbeing and relaxation experts 200. Sleep optimization gadget developers

question:I can only change the export python script, i need it to split model files in two files consolidated.00.pth consolidated.01.pth with the correct layers size: this is the llama_model_function: static bool llama_model_load( const std::string & fname, llama_context & lctx, int n_ctx, int n_parts, ggml_type memory_type, bool vocab_only, llama_progress_callback progress_callback, void progress_callback_user_data) { fprintf(stderr, “%s: loading model from ‘%s’ - please wait …n”, func, fname.c_str()); lctx.t_start_us = ggml_time_us(); auto & model = lctx.model; auto & vocab = lctx.vocab; auto fin = std::ifstream(fname, std::ios::binary); if (!fin) { fprintf(stderr, “%s: failed to open ‘%s’n”, func, fname.c_str()); return false; } std::vector<char> f_buf(10241024); fin.rdbuf()->pubsetbuf(f_buf.data(), f_buf.size()); fin.seekg(0, fin.end); const size_t file_size = fin.tellg(); fin.seekg(0); // verify magic { uint32_t magic; fin.read((char *) &magic, sizeof(magic)); if (magic == LLAMA_FILE_MAGIC_UNVERSIONED) { fprintf(stderr, “%s: invalid model file ‘%s’ (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)n”, func, fname.c_str()); return false; } if (magic != LLAMA_FILE_MAGIC) { return report_bad_magic(fname.c_str(), magic, LLAMA_FILE_MAGIC); } uint32_t format_version; fin.read((char *) &format_version, sizeof(format_version)); if (format_version != LLAMA_FILE_VERSION) { fprintf(stderr, “%s: invalid model file ‘%s’ (unsupported format version %” PRIu32 “, expected %d)n”, func, fname.c_str(), format_version, LLAMA_FILE_VERSION); return false; } } int n_ff = 0; // load hparams { auto & hparams = model.hparams; fin.read((char *) &hparams.n_vocab, sizeof(hparams.n_vocab)); //fin.read((char *) &hparams.n_ctx, sizeof(hparams.n_ctx)); fin.read((char *) &hparams.n_embd, sizeof(hparams.n_embd)); fin.read((char ) &hparams.n_mult, sizeof(hparams.n_mult)); fin.read((char ) &hparams.n_head, sizeof(hparams.n_head)); fin.read((char ) &hparams.n_layer, sizeof(hparams.n_layer)); fin.read((char ) &hparams.n_rot, sizeof(hparams.n_rot)); fin.read((char ) &hparams.f16, sizeof(hparams.f16)); hparams.n_ctx = n_ctx; n_ff = ((2(4hparams.n_embd)/3 + hparams.n_mult - 1)/hparams.n_mult)hparams.n_mult; if (n_parts < 1) { n_parts = LLAMA_N_PARTS.at(hparams.n_embd); } // temp warning to tell the user to use “–n_parts” if (hparams.f16 == 4 && n_parts != 1) { fprintf(stderr, “%s: GPTQ model detected - are you sure n_parts should be %d? 
we normally expect it to be 1n”, func, n_parts); fprintf(stderr, “%s: use ‘–n_parts 1’ if necessaryn”, func); } if (hparams.n_layer == 32) { model.type = e_model::MODEL_7B; } if (hparams.n_layer == 40) { model.type = e_model::MODEL_13B; } if (hparams.n_layer == 60) { model.type = e_model::MODEL_30B; } if (hparams.n_layer == 80) { model.type = e_model::MODEL_65B; } fprintf(stderr, “%s: n_vocab = %dn”, func, hparams.n_vocab); fprintf(stderr, “%s: n_ctx = %dn”, func, hparams.n_ctx); fprintf(stderr, “%s: n_embd = %dn”, func, hparams.n_embd); fprintf(stderr, “%s: n_mult = %dn”, func, hparams.n_mult); fprintf(stderr, “%s: n_head = %dn”, func, hparams.n_head); fprintf(stderr, “%s: n_layer = %dn”, func, hparams.n_layer); fprintf(stderr, “%s: n_rot = %dn”, func, hparams.n_rot); fprintf(stderr, “%s: f16 = %dn”, func, hparams.f16); fprintf(stderr, “%s: n_ff = %dn”, func, n_ff); fprintf(stderr, “%s: n_parts = %dn”, func, n_parts); fprintf(stderr, “%s: type = %dn”, func, model.type); } // load vocab { std::string word; vocab.id_to_token.resize(model.hparams.n_vocab); std::vector<char> tmp(64); for (int i = 0; i < model.hparams.n_vocab; i++) { uint32_t len; fin.read((char ) &len, sizeof(len)); word.resize(len); if (len > 0) { tmp.resize(len); fin.read(tmp.data(), len); word.assign(tmp.data(), len); } else { word.clear(); } float score; fin.read((char ) &score, sizeof(score)); vocab.token_to_id[word] = i; auto &tok_score = vocab.id_to_token[i]; tok_score.tok = word; tok_score.score = score; } } if (vocab_only) { return true; } // for the big tensors, we have the option to store the data in 16-bit floats or quantized // in order to save memory and also to speed up the computation // wtype is for per-layer weights, while vtype is for other weights ggml_type wtype, vtype; switch (model.hparams.f16) { case 0: wtype = vtype = GGML_TYPE_F32; break; case 1: wtype = vtype = GGML_TYPE_F16; break; case 2: wtype = vtype = GGML_TYPE_Q4_0; break; case 3: wtype = vtype = GGML_TYPE_Q4_1; break; case 4: wtype = GGML_TYPE_Q4_1; vtype = GGML_TYPE_F16; break; default: { fprintf(stderr, “%s: invalid model file ‘%s’ (bad f16 value %d)n”, func, fname.c_str(), model.hparams.f16); return false; } } // map model into memory char mm_addr = NULL; model.mm_addr = mmap_file(fname.c_str(), &model.mm_length); if (model.mm_addr == NULL) { fprintf(stderr, “%s: failed to mmap ‘%s’n”, func, fname.c_str()); return false; } mm_addr = (char )model.mm_addr; fprintf(stderr, “%s: ggml map size = %6.2f MBn”, func, model.mm_length/(1024.01024.0)); auto & ctx = model.ctx; size_t ctx_size = 0; { const auto &hparams = model.hparams; const int n_layer = hparams.n_layer; ctx_size += (5 + 10n_layer)256; // object overhead fprintf(stderr, “%s: ggml ctx size = %6.2f KBn”, func, ctx_size/1024.0); } // print memory requirements { const size_t scale = memory_type == GGML_TYPE_F32 ? 
2 : 1; // this is the total memory required to run the inference const size_t mem_required = ctx_size + model.mm_length + MEM_REQ_SCRATCH0.at(model.type) + MEM_REQ_SCRATCH1.at(model.type) + MEM_REQ_EVAL.at (model.type); // this is the memory required by one llama_state const size_t mem_required_state = scaleMEM_REQ_KV_SELF.at(model.type); fprintf(stderr, “%s: mem required = %7.2f MB (+ %7.2f MB per state)n”, func, mem_required / 1024.0 / 1024.0, mem_required_state / 1024.0 / 1024.0); } // create the ggml context { lctx.model.buf.resize(ctx_size); struct ggml_init_params params = { /.mem_size =/ lctx.model.buf.size(), /.mem_buffer =/ lctx.model.buf.data(), /.no_alloc =/ true, }; model.ctx = ggml_init(params); if (!model.ctx) { fprintf(stderr, “%s: ggml_init() failedn”, func); return false; } } // prepare memory for the weights { const auto & hparams = model.hparams; const int n_embd = hparams.n_embd; const int n_layer = hparams.n_layer; const int n_vocab = hparams.n_vocab; model.layers.resize(n_layer); model.tok_embeddings = ggml_new_tensor_2d(ctx, vtype, n_embd, n_vocab); model.norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); model.output = ggml_new_tensor_2d(ctx, vtype, n_embd, n_vocab); // map by name model.tensors[“tok_embeddings.weight”] = model.tok_embeddings; model.tensors[“norm.weight”] = model.norm; model.tensors[“output.weight”] = model.output; for (int i = 0; i < n_layer; ++i) { auto & layer = model.layers[i]; layer.attention_norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); layer.wq = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); layer.wk = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); layer.wv = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); layer.wo = ggml_new_tensor_2d(ctx, wtype, n_embd, n_embd); layer.ffn_norm = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, n_embd); layer.w1 = ggml_new_tensor_2d(ctx, wtype, n_embd, n_ff); layer.w2 = ggml_new_tensor_2d(ctx, wtype, n_ff, n_embd); layer.w3 = ggml_new_tensor_2d(ctx, wtype, n_embd, n_ff); // map by name model.tensors[“layers.” + std::to_string(i) + “.attention_norm.weight”] = layer.attention_norm; model.tensors[“layers.” + std::to_string(i) + “.attention.wq.weight”] = layer.wq; model.tensors[“layers.” + std::to_string(i) + “.attention.wk.weight”] = layer.wk; model.tensors[“layers.” + std::to_string(i) + “.attention.wv.weight”] = layer.wv; model.tensors[“layers.” + std::to_string(i) + “.attention.wo.weight”] = layer.wo; model.tensors[“layers.” + std::to_string(i) + “.ffn_norm.weight”] = layer.ffn_norm; model.tensors[“layers.” + std::to_string(i) + “.feed_forward.w1.weight”] = layer.w1; model.tensors[“layers.” + std::to_string(i) + “.feed_forward.w2.weight”] = layer.w2; model.tensors[“layers.” + std::to_string(i) + “.feed_forward.w3.weight”] = layer.w3; } } std::vector<uint8_t> tmp; if (progress_callback) { progress_callback(0.0, progress_callback_user_data); } fprintf(stderr, “%s: loading tensors from ‘%s’n”, func, fname.c_str()); // load weights { size_t total_size = 0; model.n_loaded = 0; while (true) { int32_t n_dims; int32_t length; int32_t ftype; fin.read(reinterpret_cast<char *>(&n_dims), sizeof(n_dims)); fin.read(reinterpret_cast<char *>(&length), sizeof(length)); fin.read(reinterpret_cast<char *>(&ftype), sizeof(ftype)); if (fin.eof()) { break; } int32_t nelements = 1; int32_t ne[2] = { 1, 1 }; for (int i = 0; i < n_dims; ++i) { fin.read(reinterpret_cast<char *>(&ne[i]), sizeof(ne[i])); nelements *= ne[i]; } std::string name(length, 0); fin.read(&name[0], length); if (model.tensors.find(name.data()) == 
model.tensors.end()) { fprintf(stderr, "%s: unknown tensor '%s' in model file\n", __func__, name.data()); return false; } auto tensor = model.tensors[name.data()]; if (ggml_nelements(tensor) != nelements) { fprintf(stderr, "%s: tensor '%s' has wrong size in model file\n", __func__, name.data()); return false; } if (tensor->ne[0] != ne[0] || tensor->ne[1] != ne[1]) { fprintf(stderr, "%s: tensor '%s' has wrong shape in model file: got [%" PRId64 ", %" PRId64 "], expected [%d, %d]\n", __func__, name.data(), tensor->ne[0], tensor->ne[1], ne[0], ne[1]); return false; } if (0) { static const char * ftype_str[] = { "f32", "f16", "q4_0", "q4_1", }; fprintf(stderr, "%24s - [%5d, %5d], type = %6s\n", name.data(), ne[0], ne[1], ftype_str[ftype]); } switch (ftype) { case 0: // f32 case 1: // f16 break; case 2: // q4_0 case 3: // q4_1 assert(ne[0] % 64 == 0); break; default: fprintf(stderr, "%s: unknown ftype %d in model file\n", __func__, ftype); return false; }; // load the tensor data into memory without copying or reading it size_t offset = fin.tellg(); size_t tensor_data_size = ggml_nbytes(tensor); offset = (offset + 31) & -32; tensor->data = mm_addr + offset; fin.seekg(offset + tensor_data_size); total_size += tensor_data_size; model.n_loaded++; // progress if (progress_callback) { double current_progress = size_t(fin.tellg()) / double(file_size); progress_callback(current_progress, progress_callback_user_data); } } fin.close(); fprintf(stderr, "%s: model size = %8.2f MB / num tensors = %d\n", __func__, total_size/1024.0/1024.0, model.n_loaded); if (model.n_loaded == 0) { fprintf(stderr, "%s: WARN no tensors loaded from model file - assuming empty model for testing\n", __func__); } else if (model.n_loaded != (int) model.tensors.size()) { fprintf(stderr, "%s: ERROR not all tensors loaded from model file - expected %zu, got %d\n", __func__, model.tensors.size(), model.n_loaded); return false; } } // loading time will be recalculated after the first eval, so // we take page faults deferred by mmap() into consideration lctx.t_load_us = ggml_time_us() - lctx.t_start_us; if (progress_callback) { progress_callback(1.0, progress_callback_user_data); } return true; }

Here is how the model is exported: #! /usr/bin/env python # coding=utf-8 """ Modified from: https://github.com/tloen/alpaca-lora """ import json import os import fire import torch from peft import PeftModel from transformers import LlamaForCausalLM, LlamaTokenizer CHECKPOINT_PARAMS = { "7b": {"dim": 4096, "multiple_of": 256, "n_heads": 32, "n_layers": 32, "norm_eps": 1e-06, "vocab_size": -1}, "13b": {"dim": 5120, "multiple_of": 256, "n_heads": 40, "n_layers": 40, "norm_eps": 1e-06, "vocab_size": -1}, "30b": {"dim": 6656, "multiple_of": 256, "n_heads": 52, "n_layers": 60, "norm_eps": 1e-06, "vocab_size": -1}, "65b": {"dim": 8192, "multiple_of": 256, "n_heads": 64, "n_layers": 80, "norm_eps": 1e-06, "vocab_size": -1}, } def main(base_model_name_or_path: str, lora_model_name_or_path: str, output_dir: str, checkpoint_size: str = "7b"): # Retrieve the model parameters params = CHECKPOINT_PARAMS.get(checkpoint_size) if params is None: raise ValueError( f"Cannot find the right model parameters for {checkpoint_size}. Please choose between {list(CHECKPOINT_PARAMS.keys())}." ) # tokenizer = LlamaTokenizer.from_pretrained(base_model_name_or_path) base_model = LlamaForCausalLM.from_pretrained( base_model_name_or_path, load_in_8bit=False, torch_dtype=torch.float16, device_map={"": "cpu"}, ) lora_model = PeftModel.from_pretrained( base_model, lora_model_name_or_path, device_map={"": "cpu"}, torch_dtype=torch.float16, ) # merge weights for layer in lora_model.base_model.model.model.layers: if hasattr(layer.self_attn.q_proj, "merge_weights"): layer.self_attn.q_proj.merge_weights = True if hasattr(layer.self_attn.v_proj, "merge_weights"): layer.self_attn.v_proj.merge_weights = True if hasattr(layer.self_attn.k_proj, "merge_weights"): layer.self_attn.k_proj.merge_weights = True if hasattr(layer.self_attn.o_proj, "merge_weights"): layer.self_attn.o_proj.merge_weights = True if hasattr(layer.mlp.gate_proj, "merge_weights"): layer.mlp.gate_proj.merge_weights = True if hasattr(layer.mlp.down_proj, "merge_weights"): layer.mlp.down_proj.merge_weights = True if hasattr(layer.mlp.up_proj, "merge_weights"): layer.mlp.up_proj.merge_weights = True lora_model.train(False) lora_model_sd = lora_model.state_dict() # params = { # "dim": 4096, # "multiple_of": 256, # "n_heads": 32, # "n_layers": 32, # "norm_eps": 1e-06, # "vocab_size": -1, # } n_layers = params["n_layers"] n_heads = params["n_heads"] dim = params["dim"] dims_per_head = dim // n_heads base = 10000.0 inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head)) def permute(w): return w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim) def unpermute(w): return w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim) def translate_state_dict_key(k): k = k.replace("base_model.model.", "") if k == "model.embed_tokens.weight": return "tok_embeddings.weight" elif k == "model.norm.weight": return "norm.weight" elif k == "lm_head.weight": return "output.weight" elif k.startswith("model.layers."): layer = k.split(".")[2] if k.endswith(".self_attn.q_proj.weight"): return f"layers.{layer}.attention.wq.weight" elif k.endswith(".self_attn.k_proj.weight"): return f"layers.{layer}.attention.wk.weight" elif k.endswith(".self_attn.v_proj.weight"): return f"layers.{layer}.attention.wv.weight" elif k.endswith(".self_attn.o_proj.weight"): return f"layers.{layer}.attention.wo.weight" elif k.endswith(".mlp.gate_proj.weight"): return f"layers.{layer}.feed_forward.w1.weight" elif k.endswith(".mlp.down_proj.weight"): return f"layers.{layer}.feed_forward.w2.weight" elif k.endswith(".mlp.up_proj.weight"): return f"layers.{layer}.feed_forward.w3.weight" elif k.endswith(".input_layernorm.weight"): return f"layers.{layer}.attention_norm.weight" elif k.endswith(".post_attention_layernorm.weight"): return f"layers.{layer}.ffn_norm.weight" elif k.endswith("rotary_emb.inv_freq") or "lora" in k: return None else: print(layer, k) raise NotImplementedError else: print(k) raise NotImplementedError new_state_dict = {} for k, v in lora_model_sd.items(): new_k = translate_state_dict_key(k) if new_k is not None: if "wq" in new_k or "wk" in new_k: new_state_dict[new_k] = unpermute(v) else: new_state_dict[new_k] = v os.makedirs(output_dir, exist_ok=True) # Split the tensors based on layer index part1_keys = [k for k in new_state_dict.keys() if not k.startswith("layers.") or int(k.split(".")[1]) < n_layers // 2] part2_keys = [k for k in new_state_dict.keys() if k not in part1_keys] state_dict_part1 = {k: new_state_dict[k] for k in part1_keys} state_dict_part2 = {k: new_state_dict[k] for k in part2_keys} torch.save(state_dict_part1, output_dir + "/consolidated.00.pth") torch.save(state_dict_part2, output_dir + "/consolidated.01.pth") with open(output_dir + "/params.json", "w") as f: json.dump(params, f) if __name__ == "__main__": fire.Fire(main)

Here is the problem I have when I run the inference: ./main -m ./models/13B/ggml-model-f16.bin -n 5000 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt -t 32 main: seed = 1681035697 llama_model_load: loading model from './models/13B/ggml-model-f16.bin' - please wait … llama_model_load: n_vocab = 32000 llama_model_load: n_ctx = 512 llama_model_load: n_embd = 5120 llama_model_load: n_mult = 256 llama_model_load: n_head = 40 llama_model_load: n_layer = 40 llama_model_load: n_rot = 128 llama_model_load: f16 = 1 llama_model_load: n_ff = 13824 llama_model_load: n_parts = 2 llama_model_load: type = 2 llama_model_load: ggml map size = 25138.72 MB llama_model_load: ggml ctx size = 101.25 KB llama_model_load: mem required = 27186.82 MB (+ 1608.00 MB per state) llama_model_load: loading tensors from './models/13B/ggml-model-f16.bin' llama_model_load: tensor 'layers.20.attention.wq.weight' has wrong size in model file llama_init_from_file: failed to load model main: error: failed to load model './models/13B/ggml-model-f16.bin'

answer:The failure happens while the tensors are being loaded. The message `llama_model_load: tensor 'layers.20.attention.wq.weight' has wrong size in model file` means the tensor stored in the converted file does not contain the number of elements the 13B graph expects (5120 x 5120 for `wq`). The most likely cause is the way the export script splits the state dict: it assigns whole layers to each file (the shared tensors plus layers below 20 go to `consolidated.00.pth`, the remaining layers to `consolidated.01.pth`), whereas a genuine two-part LLaMA checkpoint contains a slice of every large tensor in each part, split along a tensor-specific dimension exactly like the original Meta 13B files (only the small 1D norm weights are duplicated). When the two parts are later converted to ggml under that assumption, the first tensor that exists in only one file, `layers.20.attention.wq.weight`, comes out with the wrong size. Double-check the tensor dimensions in the exported files, compare them against an original two-part 13B checkpoint, and make sure the llama.cpp code, the conversion script, and this export script all use the same model configuration (dim 5120, 40 heads, 40 layers for 13B). As a side note, make sure the script's entry point reads `if __name__ == "__main__": fire.Fire(main)`, and that the model files exist at the paths you pass in.
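If the goal is for `consolidated.00.pth` and `consolidated.01.pth` to mimic the original two-part 13B checkpoint, each tensor has to be sharded across both files rather than assigned to one of them. Below is only a minimal sketch, not a drop-in fix: `split_dim` and `shard_state_dict` are illustrative helper names, and the split-dimension rules are assumptions based on the layout of the original LLaMA checkpoints, so verify them against whatever conversion script you feed the parts into.

```py
import torch

N_PARTS = 2  # consolidated.00.pth and consolidated.01.pth


def split_dim(name: str):
    """Dimension along which a tensor is sharded across parts, or None if it
    is copied whole into every part (1D norm weights).
    These rules mirror the original LLaMA checkpoint layout (assumption)."""
    if name == "tok_embeddings.weight":
        return 1
    if name == "output.weight":
        return 0
    if name.endswith(("attention.wo.weight", "feed_forward.w2.weight")):
        return 1
    if name.endswith(("attention.wq.weight", "attention.wk.weight",
                      "attention.wv.weight",
                      "feed_forward.w1.weight", "feed_forward.w3.weight")):
        return 0
    return None  # norm.weight, attention_norm.weight, ffn_norm.weight


def shard_state_dict(state_dict, n_parts=N_PARTS):
    """Split every tensor into n_parts slices (or replicate it) so that each
    returned dict looks like one part of a multi-part checkpoint."""
    parts = [{} for _ in range(n_parts)]
    for name, tensor in state_dict.items():
        dim = split_dim(name)
        if dim is None:
            # small tensors are duplicated in every part
            for part in parts:
                part[name] = tensor.clone()
        else:
            # equal slices along the chosen dimension, one per part
            for part, chunk in zip(parts, torch.chunk(tensor, n_parts, dim=dim)):
                part[name] = chunk.contiguous()
    return parts
```

In the export script this would replace the split-by-layer-index block, e.g. `for i, part in enumerate(shard_state_dict(new_state_dict)): torch.save(part, os.path.join(output_dir, f"consolidated.{i:02d}.pth"))`. You can sanity-check the result by loading each `.pth` and printing tensor shapes: every 2D weight should have exactly half the rows or half the columns of the merged tensor, and the norm weights should appear in both files.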

question:I created this VBA code in the French version of Microsoft Excel 2021 to build myself a dynamic filtering tool. The idea is that I can filter a database dynamically (the filter narrows as I type). Here is the script I use for the UserForm I created: Module code: Public Sub AfficherBoiteDeRecherche() ' Show the form RechercheForm.Show End Sub Code for the UserForm: Private Sub UserForm_Initialize() ' Populate ComboBoxTableau Dim ws As Worksheet Set ws = ThisWorkbook.ActiveSheet ' Only consider the active sheet Dim lo As ListObject For Each lo In ws.ListObjects Me.ComboBoxTableau.AddItem lo.Name ' Add the table names Next lo Me.ComboBoxTableau.ListIndex = 0 ' Automatically select the first table End Sub Private Sub ComboBoxTableau_Change() ' Populate ComboBoxColonne If Me.ComboBoxTableau.ListIndex > -1 Then Dim tblName As String tblName = Me.ComboBoxTableau.Text Dim ws As Worksheet Set ws = ThisWorkbook.ActiveSheet ' Only consider the active sheet Dim lo As ListObject Set lo = ws.ListObjects(tblName) Me.ComboBoxColonne.Clear Dim col As ListColumn For Each col In lo.ListColumns Me.ComboBoxColonne.AddItem col.Name Next col End If End Sub Private Function IsNumberColumn(rng As Range) As Boolean Dim cell As Range For Each cell In rng If Not IsEmpty(cell) Then If IsNumeric(cell.Value) Then IsNumberColumn = True Else IsNumberColumn = False End If Exit For End If Next cell End Function Private Sub TextBoxRecherche_Change() ' Filter based on the search text If Me.ComboBoxTableau.ListIndex > -1 And Me.ComboBoxColonne.ListIndex > -1 Then Dim tblName As String tblName = Me.ComboBoxTableau.Text Dim colName As String colName = Me.ComboBoxColonne.Text Dim ws As Worksheet Set ws = ThisWorkbook.ActiveSheet ' Only consider the active sheet Dim lo As ListObject Set lo = ws.ListObjects(tblName) Dim colRange As Range Set colRange = lo.ListColumns(colName).DataBodyRange If Me.TextBoxRecherche.Text <> "" Then If IsNumberColumn(colRange) Then lo.Range.AutoFilter Field:=lo.ListColumns(colName).Index, Criteria1:="=" & Me.TextBoxRecherche.Text Else lo.Range.AutoFilter Field:=lo.ListColumns(colName).Index, Criteria1:="=*" & Me.TextBoxRecherche.Text & "*" End If DoEvents ' Force Excel to refresh Application.ScreenUpdating = True Else lo.AutoFilter.ShowAllData End If End If End Sub Now I would like to improve this code. Observation: after I type in the search box (TextBox), the table I want to filter is filtered dynamically, as I type. I then interact with the filtered table, and once I have done what I needed to do with the filtered data, I return to the search box by clicking anywhere in its window. That is the part I want to improve: when I reactivate the search dialog, I would like the text I previously typed in the TextBox to be selected in full, so that I can immediately type the next filter. Update my code like a VBA pro.

answer:I will add a few lines of code so that the text in the TextBox is automatically selected when you click inside it. Here is the updated code for the UserForm:

Code for the UserForm:

```vba
Private Sub UserForm_Initialize()
    ' Populate ComboBoxTableau
    Dim ws As Worksheet
    Set ws = ThisWorkbook.ActiveSheet ' Only consider the active sheet
    Dim lo As ListObject
    For Each lo In ws.ListObjects
        Me.ComboBoxTableau.AddItem lo.Name ' Add the table names
    Next lo
    Me.ComboBoxTableau.ListIndex = 0 ' Automatically select the first table
End Sub

Private Sub ComboBoxTableau_Change()
    ' Populate ComboBoxColonne
    If Me.ComboBoxTableau.ListIndex > -1 Then
        Dim tblName As String
        tblName = Me.ComboBoxTableau.Text
        Dim ws As Worksheet
        Set ws = ThisWorkbook.ActiveSheet ' Only consider the active sheet
        Dim lo As ListObject
        Set lo = ws.ListObjects(tblName)
        Me.ComboBoxColonne.Clear
        Dim col As ListColumn
        For Each col In lo.ListColumns
            Me.ComboBoxColonne.AddItem col.Name
        Next col
    End If
End Sub

Private Function IsNumberColumn(rng As Range) As Boolean
    Dim cell As Range
    For Each cell In rng
        If Not IsEmpty(cell) Then
            If IsNumeric(cell.Value) Then
                IsNumberColumn = True
            Else
                IsNumberColumn = False
            End If
            Exit For
        End If
    Next cell
End Function

Private Sub TextBoxRecherche_Change()
    ' Filter based on the search text
    If Me.ComboBoxTableau.ListIndex > -1 And Me.ComboBoxColonne.ListIndex > -1 Then
        Dim tblName As String
        tblName = Me.ComboBoxTableau.Text
        Dim colName As String
        colName = Me.ComboBoxColonne.Text
        Dim ws As Worksheet
        Set ws = ThisWorkbook.ActiveSheet ' Only consider the active sheet
        Dim lo As ListObject
        Set lo = ws.ListObjects(tblName)
        Dim colRange As Range
        Set colRange = lo.ListColumns(colName).DataBodyRange
        If Me.TextBoxRecherche.Text <> "" Then
            If IsNumberColumn(colRange) Then
                lo.Range.AutoFilter Field:=lo.ListColumns(colName).Index, Criteria1:="=" & Me.TextBoxRecherche.Text
            Else
                lo.Range.AutoFilter Field:=lo.ListColumns(colName).Index, Criteria1:="=*" & Me.TextBoxRecherche.Text & "*"
            End If
            DoEvents ' Force Excel to refresh
            Application.ScreenUpdating = True
        Else
            lo.AutoFilter.ShowAllData
        End If
    End If
End Sub

' Added: select the text in the TextBox when it receives the focus
Private Sub TextBoxRecherche_Enter()
    TextBoxRecherche.SelStart = 0
    TextBoxRecherche.SelLength = Len(TextBoxRecherche.Text)
End Sub
```

I added a new event procedure, `TextBoxRecherche_Enter()`, which fires when the TextBox receives the focus, for example when you click on it. It selects the text in the TextBox by setting `SelStart` to 0 and `SelLength` to the length of the text. This way, when you click on the TextBox, the existing text is automatically selected and ready to be replaced by the next filter.

question:On March 13, 2023, Xiao Ya posted online about being fired by her company, and the topic trended: she joined the company on February 23, 2023 and entered her probation period. During a meeting on the afternoon of March 9, she poured too much water for a guest, leaving only one to two centimeters below the rim of the cup. A recording provided by Xiao Ya shows that a company supervisor unilaterally terminated her labor contract on the grounds that she had poured the tea too full. She says the company did not pay her the compensation she was owed. On March 10, Xiao Ya received the company's notice of termination, which stated that she had been shown during the probation period not to meet the hiring requirements. Opinion online split into two camps. Some netizens argued that the company is outrageous, that it simply found an excuse to fire an employee, and that the reason is far-fetched and laughable. Others argued that "tea shallow, wine full" is basic social etiquette, that a secretarial employee who does not know basic etiquette can reasonably be dismissed, but that the wages due must still be paid in full. Based on what you learned in the first three lectures of "Critical Thinking", analyze whether the company's conduct was wrong. Requirements: 1. A clear position, sufficient reasons, and clear logic. 2. Keep the length to 100-150 characters.

answer:On analysis, the company's conduct is problematic to a degree. First, on the ethical level, citing Xiao Ya's over-full pour as grounds for dismissal is far-fetched and exaggerates the incident. Her mistake could have been corrected with guidance and a reminder; there was no need to terminate the labor contract outright, and dismissal is an excessively harsh response that damages the relationship between employees and the company. Second, on the legal level, terminating a probationary employee's contract requires an adequate legal basis. If the company cannot provide evidence that Xiao Ya's behavior seriously harmed its interests, or evidence that she failed to meet the hiring requirements, the move may be deemed a unilateral termination in violation of the relevant laws and regulations. Finally, on wages: regardless of whether the dismissal was justified, the wages Xiao Ya is owed must be paid as the law requires, otherwise the company bears legal liability. In sum, the company was at fault in how it handled the water-pouring incident, though Xiao Ya should also work on her own professionalism and etiquette.
