dc.contributor.author | Vashchuk, Oleksandr |
dc.date.accessioned | 2024-08-23T11:31:28Z |
dc.date.available | 2024-08-23T11:31:28Z |
dc.date.issued | 2024 |
dc.identifier.citation | Vashchuk Oleksandr. Fact editing in Large Language Models: in-weights vs in-context techniques. Ukrainian Catholic University, Faculty of Applied Sciences, Department of Computer Sciences. Lviv 2024, 46 p. | uk
dc.identifier.uri | https://er.ucu.edu.ua/handle/1/4678 |
dc.language.iso | en | uk
dc.subject | Large Language Models | uk
dc.subject | in-weights techniques | uk
dc.subject | in-context techniques | uk
dc.title | Fact editing in Large Language Models: in-weights vs in-context techniques | uk
dc.type | Preprint | uk
dc.status | Published for the first time | uk
dc.description.abstracten | As Large Language Models (LLMs) have gained visibility for their ability to generate human-like text, ensuring the accuracy and reliability of the information they produce has become crucial. Fact-editing approaches have therefore received wide attention, since they make it possible to edit a model’s factual knowledge without investing resources in improving the training dataset, fine-tuning, or adaptive tuning. Alongside the development of fact-editing methods, understanding of how facts are stored and retrieved inside LLMs has also improved. The main goal of this work is to study factual retrieval mechanisms for in-context and in-weights knowledge. The work concentrates on mechanistic interpretability for LLMs. Several experiments were conducted to understand the factual retrieval mechanisms in the model; as a result, the model components that contribute most to factual recall were identified. | uk