Normalize JSON files (#4312)
Use tabs and unicode encoding for JSON files
github-actions[bot] committed Sep 20, 2024
1 parent 4dd1344 commit e23b55a
Showing 1,557 changed files with 98,049 additions and 98,049 deletions.
2 changes: 1 addition & 1 deletion addons/10dita/2023.1.0.json
@@ -26,4 +26,4 @@
 "license": "GPL v2",
 "licenseURL": "https://www.gnu.org/licenses/gpl-2.0.html",
 "translations": []
-}
+}
56 changes: 28 additions & 28 deletions addons/AIContentDescriber/2023.11.23.json
@@ -1,30 +1,30 @@
 {
-"addonId": "AIContentDescriber",
-"displayName": "AI Content Describer",
-"URL": "https://github.com/cartertemm/AI-content-describer/releases/download/v2023.11.23/AIContentDescriber-2023.11.23.nvda-addon",
-"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using the popular GPT4 vision artificial intelegence LLM.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog.",
-"sha256": "25a233f75a21c53bf4e8870abe4d74728bde630d7e85fc7d20037fbde7fe7b4a",
-"addonVersionName": "2023.11.23",
-"addonVersionNumber": {
-"major": 2023,
-"minor": 11,
-"patch": 23
-},
-"minNVDAVersion": {
-"major": 2023,
-"minor": 1,
-"patch": 0
-},
-"lastTestedVersion": {
-"major": 2023,
-"minor": 2,
-"patch": 0
-},
-"channel": "stable",
-"publisher": "Carter Temm",
-"sourceURL": "https://github.com/cartertemm/AI-content-describer",
-"license": "GPL v2",
-"licenseURL": "https://www.gnu.org/licenses/gpl-2.0.html",
-"translations": [],
-"reviewUrl": "https://github.com/nvaccess/addon-datastore/discussions/2048#discussioncomment-7647238"
+"addonId": "AIContentDescriber",
+"displayName": "AI Content Describer",
+"URL": "https://github.com/cartertemm/AI-content-describer/releases/download/v2023.11.23/AIContentDescriber-2023.11.23.nvda-addon",
+"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using the popular GPT4 vision artificial intelegence LLM.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog.",
+"sha256": "25a233f75a21c53bf4e8870abe4d74728bde630d7e85fc7d20037fbde7fe7b4a",
+"addonVersionName": "2023.11.23",
+"addonVersionNumber": {
+"major": 2023,
+"minor": 11,
+"patch": 23
+},
+"minNVDAVersion": {
+"major": 2023,
+"minor": 1,
+"patch": 0
+},
+"lastTestedVersion": {
+"major": 2023,
+"minor": 2,
+"patch": 0
+},
+"channel": "stable",
+"publisher": "Carter Temm",
+"sourceURL": "https://github.com/cartertemm/AI-content-describer",
+"license": "GPL v2",
+"licenseURL": "https://www.gnu.org/licenses/gpl-2.0.html",
+"translations": [],
+"reviewUrl": "https://github.com/nvaccess/addon-datastore/discussions/2048#discussioncomment-7647238"
 }
88 changes: 44 additions & 44 deletions addons/AIContentDescriber/2024.3.9.json

Large diffs are not rendered by default.

88 changes: 44 additions & 44 deletions addons/AIContentDescriber/2024.4.14.json
@@ -1,46 +1,46 @@
 {
-"addonId": "AIContentDescriber",
-"displayName": "AI Content Describer",
-"URL": "https://github.com/cartertemm/AI-content-describer/releases/download/v2024.04.14/AIContentDescriber-2024.04.14.nvda-addon",
-"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog.",
-"sha256": "e9fb2c4cf7cbe55b8c615766d9bd7599a257afdd23250ec77e2aa933198f934d",
-"addonVersionName": "2024.04.14",
-"addonVersionNumber": {
-"major": 2024,
-"minor": 4,
-"patch": 14
-},
-"minNVDAVersion": {
-"major": 2023,
-"minor": 1,
-"patch": 0
-},
-"lastTestedVersion": {
-"major": 2024,
-"minor": 1,
-"patch": 0
-},
-"channel": "stable",
-"publisher": "Carter Temm",
-"sourceURL": "https://github.com/cartertemm/AI-content-describer/",
-"license": "GPL v2",
-"licenseURL": "https://www.gnu.org/licenses/gpl-2.0.html",
-"translations": [
-{
-"language": "ru",
-"displayName": "\u041e\u043f\u0438\u0441\u0430\u0442\u0435\u043b\u044c \u043a\u043e\u043d\u0442\u0435\u043d\u0442\u0430 \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0418\u0418",
-"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog."
-},
-{
-"language": "sr",
-"displayName": "AI opisiva\u010d sadr\u017eaja",
-"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog."
-},
-{
-"language": "uk",
-"displayName": "\u041e\u043f\u0438\u0441\u0443\u0432\u0430\u0447 \u0432\u043c\u0456\u0441\u0442\u0443 \u0437\u0430 \u0434\u043e\u043f\u043e\u043c\u043e\u0433\u043e\u044e \u0428\u0406",
-"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog."
-}
-],
-"reviewUrl": "https://github.com/nvaccess/addon-datastore/discussions/2048"
+"addonId": "AIContentDescriber",
+"displayName": "AI Content Describer",
+"URL": "https://github.com/cartertemm/AI-content-describer/releases/download/v2024.04.14/AIContentDescriber-2024.04.14.nvda-addon",
+"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog.",
+"sha256": "e9fb2c4cf7cbe55b8c615766d9bd7599a257afdd23250ec77e2aa933198f934d",
+"addonVersionName": "2024.04.14",
+"addonVersionNumber": {
+"major": 2024,
+"minor": 4,
+"patch": 14
+},
+"minNVDAVersion": {
+"major": 2023,
+"minor": 1,
+"patch": 0
+},
+"lastTestedVersion": {
+"major": 2024,
+"minor": 1,
+"patch": 0
+},
+"channel": "stable",
+"publisher": "Carter Temm",
+"sourceURL": "https://github.com/cartertemm/AI-content-describer/",
+"license": "GPL v2",
+"licenseURL": "https://www.gnu.org/licenses/gpl-2.0.html",
+"translations": [
+{
+"language": "ru",
+"displayName": "Описатель контента с помощью ИИ",
+"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog."
+},
+{
+"language": "sr",
+"displayName": "AI opisivač sadržaja",
+"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog."
+},
+{
+"language": "uk",
+"displayName": "Описувач вмісту за допомогою ШІ",
+"description": "This add-on makes it possible to describe the focus object, navigator object, or screen using popular vision capable AI language models, like Claude, Gemini, or GPT4.\nIt also lets one understand where their face is positioned in the frame of a connected camera.\nThough content descriptions are quite detailed, they may not always be completely accurate or reflect real world information.\nTo begin with GPT, head to https://platform.openai.com/account/api-keys and create an account, then create a key for interacting with the API. See add-on documentation for more information on this.\nThen, choose the \"AI content describer\" category from NVDA's settings dialog -> manage models and enter your API key.\nPress NVDA+shift+i to pop up a menu asking how you wish to describe based on the current position, or NVDA+shift+u to describe the navigator object, or NVDA+shift+y for an image that has been copied to the clipboard such as in windows explorer. Other keystrokes are customizable from the input gestures dialog."
+}
+],
+"reviewUrl": "https://github.com/nvaccess/addon-datastore/discussions/2048"
 }
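Although every line of these files is touched, the two sides of each diff decode to identical data: JSON treats \uXXXX escapes and literal UTF-8 as the same string. A quick check, using the first characters of the Russian displayName above:

```python
import json

escaped = '"\\u041e\\u043f\\u0438\\u0441"'  # old style: ASCII-only escapes
literal = '"Опис"'                          # new style: literal characters
assert json.loads(escaped) == json.loads(literal) == "Опис"
```

This is why the sha256 fields are unchanged relative to the add-on packages themselves: only the metadata files' serialization differs, not their content.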
