152 Commits

Author SHA1 Message Date
b31ba5d278 Fix kustomize
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 10:57:30 -05:00
29cbab545f Remove ns
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 10:55:26 -05:00
60ef6a5df6 Manual pg setup
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 10:53:32 -05:00
1688178cd3 Remove kustomize from deps
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 10:51:45 -05:00
ab5f1289c4 Add kind setup script
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 10:47:43 -05:00
2c4627d467 Add skeleton aor logic
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 04:37:01 -05:00
34017ee771 Add review endpoint
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-28 03:40:29 -05:00
e41d34dd3d Group buttons and add confirmation dialogues (#310)
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
Reviewer:
<img width="409" alt="image.png" src="attachments/a090c61e-a2d8-4685-ae64-547851d1ee84">
Submitter:
<img width="404" alt="image.png" src="attachments/9205a438-1f1f-4af4-b9a0-6a8d56580afa">
<img width="411" alt="image.png" src="attachments/7ae8115b-3376-4306-b9b9-acc12226abb3">
Admin:
<img width="392" alt="image.png" src="attachments/07a182d1-5375-4195-bfda-c14f09469cbe">
<img width="388" alt="image.png" src="attachments/ce82017d-5c1d-4a93-9247-9b5608f9030e">

Confirmation Dialogue:
<img width="545" alt="image.png" src="attachments/1efff8be-1d41-429e-8c6e-3d36b7dad128">

Example where both groups show up:
<img width="404" alt="image.png" src="attachments/b0ca4be2-7c58-4c0c-9a5f-dcd89e23b08f">

Reviewed-on: #310
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-28 00:34:58 +00:00
f49e27e230 Support editing map fix descriptions (#309)
All checks were successful
continuous-integration/drone/push Build is passing
The description can be edited by the **submitter** only if the status is Changes Requested or Under Construction.
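The gating rule can be sketched as a small predicate. This is a hypothetical sketch: the status names and helper are assumptions modeled on the statuses mentioned in this log, not the real codebase's identifiers.

```typescript
// Status names are assumptions based on statuses mentioned elsewhere in this log.
type MapfixStatus =
  | "UnderConstruction"
  | "ChangesRequested"
  | "Submitted"
  | "Accepted"
  | "Released";

// The two states in which the description stays editable.
const EDITABLE_STATUSES: ReadonlySet<MapfixStatus> = new Set([
  "ChangesRequested",
  "UnderConstruction",
]);

// Only the submitter may edit the description, and only in the two states above.
function canEditDescription(isSubmitter: boolean, status: MapfixStatus): boolean {
  return isSubmitter && EDITABLE_STATUSES.has(status);
}
```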

<img width="734" alt="image.png" src="attachments/9fd7b838-f946-4091-a396-ef66f5e655bc">
<img width="724" alt="image.png" src="attachments/f65f059e-af97-448a-9627-fee827d30e59">

Reviewed-on: #309
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 23:40:42 +00:00
d500462fc7 Add user nudges for certain statuses (#308)
Some checks failed
continuous-integration/drone/push Build is failing
Shows a badge icon on the audit tab when there are any validator errors or open checklists, to direct attention to it. The nudge message is shown ONLY to the submitter.
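A minimal sketch of that visibility logic; the field names are assumptions for illustration, not the actual API shape.

```typescript
// Hypothetical audit summary; the real response fields may differ.
interface AuditState {
  validatorErrors: number;
  openChecklists: number;
}

// Badge on the audit tab whenever something needs attention.
function showAuditBadge(audit: AuditState): boolean {
  return audit.validatorErrors > 0 || audit.openChecklists > 0;
}

// The nudge message is only ever shown to the submitter.
function showNudge(audit: AuditState, isSubmitter: boolean): boolean {
  return isSubmitter && showAuditBadge(audit);
}
```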

![image.png](/attachments/f5cd9ab6-b996-40b2-ad43-fa5e9b28caf5)
![image.png](/attachments/9aba2132-ec85-4ae9-b0fa-be253ecc2355)

Closes !205

Reviewed-on: #308
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 23:30:38 +00:00
ee2bc94312 Add releasing status to the processing list (#307)
All checks were successful
continuous-integration/drone/push Build is passing
Closes !269

Reviewed-on: #307
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 22:25:39 +00:00
84edc71574 Add game name to review page (#305)
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
Deduplicated all game-name usage into a single lib. Closes !281
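A single shared lookup is the simplest shape for such a lib. The ids and names below are assumptions for illustration, not the real StrafesNET mapping.

```typescript
// One shared game-name table instead of scattered per-page copies.
// Ids and names here are hypothetical placeholders.
const GAME_NAMES: Record<number, string> = {
  1: "Bhop",
  2: "Surf",
  5: "Fly Trials",
};

// Fall back to a labeled unknown rather than rendering nothing.
function gameName(id: number): string {
  return GAME_NAMES[id] ?? `Unknown (${id})`;
}
```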

<img width="785" alt="image.png" src="attachments/0f226438-fed1-40b2-81a9-2988dd2d4a33">

Reviewed-on: #305
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 19:56:33 +00:00
7c5d8a2163 Add script review page (#304)
All checks were successful
continuous-integration/drone/push Build is passing
Closes !2

Added review dashboard button as well.

<img width="1313" alt="image.png" src="attachments/a2abd430-7ff6-431a-9261-82e026de58f5">

![image.png](/attachments/e1ba3536-2869-4661-b46c-007ddaff8f3e)

Reviewed-on: #304
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 19:56:19 +00:00
7eaa84a0ed Change Timeline Text (#301)
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
Some tweaks to the descriptions.  Evidently I didn't read carefully enough.

Reviewed-on: #301
Reviewed-by: itzaname <itzaname@noreply.itzana.me>
Co-authored-by: Rhys Lloyd <krakow20@gmail.com>
Co-committed-by: Rhys Lloyd <krakow20@gmail.com>
2025-12-27 08:19:17 +00:00
cf0cf9da7a Add workflow timeline (#300)
All checks were successful
continuous-integration/drone/push Build is passing
Closes !232

<img width="763" alt="image.png" src="attachments/559715f5-630e-4029-a19b-c9f4cf4c7270">

Reviewed-on: #300
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 08:04:02 +00:00
74565e567a Fix "0" displaying in "Review Dashboard" button on user dashboard (#298)
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
The review dashboard link should only show when the user has the correct roles, but a normal user saw the text "0" in place of the hidden button.
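This is the classic JSX falsy-render pitfall: a guard like `{roles & ROLE_REVIEWER && <Button/>}` evaluates to the number 0 for users without the role, and that 0 is rendered as text. A framework-free sketch, with hypothetical names:

```typescript
// Hypothetical role bitmask; the real role representation may differ.
const ROLE_REVIEWER = 1 << 2;

// Buggy guard: `roles & ROLE_REVIEWER && element` evaluates to the number 0
// for users without the role, and JSX renders that 0 as literal text.
function buggyGuard(roles: number): number | string {
  return roles & ROLE_REVIEWER && "ReviewDashboardButton";
}

// Fix: make the guard an actual boolean so the falsy branch renders nothing.
function fixedGuard(roles: number): string | null {
  return (roles & ROLE_REVIEWER) !== 0 ? "ReviewDashboardButton" : null;
}
```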

Reviewed-on: #298
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 05:39:33 +00:00
ea65794255 Cycle before and after images every 1.5 seconds (#295)
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
The images should auto cycle now that the thumbnails are working.

I don't know how to test this!  This is what I tried:
```
bun install
bun run build
VITE_API_HOST=https://maps.staging.strafes.net/v1 bun run preview
```
but the mapfixes page won't load the mapfixes.
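The cycling logic itself can be sketched framework-free (the real component presumably wires this into a React effect; the class and names here are hypothetical):

```typescript
// Flip between the before/after images on a fixed interval.
class ImageCycler {
  index = 0;
  private images: string[];
  private intervalMs: number;
  private timer: ReturnType<typeof setInterval> | undefined;

  constructor(images: string[], intervalMs = 1500) {
    this.images = images;
    this.intervalMs = intervalMs;
  }

  // Advance to the next image and return its source; the timer calls this.
  advance(): string {
    this.index = (this.index + 1) % this.images.length;
    return this.images[this.index];
  }

  start(onChange: (src: string) => void): void {
    this.timer = setInterval(() => onChange(this.advance()), this.intervalMs);
  }

  // Must be called on unmount to avoid leaking the interval.
  stop(): void {
    if (this.timer !== undefined) clearInterval(this.timer);
  }
}
```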

Reviewed-on: #295
Reviewed-by: itzaname <itzaname@noreply.itzana.me>
Co-authored-by: Rhys Lloyd <krakow20@gmail.com>
Co-committed-by: Rhys Lloyd <krakow20@gmail.com>
2025-12-27 05:26:04 +00:00
58706a5687 Add user/reviewer dashboard (#297)
All checks were successful
continuous-integration/drone/push Build is passing
Adds "at a glance" dashboard so life is less painful.

![image.png](/attachments/43e83777-7196-4274-9adc-e1268e43bc0f)
![image.png](/attachments/1cbe99ab-50b8-443a-aa48-ad9107ccfb1e)

Reviewed-on: #297
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-27 05:20:45 +00:00
efeb525e19 Merge pull request 'Add mapfix history on maps page' (#294) from feature/mapfix-list into staging
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #294
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
2025-12-27 04:51:03 +00:00
5a1fe60a7b fix quat docker
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-26 19:13:03 -08:00
01cfe67848 Just exclude rejected and released for active list
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-12-26 20:38:18 -05:00
a19bc4d380 Add mapfix history on maps page
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-26 20:32:55 -05:00
ae006565d6 Merge pull request 'Fix overflow on mapfix/submission' (#293) from fix/overflow into staging
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #293
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
2025-12-27 00:44:26 +00:00
57bca99109 Fix overflow
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-12-26 19:42:36 -05:00
cd09c9b18e Populate username for map fixes by author id
Some checks failed
continuous-integration/drone/push Build is failing
continuous-integration/drone/pr Build is passing
2025-12-25 20:42:22 -08:00
e48cbaff72 Make maps behave like normal link 2025-12-25 20:42:22 -08:00
140d58b808 Make comments support newlines 2025-12-25 20:42:22 -08:00
ba761549b8 Force dark theme 2025-12-25 20:42:22 -08:00
86643fef8d Merge branch 'master' into staging 2025-12-25 20:42:18 -08:00
96af864c5e Deploy staging to prod (#286)
All checks were successful
continuous-integration/drone/push Build is passing
Pull in validator changes and the full UI rework to remove Next.js.

Co-authored-by: Rhys Lloyd <krakow20@gmail.com>
Reviewed-on: #286
Reviewed-by: Rhys Lloyd <quaternions@noreply.itzana.me>
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-26 03:30:36 +00:00
7db89fd99b Fix bun lock file
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2025-12-25 22:10:29 -05:00
f2bb1b078d Fix content width and standardize on skeleton loading
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is failing
2025-12-25 21:37:23 -05:00
66878fba4e Switch loading text to skeleton
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-25 21:02:15 -05:00
bda99550be Fix submission icon 2025-12-25 21:00:28 -05:00
8a216c7e82 Add username api
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-25 20:55:15 -05:00
e5277c05a1 Avatar image loading
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-25 20:38:17 -05:00
e4af76cfd4 Fix api endpoint
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-25 20:22:24 -05:00
30db1cc375 Fix the build issues
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-25 19:52:01 -05:00
b50c84f8cf Use port 3000
Some checks failed
continuous-integration/drone/push Build is failing
2025-12-25 19:49:52 -05:00
7589ef7df6 Fix dockerfile for spa
Some checks failed
continuous-integration/drone/push Build was killed
2025-12-25 19:49:06 -05:00
8ab8c441b0 Home page and header fixes
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-25 19:45:16 -05:00
a26b228ebe Add 404 page 2025-12-25 19:45:16 -05:00
3654755540 Thumbnail/nav cleanup 2025-12-25 19:45:16 -05:00
c2b50ffab2 Cleanup home/nav 2025-12-25 19:45:16 -05:00
75756917b1 some theming 2025-12-25 19:45:16 -05:00
8989c08857 theme 2025-12-25 19:45:16 -05:00
b2232f4177 Initial work to nuke nextjs 2025-12-25 19:45:16 -05:00
7d1c4d2b6c Add stats endpoint
Some checks failed
continuous-integration/drone Build was killed
continuous-integration/drone/push Build is passing
2025-12-25 18:58:52 -05:00
ca401d4b96 Add batch thumbnail endpoint (#285)
All checks were successful
continuous-integration/drone/push Build is passing
Step 1 of eliminating Next.js is adding our own way to query thumbnails from Roblox, since Next.js currently handles that. This implements a batch endpoint with caching. Bonus: thumbnails will actually work once we start using this.
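The caching side can be sketched as a small layer in front of the batch endpoint. This is a hypothetical client-side sketch: the fetcher signature and response shape are assumptions, and in the real service the fetch would be an HTTP call to the backend.

```typescript
// Simple in-memory cache in front of a hypothetical batch thumbnail endpoint.
const thumbnailCache = new Map<number, string>();

function getThumbnails(
  assetIds: number[],
  fetchBatch: (ids: number[]) => Record<number, string>,
): Map<number, string> {
  // Only ask the backend for ids we have not seen before.
  const missing = assetIds.filter((id) => !thumbnailCache.has(id));
  if (missing.length > 0) {
    for (const [id, url] of Object.entries(fetchBatch(missing))) {
      thumbnailCache.set(Number(id), url);
    }
  }
  // Answer every requested id from the cache.
  const result = new Map<number, string>();
  for (const id of assetIds) {
    result.set(id, thumbnailCache.get(id) ?? "");
  }
  return result;
}
```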

Reviewed-on: #285
Co-authored-by: itzaname <me@sliving.io>
Co-committed-by: itzaname <me@sliving.io>
2025-12-25 22:56:59 +00:00
9ab80931bf remove unfulfilled lints
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-09 14:34:39 -08:00
09022e7292 change allow to expect 2025-12-09 14:34:16 -08:00
3400056c23 submissions-api: v0.10.1 audit events
All checks were successful
continuous-integration/drone/push Build is passing
2025-12-08 18:10:32 -08:00
57501d446f submissions-api: add audit events 2025-12-08 18:09:54 -08:00
47c0fff0ec Merge pull request 'Update javascript' (#283) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #283
2025-12-06 04:48:21 +00:00
e6ef4e33ac mui
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-12-05 20:44:16 -08:00
aeba355d6c format
Some checks failed
continuous-integration/drone/push Build is failing
2025-12-05 20:27:02 -08:00
8ad94bcdc8 bug 2025-12-05 20:27:02 -08:00
66f02a2f45 docker: update bun
Some checks failed
continuous-integration/drone/push Build is failing
2025-12-05 20:16:51 -08:00
c6a685310e bun update
Some checks failed
continuous-integration/drone/push Build is failing
2025-12-05 20:14:19 -08:00
72b95ae271 drone: update bun 2025-12-05 20:13:51 -08:00
7c04cc5c23 drone: update rust
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-27 16:36:25 -08:00
a3bf111b4e update deps
Some checks failed
continuous-integration/drone/push Build is failing
2025-11-27 15:56:03 -08:00
d82f44e9d2 remove unused type
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-09 09:30:03 -08:00
4c5a8c39c1 update deps
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-09 06:00:03 -08:00
4e55b1d665 drop lazy_regex dep 2025-11-09 05:56:58 -08:00
63d7bec3a3 validation: fix variable names
All checks were successful
continuous-integration/drone/push Build is passing
2025-11-07 10:00:16 -08:00
b7c28616ad Merge pull request 'submissions: Fix Maps.Update Date + Release Date Mixup' (#282) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #282
2025-09-29 02:14:51 +00:00
ce9b26378c submissions: Fix Maps.Update Date
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-09-28 19:09:59 -07:00
df8f6463da bruh
All checks were successful
continuous-integration/drone/push Build is passing
2025-09-23 17:26:03 -07:00
6ccc56cc55 submissions: fix release date mixup
All checks were successful
continuous-integration/drone/push Build is passing
2025-09-23 16:31:50 -07:00
89ab25dfb9 Merge pull request 'deploy fixes' (#279) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #279
2025-09-23 22:41:35 +00:00
ffa1308e73 update deps
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-09-23 15:33:11 -07:00
b5a367e159 Return OperationID from release-submissions (#278)
The release-submissions endpoint creates an operation, but does not return it.

Reviewed-on: #278
Co-authored-by: Rhys Lloyd <krakow20@gmail.com>
Co-committed-by: Rhys Lloyd <krakow20@gmail.com>
2025-09-23 15:31:34 -07:00
6b05836a56 openapi: increase max script path length
All checks were successful
continuous-integration/drone/push Build is passing
2025-09-23 15:12:20 -07:00
8abee39d15 web: do not show Admin Submit button on mapfixes
All checks were successful
continuous-integration/drone/push Build is passing
2025-09-17 14:24:25 -07:00
b0b5ff0725 Merge pull request 'web: add missing button lost in refactor' (#275) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #275
2025-09-17 00:12:15 +00:00
456b62104b web: add missing button lost in refactor
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
This was lost in 8f2a0b53e4.
2025-09-16 16:56:31 -07:00
574a05424d update readme
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-18 15:54:46 -07:00
0532965d37 Merge pull request 'Maps Metadata Maintenance' (#267) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #267
2025-08-16 06:24:19 +00:00
51ba05df69 backend: remove mapfixes migrate endpoint
All checks were successful
continuous-integration/drone/pr Build is passing
continuous-integration/drone/push Build is passing
2025-08-15 23:20:08 -07:00
30b594b345 backend: fix completely wrong gorm thing
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-15 23:05:40 -07:00
ab361dffd1 backend: fix completely wrong gorm thing
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-15 23:01:37 -07:00
d30a94e42d backend: add mission interface method
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-15 22:34:22 -07:00
dae378a188 backend: update go-grpc 2025-08-15 22:31:26 -07:00
cd912d683e validation: remove table from luau execution script
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-15 22:23:29 -07:00
f5dfd5a163 Revert "validation: spend way too much time saving 1 microsecond"
This reverts commit 18d51af7ca.
2025-08-15 22:19:17 -07:00
18d51af7ca validation: spend way too much time saving 1 microsecond
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-15 22:17:54 -07:00
a45aa700d8 make: frontend image builds from scratch
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-15 20:13:01 -07:00
907b6d2034 web: fix Releasing statusChip 2025-08-15 20:08:02 -07:00
a454ea01b6 web: fix unknown status
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-15 19:59:17 -07:00
7f8c9210a5 docker: move auth to non-editor-searchable location 2025-08-15 19:57:47 -07:00
f76f8cd136 Merge pull request 'Mapfix Release' (#264) from mapfix-release into staging
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #264
2025-08-16 02:47:28 +00:00
55b79b8f9b backend: typo
Some checks failed
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is failing
2025-08-15 19:40:24 -07:00
1ce09e3f9b docker: add env vars to compose.yml
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-15 19:35:39 -07:00
2878467cbf backend: add forgotten permission 2025-08-15 19:35:28 -07:00
2639abc7c8 submissions-api-rs: v0.9.1 fixes
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-15 19:02:00 -07:00
231c11632b openapi: add missing fields 2025-08-15 19:01:20 -07:00
877f5c024f submissions-api-rs: fix MapfixResponse 2025-08-15 19:01:20 -07:00
18ca6de7d3 submissions-api-rs: v0.9.0 get_mapfixes
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-15 18:41:04 -07:00
6ee8816eed submissions-api-rs: add get_mapfixes endpoint
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-15 18:39:43 -07:00
de6163093f backend: add missing list query 2025-08-15 18:39:43 -07:00
d7456d500b backend: create mapfixes migration code UPLOADED -> RELEASED
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-15 17:48:45 -07:00
1a558f35cf openapi: generate 2025-08-15 17:14:05 -07:00
b5b07ec1ce openapi: create migration endpoint 2025-08-15 17:13:59 -07:00
efd60f45df validator: respond correctly to upload failure
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-14 19:48:08 -07:00
10507c62ab openapi: generate 2025-08-14 19:48:07 -07:00
0d18167b03 remove SubmissionStatusReleasing 2025-08-14 19:48:07 -07:00
e90cc425ba validator: perform unnecessary allocation to appease borrow checker 2025-08-14 19:48:07 -07:00
76512bec0d validator: update deps 2025-08-14 19:48:07 -07:00
412dadfc3e validator: add mapfix and submission release 2025-08-14 19:48:07 -07:00
31cca0d450 validator: update rust-grpc 2025-08-14 19:48:07 -07:00
cfb7461c5a validator: remove implicit map update 2025-08-14 19:48:07 -07:00
0cb419430a backend: make release pipeline internals 2025-08-14 19:48:07 -07:00
807d394646 web: add release buttons 2025-08-12 17:46:42 -07:00
3c9d04d637 openapi: generate 2025-08-12 16:18:15 -07:00
94ad0ff774 openapi: add new status endpoints 2025-08-12 16:18:01 -07:00
25f6c9e086 backend: tweak status sets to reflect new statuses
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-12 16:05:33 -07:00
a62a231b0a backend: new statuses for mapfix and submission 2025-08-12 16:05:13 -07:00
468204b299 validator: dedicated api key for luau execution
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-12 15:48:59 -07:00
63458aee09 Merge pull request 'Update Metadata on Mapfix' (#263) from metadata-maintenance into staging
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #263
2025-08-12 21:56:07 +00:00
8297ed165b readme: document env vars
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-11 17:50:39 -07:00
c98c3fe47e validator: update modes on mapfix
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/pr Build is passing
2025-08-11 17:40:14 -07:00
9c1e1e4347 validator: move count_sequential fn 2025-08-11 17:40:02 -07:00
d37a8b9030 validator: call Luau Execution API to get LoadAssetVersion
All checks were successful
continuous-integration/drone/push Build is passing
2025-08-11 17:02:44 -07:00
75682a2375 validator: connect to maps_extended 2025-08-11 17:02:28 -07:00
8f6012c7ef validator: remove MessageHandler constructor (too many arguments) 2025-08-11 16:33:37 -07:00
f59979987f Merge pull request 'Deploy Public API' (#256) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #256
2025-08-08 22:07:37 +00:00
295b1d842b Sequential Modes Check (#260)
Some checks failed
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is passing
Closes #242

Reviewed-on: #260
Co-authored-by: Rhys Lloyd <krakow20@gmail.com>
Co-committed-by: Rhys Lloyd <krakow20@gmail.com>
2025-08-07 19:48:16 -07:00
93147060d6 drone: re-sign pipeline after self-inflicted catastrophe
Some checks failed
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is passing
2025-08-07 19:22:12 -07:00
fe539bf190 move rust submissions api 2025-08-07 17:37:15 -07:00
759ac08aef swagger: generate 2025-08-07 16:13:42 -07:00
34bc623ce6 make UserInfoHandle.HasRoles public 2025-08-07 16:08:47 -07:00
9999d1ff87 fix docs redirect
Some checks failed
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is passing
2025-08-06 20:15:53 -07:00
8a0cd50b68 fix public api
Some checks failed
continuous-integration/drone/pr Build is failing
continuous-integration/drone/push Build is passing
2025-08-06 20:04:33 -07:00
a232269d54 Merge pull request 'Extend Web API Maps With New Fields' (#250) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #250
2025-07-26 03:29:09 +00:00
a7c4ca4b49 Merge pull request 'Implement Maps' (#248) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #248
2025-07-26 01:26:26 +00:00
ca9f82a5aa Merge pull request 'Set Download File Name' (#245) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #245
2025-07-23 09:32:27 +00:00
e1a2f6f075 Merge pull request 'Fix gRPC' (#244) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #244
2025-07-23 04:53:29 +00:00
dad904cd86 Merge pull request 'Convert Validator API to gRPC' (#239) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #239
2025-07-22 04:32:04 +00:00
ad7117a69c Merge pull request 'Scream Test Backend Overhaul' (#237) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #237
2025-07-18 06:28:18 +00:00
d566591ea6 Merge pull request 'Fix Audit Event Order + Check Unanchored Parts' (#234) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #234
2025-07-16 06:47:29 +00:00
424ef6238b Merge pull request 'Prevent Mapfix Duplicates + Correctly Report Transaction Errors' (#221) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #221
2025-07-01 12:40:26 +00:00
0f0ab4d3e0 Merge pull request 'Update Roblox Api + Update Deps' (#217) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #217
2025-07-01 08:47:27 +00:00
3e2d782289 Merge pull request 'QoL Web Changes + Map Download Permission Fix' (#214) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #214
2025-06-30 10:20:03 +00:00
dc446c545f Fix Bypass Submit + Audit Checklist + Map Download Button (#207)
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #207
2025-06-24 06:41:56 +00:00
e234a87d05 Replace Bypass Submit With Submit Unchecked + Error Endpoint (#200)
Some checks are pending
continuous-integration/drone/push Build is running
Reviewed-on: #200
Co-authored-by: Quaternions <krakow20@gmail.com>
Co-committed-by: Quaternions <krakow20@gmail.com>
2025-06-23 23:39:18 -07:00
8ab772ea81 Validate Asset Version + Website QoL + Script Names Fix (#193)
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #193
2025-06-10 23:53:07 +00:00
9b58b1d26a Frontend Rework (#185)
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #185
2025-06-09 01:09:17 +00:00
7689001e74 Merge pull request '404 / 500 Thumbnails + Fix Regex Capture Groups' (#168) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #168
2025-06-07 04:02:26 +00:00
e89abed3d5 Merge pull request 'Thumbnail Fixes + Bypass Submit Button' (#161) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #161
2025-06-05 01:34:35 +00:00
b792d33164 Merge pull request 'Update Rust Dependencies (Roblox Format Zstd Support)' (#142) from staging into master
All checks were successful
continuous-integration/drone/push Build is passing
Reviewed-on: #142
2025-06-01 23:13:58 +00:00
929b5949f0 Merge pull request 'Snapshot "Working" Code' (#139) from staging into master
Some checks failed
continuous-integration/drone/push Build is failing
Reviewed-on: #139
2025-04-27 21:21:05 +00:00
166 changed files with 30769 additions and 3140 deletions


@@ -24,7 +24,7 @@ steps:
- staging
- name: build-validator
-image: clux/muslrust:1.86.0-stable
+image: clux/muslrust:1.91.0-stable
commands:
- make build-validator
when:
@@ -33,7 +33,7 @@ steps:
- staging
- name: build-frontend
-image: oven/bun:1.2.8
+image: oven/bun:1.3.3
commands:
- apt-get update
- apt-get install make
@@ -149,6 +149,6 @@ steps:
- pull_request
---
kind: signature
-hmac: cc7f2f8dac4285b5fa1df163bd92115f1a51a92050687cd08169e17803a2de4c
+hmac: 6de9d4b91f14b30561856daf275d1fd523e1ce7a5a3651b660f0d8907b4692fb
...

Cargo.lock (generated, 993 changed lines)

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
[workspace]
members = [
"validation",
"validation/api",
"submissions-api-rs",
]
resolver = "2"


@@ -34,7 +34,6 @@ docker-validator:
make build-validator
make image-validator
docker-frontend:
-make build-frontend
make image-frontend
docker: docker-backend docker-validator docker-frontend


@@ -13,11 +13,11 @@ Prerequisite: golang installed
1. Run `go generate` to ensure the generated API is up-to-date. This project uses [ogen](https://github.com/ogen-go/ogen).
```bash
-go generate -run "go run github.com/ogen-go/ogen/cmd/ogen@latest --target api --clean openapi.yaml"
+go generate
```
2. Build the project.
```bash
-go build git.itzana.me/strafesnet/maps-service
+make build-backend
```
By default, the project opens at `localhost:8080`.
@@ -47,14 +47,16 @@ AUTH_HOST="http://localhost:8083/"
Prerequisite: rust installed
-1. `cd validation`
-2. `cargo run --release`
+1. `cargo run --release -p maps-validation`
Environment Variables:
- ROBLOX_GROUP_ID
- RBXCOOKIE
- RBX_API_KEY
- API_HOST_INTERNAL
- NATS_HOST
+- LOAD_ASSET_VERSION_PLACE_ID
+- LOAD_ASSET_VERSION_UNIVERSE_ID
#### License


@@ -25,6 +25,8 @@ func main() {
app := cmds.NewApp()
app.Commands = []*cli.Command{
cmds.NewServeCommand(),
+cmds.NewApiCommand(),
+cmds.NewAORCommand(),
}
if err := app.Run(os.Args); err != nil {


@@ -34,7 +34,7 @@ services:
"--data-rpc-host","dataservice:9000",
]
env_file:
-  - ../auth-compose/strafesnet_staging.env
+  - /home/quat/auth-compose/strafesnet_staging.env
depends_on:
- authrpc
- nats
@@ -59,11 +59,13 @@ services:
maptest-validator
container_name: validation
env_file:
-  - ../auth-compose/strafesnet_staging.env
+  - /home/quat/auth-compose/strafesnet_staging.env
environment:
- ROBLOX_GROUP_ID=17032139 # "None" is special case string value
- API_HOST_INTERNAL=http://submissions:8083/v1
- NATS_HOST=nats:4222
+  - LOAD_ASSET_VERSION_PLACE_ID=14001440964
+  - LOAD_ASSET_VERSION_UNIVERSE_ID=4850603885
depends_on:
- nats
# note: this races the submissions which creates a nats stream
@@ -73,26 +75,6 @@ services:
networks:
- maps-service-network
-public_api:
-image:
-maptest-api
-container_name: public_api
-command: [
-# debug
-"--debug","api",
-# http service port
-"--port","8084",
-"--dev-rpc-host","dev-service:8081",
-"--maps-rpc-host","maptest-api:8081",
-]
-depends_on:
-- submissions
-- dev_service
-networks:
-- maps-service-network
-ports:
-- "8084:8084"
dataservice:
image: registry.itzana.me/strafesnet/data-service:master
container_name: dataservice
@@ -123,7 +105,7 @@ services:
- REDIS_ADDR=authredis:6379
- RBX_GROUP_ID=17032139
env_file:
-  - ../auth-compose/auth-service.env
+  - /home/quat/auth-compose/auth-service.env
depends_on:
- authredis
networks:
@@ -137,7 +119,7 @@ services:
environment:
- REDIS_ADDR=authredis:6379
env_file:
-  - ../auth-compose/auth-service.env
+  - /home/quat/auth-compose/auth-service.env
depends_on:
- authredis
networks:


@@ -230,7 +230,7 @@ var SwaggerInfo = &swag.Spec{
BasePath: "/public-api/v1",
Schemes: []string{},
Title: "StrafesNET Maps API",
Description: "Obtain an api key at https://dev.strafes.net\nRequires Data:Read permission",
Description: "Obtain an api key at https://dev.strafes.net\nRequires Maps:Read permission",
InfoInstanceName: "swagger",
SwaggerTemplate: docTemplate,
LeftDelim: "{{",


@@ -1,7 +1,7 @@
{
"swagger": "2.0",
"info": {
"description": "Obtain an api key at https://dev.strafes.net\nRequires Data:Read permission",
"description": "Obtain an api key at https://dev.strafes.net\nRequires Maps:Read permission",
"title": "StrafesNET Maps API",
"contact": {},
"version": "1.0"


@@ -64,7 +64,7 @@ info:
contact: {}
description: |-
Obtain an api key at https://dev.strafes.net
-Requires Data:Read permission
+Requires Maps:Read permission
title: StrafesNET Maps API
version: "1.0"
paths:

go.mod

@@ -6,22 +6,23 @@ toolchain go1.24.5
require (
git.itzana.me/StrafesNET/dev-service v0.0.0-20250628052121-92af8193b5ed
git.itzana.me/strafesnet/go-grpc v0.0.0-20250807005013-301d35b914ef
git.itzana.me/strafesnet/go-grpc v0.0.0-20250815013325-1c84f73bdcb1
git.itzana.me/strafesnet/utils v0.0.0-20220716194944-d8ca164052f9
github.com/dchest/siphash v1.2.3
github.com/gin-gonic/gin v1.10.1
github.com/go-faster/errors v0.7.1
github.com/go-faster/jx v1.1.0
github.com/go-faster/jx v1.2.0
github.com/nats-io/nats.go v1.37.0
github.com/ogen-go/ogen v1.2.1
github.com/ogen-go/ogen v1.18.0
github.com/redis/go-redis/v9 v9.10.0
github.com/sirupsen/logrus v1.9.3
github.com/swaggo/files v1.0.1
github.com/swaggo/gin-swagger v1.6.0
github.com/swaggo/swag v1.16.6
github.com/urfave/cli/v2 v2.27.6
go.opentelemetry.io/otel v1.32.0
go.opentelemetry.io/otel/metric v1.32.0
go.opentelemetry.io/otel/trace v1.32.0
go.opentelemetry.io/otel v1.39.0
go.opentelemetry.io/otel/metric v1.39.0
go.opentelemetry.io/otel/trace v1.39.0
google.golang.org/grpc v1.48.0
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.25.12
@@ -33,9 +34,11 @@ require (
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudwego/base64x v0.1.4 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
@@ -55,7 +58,7 @@ require (
github.com/jinzhu/now v1.1.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.6 // indirect
github.com/klauspost/compress v1.18.1 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
@@ -65,36 +68,38 @@ require (
github.com/nats-io/nuid v1.0.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/crypto v0.32.0 // indirect
golang.org/x/mod v0.17.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
golang.org/x/crypto v0.46.0 // indirect
golang.org/x/mod v0.31.0 // indirect
golang.org/x/tools v0.40.0 // indirect
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013 // indirect
google.golang.org/protobuf v1.34.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
require (
github.com/dlclark/regexp2 v1.11.0 // indirect
github.com/fatih/color v1.17.0 // indirect
github.com/dlclark/regexp2 v1.11.5 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-faster/yaml v0.4.6 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
// github.com/golang/protobuf v1.5.4 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/segmentio/asm v1.2.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/exp v0.0.0-20240531132922-fd00a4e0eefc // indirect
golang.org/x/net v0.34.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sys v0.29.0 // indirect
golang.org/x/text v0.23.0 // indirect
go.uber.org/zap v1.27.1 // indirect
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93 // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.39.0 // indirect
golang.org/x/text v0.32.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
)

go.sum

@@ -2,8 +2,8 @@ cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMT
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
git.itzana.me/StrafesNET/dev-service v0.0.0-20250628052121-92af8193b5ed h1:eGWIQx2AOrSsLC2dieuSs8MCliRE60tvpZnmxsTBtKc=
git.itzana.me/StrafesNET/dev-service v0.0.0-20250628052121-92af8193b5ed/go.mod h1:KJal0K++M6HEzSry6JJ2iDPZtOQn5zSstNlDbU3X4Jg=
git.itzana.me/strafesnet/go-grpc v0.0.0-20250807005013-301d35b914ef h1:SJi4V4+xzScFnbMRN1gkZxcqR1xKfiT7CaXanLltEzw=
git.itzana.me/strafesnet/go-grpc v0.0.0-20250807005013-301d35b914ef/go.mod h1:X7XTRUScRkBWq8q8bplbeso105RPDlnY7J6Wy1IwBMs=
git.itzana.me/strafesnet/go-grpc v0.0.0-20250815013325-1c84f73bdcb1 h1:imXibfeYcae6og0TTDUFRQ3CQtstGjIoLbCn+pezD2o=
git.itzana.me/strafesnet/go-grpc v0.0.0-20250815013325-1c84f73bdcb1/go.mod h1:X7XTRUScRkBWq8q8bplbeso105RPDlnY7J6Wy1IwBMs=
git.itzana.me/strafesnet/utils v0.0.0-20220716194944-d8ca164052f9 h1:7lU6jyR7S7Rhh1dnUp7GyIRHUTBXZagw8F4n4hOyxLw=
git.itzana.me/strafesnet/utils v0.0.0-20220716194944-d8ca164052f9/go.mod h1:uyYerSieEt4v0MJCdPLppG0LtJ4Yj035vuTetWGsxjY=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
@@ -14,12 +14,18 @@ github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
@@ -39,8 +45,12 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dchest/siphash v1.2.3 h1:QXwFc8cFOR2dSa/gE6o/HokBMWtLUaNDVd+22aKHeEA=
github.com/dchest/siphash v1.2.3/go.mod h1:0NvQU092bT0ipiFN++/rXm69QG9tVxLAlQHIXMPAkHc=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=
github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/dlclark/regexp2 v1.11.5 h1:Q/sSnsKerHeCkc/jSTNq1oCm7KiVgUMZRDUoRu0JQZQ=
github.com/dlclark/regexp2 v1.11.5/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
@@ -49,6 +59,8 @@ github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.17.0 h1:GlRw1BRJxkpqUCBKzKOw098ed57fEsKeNjpTe3cSjK4=
github.com/fatih/color v1.17.0/go.mod h1:YZ7TlrGPkiz6ku9fK3TLD/pl3CpsiFyu8N92HLgmosI=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
@@ -63,11 +75,13 @@ github.com/go-faster/errors v0.7.1 h1:MkJTnDoEdi9pDabt1dpWf7AA8/BaSYZqibYyhZ20AY
github.com/go-faster/errors v0.7.1/go.mod h1:5ySTjWFiphBs07IKuiL69nxdfd5+fzh1u7FPGZP2quo=
github.com/go-faster/jx v1.1.0 h1:ZsW3wD+snOdmTDy9eIVgQdjUpXRRV4rqW8NS3t+20bg=
github.com/go-faster/jx v1.1.0/go.mod h1:vKDNikrKoyUmpzaJ0OkIkRQClNHFX/nF3dnTJZb3skg=
github.com/go-faster/jx v1.2.0 h1:T2YHJPrFaYu21fJtUxC9GzmluKu8rVIFDwwGBKTDseI=
github.com/go-faster/jx v1.2.0/go.mod h1:UWLOVDmMG597a5tBFPLIWJdUxz5/2emOpfsj9Neg0PE=
github.com/go-faster/yaml v0.4.6 h1:lOK/EhI04gCpPgPhgt0bChS6bvw7G3WwI8xxVe0sw9I=
github.com/go-faster/yaml v0.4.6/go.mod h1:390dRIvV4zbnO7qC9FGo6YYutc+wyyUSHBgbXL52eXk=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
@@ -113,8 +127,8 @@ github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -140,6 +154,8 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.17.6 h1:60eq2E/jlfwQXtvZEeBUYADs+BwKBWURIY+Gj2eRGjI=
github.com/klauspost/compress v1.17.6/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co=
github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
@@ -159,6 +175,8 @@ github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
@@ -176,11 +194,15 @@ github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OS
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/ogen-go/ogen v1.2.1 h1:C5A0lvUMu2wl+eWIxnpXMWnuOJ26a2FyzR1CIC2qG0M=
github.com/ogen-go/ogen v1.2.1/go.mod h1:P2zQdEu8UqaVRfD5GEFvl+9q63VjMLvDquq1wVbyInM=
github.com/ogen-go/ogen v1.18.0 h1:6RQ7lFBjOeNaUWu4getfqIh4GJbEY4hqKuzDtec/g60=
github.com/ogen-go/ogen v1.18.0/go.mod h1:dHFr2Wf6cA7tSxMI+zPC21UR5hAlDw8ZYUkK3PziURY=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/redis/go-redis/v9 v9.10.0 h1:FxwK3eV8p/CQa0Ch276C7u2d0eNC9kCmAYQ7mCXCzVs=
github.com/redis/go-redis/v9 v9.10.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
@@ -188,6 +210,10 @@ github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
github.com/segmentio/asm v1.2.0/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/segmentio/asm v1.2.1 h1:DTNbBqs57ioxAD4PrArqftgypG4/qNpXoJx8TVXxPR0=
github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
@@ -204,8 +230,9 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/swaggo/files v1.0.1 h1:J1bVJ4XHZNq0I46UU90611i9/YzdrF7x92oX1ig5IdE=
github.com/swaggo/files v1.0.1/go.mod h1:0qXmMNH6sXNf+73t65aKeB+ApmgxdnkQzVTAj2uaMUg=
github.com/swaggo/gin-swagger v1.6.0 h1:y8sxvQ3E20/RCyrXeFfg60r6H0Z+SwpTjMYsMm+zy8M=
@@ -221,12 +248,14 @@ github.com/urfave/cli/v2 v2.27.6/go.mod h1:3Sevf16NykTbInEnD0yKkjDAeZDS0A6bzhBH5
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 h1:gEOO8jv9F4OT7lGCjxCBTO/36wtF6j2nSip77qHd4x4=
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U=
go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg=
go.opentelemetry.io/otel/metric v1.32.0 h1:xV2umtmNcThh2/a/aCP+h64Xx5wsj8qqnkYZktzNa0M=
go.opentelemetry.io/otel/metric v1.32.0/go.mod h1:jH7CIbbK6SH2V2wE16W05BHCtIDzauciCRLoc/SyMv8=
go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQDe03fM=
go.opentelemetry.io/otel/trace v1.32.0/go.mod h1:+i4rkvCraA+tG6AzwloGaCtkx53Fa+L+V8e9a7YvhT8=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
@@ -234,6 +263,8 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
@@ -242,15 +273,21 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20240531132922-fd00a4e0eefc h1:O9NuF4s+E/PvMIy+9IUZB9znFwUIXEWSstNjek6VpVg=
golang.org/x/exp v0.0.0-20240531132922-fd00a4e0eefc/go.mod h1:XtvwrStGgqGPLc4cjQfWqZHG1YFdYs6swckp8vpsjnc=
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93 h1:fQsdNF2N+/YewlRZiricy4P1iimyPKZ/xwniHj8Q2a0=
golang.org/x/exp v0.0.0-20251219203646-944ab1f22d93/go.mod h1:EPRbTFwzwjXj9NpYyyrvenVh9Y+GFeEvMNh7Xuz7xgU=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI=
golang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -266,6 +303,8 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -275,6 +314,8 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -293,6 +334,8 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -303,6 +346,8 @@ golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -312,6 +357,8 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA=
golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

kind-setup.sh Executable file

@@ -0,0 +1,493 @@
#!/usr/bin/env bash
set -euo pipefail
# Configuration
CLUSTER_NAME="${KIND_CLUSTER_NAME:-maps-service-local}"
INFRA_PATH="${INFRA_PATH:-$HOME/Documents/Projects/infra}"
NAMESPACE="${NAMESPACE:-default}"
REGISTRY_NAME="kind-registry"
REGISTRY_PORT="5001"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check dependencies
check_dependencies() {
log_info "Checking dependencies..."
local deps=("kind" "kubectl" "docker")
for dep in "${deps[@]}"; do
if ! command -v "$dep" &> /dev/null; then
log_error "$dep is not installed. Please install it first."
exit 1
fi
done
log_info "All dependencies are installed"
}
# Create local container registry
create_registry() {
if [ "$(docker ps -q -f name=${REGISTRY_NAME})" ]; then
log_info "Registry ${REGISTRY_NAME} already running"
return 0
fi
if [ "$(docker ps -aq -f name=${REGISTRY_NAME})" ]; then
log_info "Starting existing registry ${REGISTRY_NAME}"
docker start ${REGISTRY_NAME}
return 0
fi
log_info "Creating local registry ${REGISTRY_NAME}..."
docker run -d --restart=always -p "127.0.0.1:${REGISTRY_PORT}:5000" --name "${REGISTRY_NAME}" registry:2
}
# Create KIND cluster with registry
create_cluster() {
if kind get clusters | grep -q "^${CLUSTER_NAME}$"; then
log_warn "Cluster ${CLUSTER_NAME} already exists"
read -p "Do you want to delete and recreate it? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
log_info "Deleting existing cluster..."
kind delete cluster --name "${CLUSTER_NAME}"
else
log_info "Using existing cluster"
kubectl config use-context "kind-${CLUSTER_NAME}"
return 0
fi
fi
log_info "Creating KIND cluster ${CLUSTER_NAME}..."
cat <<EOF | kind create cluster --name "${CLUSTER_NAME}" --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${REGISTRY_PORT}"]
endpoint = ["http://${REGISTRY_NAME}:5000"]
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
- containerPort: 8080
hostPort: 8080
protocol: TCP
- containerPort: 3000
hostPort: 3000
protocol: TCP
EOF
# Connect the registry to the cluster network
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${REGISTRY_NAME}")" = 'null' ]; then
log_info "Connecting registry to cluster network..."
docker network connect "kind" "${REGISTRY_NAME}"
fi
# Document the local registry
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: local-registry-hosting
namespace: kube-public
data:
localRegistryHosting.v1: |
host: "localhost:${REGISTRY_PORT}"
help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
log_info "KIND cluster created successfully"
}
# Build Docker images
build_images() {
log_info "Building Docker images..."
log_info "Building backend..."
make build-backend
docker build -t localhost:${REGISTRY_PORT}/maptest-api:local .
docker push localhost:${REGISTRY_PORT}/maptest-api:local
log_info "Building validator..."
make build-validator
docker build -f validation/Containerfile -t localhost:${REGISTRY_PORT}/maptest-validator:local .
docker push localhost:${REGISTRY_PORT}/maptest-validator:local
log_info "Building frontend..."
docker build web -f web/Containerfile -t localhost:${REGISTRY_PORT}/maptest-frontend:local .
docker push localhost:${REGISTRY_PORT}/maptest-frontend:local
log_info "All images built and pushed to local registry"
}
# Create secrets
create_secrets() {
log_info "Creating Kubernetes secrets..."
# Create dummy secrets for local development
kubectl create secret generic cockroach-qtdb \
--from-literal=HOST=data-postgres \
--from-literal=PORT=5432 \
--from-literal=USER=postgres \
--from-literal=PASS=localpassword \
--dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic maptest-cookie \
--from-literal=api=dummy-api-key \
--dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic auth-service-secrets \
--from-literal=DISCORD_CLIENT_ID=dummy \
--from-literal=DISCORD_CLIENT_SECRET=dummy \
--from-literal=RBX_API_KEY=dummy \
--dry-run=client -o yaml | kubectl apply -f -
log_info "Secrets created"
}
# Deploy dependencies
deploy_dependencies() {
log_info "Deploying dependencies..."
# Deploy PostgreSQL (manual deployment)
log_info "Deploying PostgreSQL..."
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: data-postgres
spec:
ports:
- port: 5432
targetPort: 5432
selector:
app: data-postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: data-postgres
spec:
replicas: 1
selector:
matchLabels:
app: data-postgres
template:
metadata:
labels:
app: data-postgres
spec:
containers:
- name: postgres
image: postgres:15
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: localpassword
- name: POSTGRES_DB
value: postgres
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-storage
emptyDir: {}
EOF
# Deploy Redis (using a simple deployment)
log_info "Deploying Redis..."
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: redis-master
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:latest
ports:
- containerPort: 6379
command: ["redis-server", "--appendonly", "yes"]
EOF
# Deploy NATS
log_info "Deploying NATS..."
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: nats
spec:
ports:
- port: 4222
targetPort: 4222
selector:
app: nats
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nats
spec:
replicas: 1
selector:
matchLabels:
app: nats
template:
metadata:
labels:
app: nats
spec:
containers:
- name: nats
image: nats:latest
args: ["-js"]
ports:
- containerPort: 4222
EOF
# Deploy Auth Service (if needed)
if [ -d "${INFRA_PATH}/applications/auth-service/base" ]; then
log_info "Deploying auth-service..."
kubectl apply -k "${INFRA_PATH}/applications/auth-service/base" || log_warn "Auth service deployment failed, continuing..."
fi
# Deploy Data Service (if needed)
if [ -d "${INFRA_PATH}/applications/data-service/base" ]; then
log_info "Deploying data-service..."
kubectl apply -k "${INFRA_PATH}/applications/data-service/base" || log_warn "Data service deployment failed, continuing..."
fi
log_info "Waiting for dependencies to be ready..."
kubectl wait --for=condition=ready pod -l app=data-postgres --timeout=120s || log_warn "PostgreSQL not ready yet"
kubectl wait --for=condition=ready pod -l app=redis --timeout=60s || log_warn "Redis not ready yet"
kubectl wait --for=condition=ready pod -l app=nats --timeout=60s || log_warn "NATS not ready yet"
}
# Deploy maps-service
deploy_maps_service() {
log_info "Deploying maps-service..."
# Create a local overlay for development
local temp_dir
temp_dir=$(mktemp -d)
trap 'rm -rf "${temp_dir}"' EXIT
cp -r "${INFRA_PATH}/applications/maps-services/base" "${temp_dir}/"
# Create a custom kustomization for local development
cat > "${temp_dir}/base/kustomization.yaml" <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
service: maps-service
resources:
- api.yaml
- configmap.yaml
- frontend.yaml
- validator.yaml
images:
- name: registry.itzana.me/strafesnet/maptest-api
newName: localhost:${REGISTRY_PORT}/maptest-api
newTag: local
- name: registry.itzana.me/strafesnet/maptest-frontend
newName: localhost:${REGISTRY_PORT}/maptest-frontend
newTag: local
- name: registry.itzana.me/strafesnet/maptest-validator
newName: localhost:${REGISTRY_PORT}/maptest-validator
newTag: local
patches:
- target:
kind: Deployment
patch: |-
- op: remove
path: /spec/template/spec/imagePullSecrets
EOF
kubectl apply -k "${temp_dir}/base" || {
log_error "Failed to deploy maps-service"
return 1
}
log_info "Waiting for maps-service to be ready..."
kubectl wait --for=condition=ready pod -l app=maptest-api --timeout=120s || log_warn "API not ready yet"
kubectl wait --for=condition=ready pod -l app=maptest-frontend --timeout=120s || log_warn "Frontend not ready yet"
kubectl wait --for=condition=ready pod -l app=maptest-validator --timeout=120s || log_warn "Validator not ready yet"
}
# Port forwarding
setup_port_forwarding() {
log_info "Setting up port forwarding..."
log_info "Port forwarding for API (8080)..."
kubectl port-forward svc/maptest-api 8080:8080 &
log_info "Port forwarding for Frontend (3000)..."
kubectl port-forward svc/maptest-frontend 3000:3000 &
log_info "Port forwarding setup complete"
log_info "You may need to manually manage these port-forwards or run them in separate terminals"
}
# Display cluster info
display_info() {
log_info "======================================"
log_info "KIND Cluster Setup Complete!"
log_info "======================================"
echo
log_info "Cluster name: ${CLUSTER_NAME}"
log_info "Local registry: localhost:${REGISTRY_PORT}"
echo
log_info "Services:"
kubectl get svc
echo
log_info "Pods:"
kubectl get pods
echo
log_info "Access your application:"
log_info " - Frontend: http://localhost:3000"
log_info " - API: http://localhost:8080"
echo
log_info "Useful commands:"
log_info " - View logs: kubectl logs -f <pod-name>"
log_info " - Get pods: kubectl get pods"
log_info " - Delete cluster: kind delete cluster --name ${CLUSTER_NAME}"
log_info " - Rebuild and redeploy: ./kind-setup.sh --rebuild"
}
# Cleanup function
cleanup() {
log_info "Cleaning up..."
kind delete cluster --name "${CLUSTER_NAME}"
docker stop "${REGISTRY_NAME}" >/dev/null 2>&1 || true
docker rm "${REGISTRY_NAME}" >/dev/null 2>&1 || true
log_info "Cleanup complete"
}
# Main function
main() {
local rebuild=false
local cleanup_only=false
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--rebuild)
rebuild=true
shift
;;
--cleanup)
cleanup_only=true
shift
;;
--infra-path)
INFRA_PATH="$2"
shift 2
;;
--help)
echo "Usage: $0 [OPTIONS]"
echo "Options:"
echo " --rebuild Rebuild and push Docker images"
echo " --cleanup Delete the cluster and registry"
echo " --infra-path PATH Path to infra directory (default: ~/Documents/Projects/infra)"
echo " --help Show this help message"
exit 0
;;
*)
log_error "Unknown option: $1"
exit 1
;;
esac
done
if [ "$cleanup_only" = true ]; then
cleanup
exit 0
fi
# Validate infra path
if [ ! -d "$INFRA_PATH" ]; then
log_error "Infra path does not exist: $INFRA_PATH"
log_error "Please provide a valid path using --infra-path"
exit 1
fi
if [ ! -d "$INFRA_PATH/applications/maps-services" ]; then
log_error "maps-services not found in infra path: $INFRA_PATH/applications/maps-services"
exit 1
fi
log_info "Using infra path: $INFRA_PATH"
check_dependencies
create_registry
create_cluster
if [ "$rebuild" = true ]; then
build_images
fi
create_secrets
deploy_dependencies
deploy_maps_service
display_info
log_info "Setup complete! Press Ctrl+C to stop port forwarding and exit."
log_warn "Note: You may want to set up port-forwarding manually in separate terminals:"
log_info " kubectl port-forward svc/maptest-api 8080:8080"
log_info " kubectl port-forward svc/maptest-frontend 3000:3000"
}
# Run main function
main "$@"

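The flag-parsing loop in `main()` above can be exercised on its own. This is a minimal sketch reusing the same flag names; `set --` simulates a command line so the run is deterministic, and the final `echo` summary is illustrative, not part of the script.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate a command line for the demo (not part of the original script).
set -- --rebuild --infra-path /tmp/infra

rebuild=false
cleanup_only=false
INFRA_PATH="${HOME}/Documents/Projects/infra"

while [[ $# -gt 0 ]]; do
  case "$1" in
    --rebuild)    rebuild=true;      shift ;;
    --cleanup)    cleanup_only=true; shift ;;
    --infra-path) INFRA_PATH="$2";   shift 2 ;;
    *) echo "Unknown option: $1" >&2; exit 1 ;;
  esac
done

echo "rebuild=${rebuild} cleanup=${cleanup_only} infra=${INFRA_PATH}"
```

Note the `shift 2` for flags that consume a value, versus plain `shift` for booleans; this is the same pattern the script relies on.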
@@ -6,6 +6,8 @@ info:
servers:
- url: https://submissions.strafes.net/v1
tags:
- name: AOR
description: AOR (Accept or Reject) event operations
- name: Mapfixes
description: Mapfix operations
- name: Maps
@@ -14,15 +16,41 @@ tags:
description: Long-running operations
- name: Session
description: Session queries
- name: Stats
description: Statistics queries
- name: Submissions
description: Submission operations
- name: Scripts
description: Script operations
- name: ScriptPolicy
description: Script policy operations
- name: Thumbnails
description: Thumbnail operations
- name: Users
description: User operations
security:
- cookieAuth: []
paths:
/stats:
get:
summary: Get aggregate statistics
operationId: getStats
tags:
- Stats
security: []
responses:
"200":
description: Successful response
content:
application/json:
schema:
$ref: "#/components/schemas/Stats"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/session/user:
get:
summary: Get information about the currently logged in user
@@ -244,6 +272,12 @@ paths:
type: integer
format: int64
minimum: 0
- name: AssetVersion
in: query
schema:
type: integer
format: int64
minimum: 0
- name: TargetAssetID
in: query
schema:
@@ -415,6 +449,30 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/Error"
/mapfixes/{MapfixID}/description:
patch:
summary: Update description (submitter only)
operationId: updateMapfixDescription
tags:
- Mapfixes
parameters:
- $ref: '#/components/parameters/MapfixID'
requestBody:
required: true
content:
text/plain:
schema:
type: string
maxLength: 256
responses:
"204":
description: Successful response
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/mapfixes/{MapfixID}/completed:
post:
summary: Called by maptest when a player completes the map
@@ -587,7 +645,7 @@ paths:
$ref: "#/components/schemas/Error"
/mapfixes/{MapfixID}/status/trigger-upload:
post:
summary: Role Admin changes status from Validated -> Uploading
summary: Role MapfixUpload changes status from Validated -> Uploading
operationId: actionMapfixTriggerUpload
tags:
- Mapfixes
@@ -604,7 +662,7 @@ paths:
$ref: "#/components/schemas/Error"
/mapfixes/{MapfixID}/status/reset-uploading:
post:
summary: Role Admin manually resets uploading softlock and changes status from Uploading -> Validated
summary: Role MapfixUpload manually resets uploading softlock and changes status from Uploading -> Validated
operationId: actionMapfixValidated
tags:
- Mapfixes
@@ -619,6 +677,40 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/Error"
/mapfixes/{MapfixID}/status/trigger-release:
post:
summary: Role MapfixUpload changes status from Uploaded -> Releasing
operationId: actionMapfixTriggerRelease
tags:
- Mapfixes
parameters:
- $ref: '#/components/parameters/MapfixID'
responses:
"204":
description: Successful response
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/mapfixes/{MapfixID}/status/reset-releasing:
post:
summary: Role MapfixUpload manually resets releasing softlock and changes status from Releasing -> Uploaded
operationId: actionMapfixUploaded
tags:
- Mapfixes
parameters:
- $ref: '#/components/parameters/MapfixID'
responses:
"204":
description: Successful response
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/operations/{OperationID}:
get:
summary: Retrieve operation with ID
@@ -698,6 +790,12 @@ paths:
type: integer
format: int64
minimum: 0
- name: AssetVersion
in: query
schema:
type: integer
format: int64
minimum: 0
- name: UploadedAssetID
in: query
schema:
@@ -864,6 +962,89 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/Error"
/submissions/{SubmissionID}/reviews:
get:
summary: Get all reviews for a submission
operationId: listSubmissionReviews
tags:
- Submissions
parameters:
- $ref: '#/components/parameters/SubmissionID'
responses:
"200":
description: Successful response
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/SubmissionReview"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
post:
summary: Create a review for a submission
operationId: createSubmissionReview
tags:
- Submissions
parameters:
- $ref: '#/components/parameters/SubmissionID'
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/SubmissionReviewCreate"
responses:
"200":
description: Successful response
content:
application/json:
schema:
$ref: "#/components/schemas/SubmissionReview"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/submissions/{SubmissionID}/reviews/{ReviewID}:
patch:
summary: Update an existing review
operationId: updateSubmissionReview
tags:
- Submissions
parameters:
- $ref: '#/components/parameters/SubmissionID'
- name: ReviewID
in: path
required: true
schema:
type: integer
format: int64
minimum: 0
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/SubmissionReviewCreate"
responses:
"200":
description: Successful response
content:
application/json:
schema:
$ref: "#/components/schemas/SubmissionReview"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/submissions/{SubmissionID}/model:
post:
summary: Update model following role restrictions
@@ -1067,7 +1248,7 @@ paths:
$ref: "#/components/schemas/Error"
/submissions/{SubmissionID}/status/trigger-upload:
post:
summary: Role Admin changes status from Validated -> Uploading
summary: Role SubmissionUpload changes status from Validated -> Uploading
operationId: actionSubmissionTriggerUpload
tags:
- Submissions
@@ -1084,7 +1265,7 @@ paths:
$ref: "#/components/schemas/Error"
/submissions/{SubmissionID}/status/reset-uploading:
post:
summary: Role Admin manually resets uploading softlock and changes status from Uploading -> Validated
summary: Role SubmissionUpload manually resets uploading softlock and changes status from Uploading -> Validated
operationId: actionSubmissionValidated
tags:
- Submissions
@@ -1101,7 +1282,7 @@ paths:
$ref: "#/components/schemas/Error"
/release-submissions:
post:
summary: Release a set of uploaded maps
summary: Release a set of uploaded maps. Role SubmissionRelease
operationId: releaseSubmissions
tags:
- Submissions
@@ -1118,6 +1299,113 @@ paths:
responses:
"201":
description: Successful response
content:
application/json:
schema:
$ref: "#/components/schemas/OperationID"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/aor-events:
get:
summary: Get list of AOR events
operationId: listAOREvents
tags:
- AOR
security: []
parameters:
- $ref: "#/components/parameters/Page"
- $ref: "#/components/parameters/Limit"
responses:
"200":
description: Successful response
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/AOREvent"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/aor-events/active:
get:
summary: Get the currently active AOR event
operationId: getActiveAOREvent
tags:
- AOR
security: []
responses:
"200":
description: Successful response
content:
application/json:
schema:
$ref: "#/components/schemas/AOREvent"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/aor-events/{AOREventID}:
get:
summary: Get a specific AOR event
operationId: getAOREvent
tags:
- AOR
security: []
parameters:
- name: AOREventID
in: path
required: true
schema:
type: integer
format: int64
minimum: 1
responses:
"200":
description: Successful response
content:
application/json:
schema:
$ref: "#/components/schemas/AOREvent"
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/aor-events/{AOREventID}/submissions:
get:
summary: Get all submissions for a specific AOR event
operationId: getAOREventSubmissions
tags:
- AOR
security: []
parameters:
- name: AOREventID
in: path
required: true
schema:
type: integer
format: int64
minimum: 1
responses:
"200":
description: Successful response
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/Submission"
default:
description: General Error
content:
@@ -1388,6 +1676,222 @@ paths:
application/json:
schema:
$ref: "#/components/schemas/Error"
/thumbnails/assets:
post:
summary: Batch fetch asset thumbnails
operationId: batchAssetThumbnails
tags:
- Thumbnails
security: []
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- assetIds
properties:
assetIds:
type: array
items:
type: integer
format: uint64
maxItems: 100
description: Array of asset IDs (max 100)
size:
type: string
enum:
- "150x150"
- "420x420"
- "768x432"
default: "420x420"
description: Thumbnail size
responses:
"200":
description: Successful response
content:
application/json:
schema:
type: object
properties:
thumbnails:
type: object
additionalProperties:
type: string
description: Map of asset ID to thumbnail URL
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/thumbnails/asset/{AssetID}:
get:
summary: Get single asset thumbnail
operationId: getAssetThumbnail
tags:
- Thumbnails
security: []
parameters:
- name: AssetID
in: path
required: true
schema:
type: integer
format: uint64
- name: size
in: query
schema:
type: string
enum:
- "150x150"
- "420x420"
- "768x432"
default: "420x420"
responses:
"302":
description: Redirect to thumbnail URL
headers:
Location:
description: URL to redirect to
schema:
type: string
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/thumbnails/users:
post:
summary: Batch fetch user avatar thumbnails
operationId: batchUserThumbnails
tags:
- Thumbnails
security: []
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- userIds
properties:
userIds:
type: array
items:
type: integer
format: uint64
maxItems: 100
description: Array of user IDs (max 100)
size:
type: string
enum:
- "150x150"
- "420x420"
- "768x432"
default: "150x150"
description: Thumbnail size
responses:
"200":
description: Successful response
content:
application/json:
schema:
type: object
properties:
thumbnails:
type: object
additionalProperties:
type: string
description: Map of user ID to thumbnail URL
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/thumbnails/user/{UserID}:
get:
summary: Get single user avatar thumbnail
operationId: getUserThumbnail
tags:
- Thumbnails
security: []
parameters:
- name: UserID
in: path
required: true
schema:
type: integer
format: uint64
- name: size
in: query
schema:
type: string
enum:
- "150x150"
- "420x420"
- "768x432"
default: "150x150"
responses:
"302":
description: Redirect to thumbnail URL
headers:
Location:
description: URL to redirect to
schema:
type: string
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
/usernames:
post:
summary: Batch fetch usernames
operationId: batchUsernames
tags:
- Users
security: []
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- userIds
properties:
userIds:
type: array
items:
type: integer
format: uint64
maxItems: 100
description: Array of user IDs (max 100)
responses:
"200":
description: Successful response
content:
application/json:
schema:
type: object
properties:
usernames:
type: object
additionalProperties:
type: string
description: Map of user ID to username
default:
description: General Error
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
components:
securitySchemes:
cookieAuth:
@@ -1467,6 +1971,56 @@ components:
minimum: 0
maximum: 100
schemas:
AOREvent:
type: object
required:
- ID
- StartDate
- FreezeDate
- SelectionDate
- DecisionDate
- Status
- CreatedAt
- UpdatedAt
properties:
ID:
type: integer
format: int64
StartDate:
type: integer
format: int64
description: Unix timestamp for the 1st day of AOR month
FreezeDate:
type: integer
format: int64
description: Unix timestamp when submissions are frozen
SelectionDate:
type: integer
format: int64
description: Unix timestamp when automatic selection occurs (end of week 1)
DecisionDate:
type: integer
format: int64
description: Unix timestamp when final accept/reject decisions are made (end of month)
Status:
type: integer
format: int32
minimum: 0
maximum: 5
description: >
AOR Event Status:
* `0` - Scheduled
* `1` - Open
* `2` - Frozen
* `3` - Selected
* `4` - Completed
* `5` - Closed
CreatedAt:
type: integer
format: int64
UpdatedAt:
type: integer
format: int64
AuditEvent:
type: object
required:
@@ -1624,6 +2178,8 @@ components:
- Submitter
- AssetID
- AssetVersion
# - ValidatedAssetID
# - ValidatedAssetVersion
- Completed
- TargetAssetID
- StatusID
@@ -1664,6 +2220,14 @@ components:
type: integer
format: int64
minimum: 0
ValidatedAssetID:
type: integer
format: int64
minimum: 0
ValidatedAssetVersion:
type: integer
format: int64
minimum: 0
Completed:
type: boolean
TargetAssetID:
@@ -1902,7 +2466,7 @@ components:
properties:
Name:
type: string
maxLength: 128
maxLength: 256
Source:
type: string
maxLength: 1048576
@@ -2001,6 +2565,102 @@ components:
type: integer
format: int32
minimum: 0
Stats:
description: Aggregate statistics for submissions and mapfixes
type: object
properties:
TotalSubmissions:
type: integer
format: int64
minimum: 0
description: Total number of submissions
TotalMapfixes:
type: integer
format: int64
minimum: 0
description: Total number of mapfixes
ReleasedSubmissions:
type: integer
format: int64
minimum: 0
description: Number of released submissions
ReleasedMapfixes:
type: integer
format: int64
minimum: 0
description: Number of released mapfixes
SubmittedSubmissions:
type: integer
format: int64
minimum: 0
description: Number of submissions under review
SubmittedMapfixes:
type: integer
format: int64
minimum: 0
description: Number of mapfixes under review
required:
- TotalSubmissions
- TotalMapfixes
- ReleasedSubmissions
- ReleasedMapfixes
- SubmittedSubmissions
- SubmittedMapfixes
SubmissionReview:
required:
- ID
- SubmissionID
- ReviewerID
- Recommend
- Description
- Outdated
- CreatedAt
- UpdatedAt
type: object
properties:
ID:
type: integer
format: int64
minimum: 0
SubmissionID:
type: integer
format: int64
minimum: 0
ReviewerID:
type: integer
format: int64
minimum: 0
Recommend:
type: boolean
description: Whether the reviewer recommends accepting the submission
Description:
type: string
maxLength: 2048
description: Text description of the review reasoning
Outdated:
type: boolean
description: Flag indicating if the review is outdated due to submission changes
CreatedAt:
type: integer
format: int64
minimum: 0
UpdatedAt:
type: integer
format: int64
minimum: 0
SubmissionReviewCreate:
required:
- Recommend
- Description
type: object
properties:
Recommend:
type: boolean
description: Whether the reviewer recommends accepting the submission
Description:
type: string
maxLength: 2048
description: Text description of the review reasoning
Error:
description: Represents error object
type: object

@@ -5,14 +5,14 @@ package api
import (
"net/http"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/trace"
ht "github.com/ogen-go/ogen/http"
"github.com/ogen-go/ogen/middleware"
"github.com/ogen-go/ogen/ogenerrors"
"github.com/ogen-go/ogen/otelogen"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/metric"
"go.opentelemetry.io/otel/trace"
)
var (
@@ -32,6 +32,7 @@ type otelConfig struct {
Tracer trace.Tracer
MeterProvider metric.MeterProvider
Meter metric.Meter
Attributes []attribute.KeyValue
}
func (cfg *otelConfig) initOTEL() {
@@ -215,6 +216,13 @@ func WithMeterProvider(provider metric.MeterProvider) Option {
})
}
// WithAttributes specifies default otel attributes.
func WithAttributes(attributes ...attribute.KeyValue) Option {
return otelOptionFunc(func(cfg *otelConfig) {
cfg.Attributes = attributes
})
}
// WithClient specifies http client to use.
func WithClient(client ht.Client) ClientOption {
return optionFunc[clientConfig](func(cfg *clientConfig) {

File diff suppressed because it is too large

@@ -0,0 +1,19 @@
// Code generated by ogen, DO NOT EDIT.
package api
// setDefaults set default value of fields.
func (s *BatchAssetThumbnailsReq) setDefaults() {
{
val := BatchAssetThumbnailsReqSize("420x420")
s.Size.SetTo(val)
}
}
// setDefaults set default value of fields.
func (s *BatchUserThumbnailsReq) setDefaults() {
{
val := BatchUserThumbnailsReqSize("150x150")
s.Size.SetTo(val)
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -12,10 +12,12 @@ const (
ActionMapfixResetSubmittingOperation OperationName = "ActionMapfixResetSubmitting"
ActionMapfixRetryValidateOperation OperationName = "ActionMapfixRetryValidate"
ActionMapfixRevokeOperation OperationName = "ActionMapfixRevoke"
ActionMapfixTriggerReleaseOperation OperationName = "ActionMapfixTriggerRelease"
ActionMapfixTriggerSubmitOperation OperationName = "ActionMapfixTriggerSubmit"
ActionMapfixTriggerSubmitUncheckedOperation OperationName = "ActionMapfixTriggerSubmitUnchecked"
ActionMapfixTriggerUploadOperation OperationName = "ActionMapfixTriggerUpload"
ActionMapfixTriggerValidateOperation OperationName = "ActionMapfixTriggerValidate"
ActionMapfixUploadedOperation OperationName = "ActionMapfixUploaded"
ActionMapfixValidatedOperation OperationName = "ActionMapfixValidated"
ActionSubmissionAcceptedOperation OperationName = "ActionSubmissionAccepted"
ActionSubmissionRejectOperation OperationName = "ActionSubmissionReject"
@@ -28,6 +30,9 @@ const (
ActionSubmissionTriggerUploadOperation OperationName = "ActionSubmissionTriggerUpload"
ActionSubmissionTriggerValidateOperation OperationName = "ActionSubmissionTriggerValidate"
ActionSubmissionValidatedOperation OperationName = "ActionSubmissionValidated"
BatchAssetThumbnailsOperation OperationName = "BatchAssetThumbnails"
BatchUserThumbnailsOperation OperationName = "BatchUserThumbnails"
BatchUsernamesOperation OperationName = "BatchUsernames"
CreateMapfixOperation OperationName = "CreateMapfix"
CreateMapfixAuditCommentOperation OperationName = "CreateMapfixAuditComment"
CreateScriptOperation OperationName = "CreateScript"
@@ -35,21 +40,30 @@ const (
CreateSubmissionOperation OperationName = "CreateSubmission"
CreateSubmissionAdminOperation OperationName = "CreateSubmissionAdmin"
CreateSubmissionAuditCommentOperation OperationName = "CreateSubmissionAuditComment"
CreateSubmissionReviewOperation OperationName = "CreateSubmissionReview"
DeleteScriptOperation OperationName = "DeleteScript"
DeleteScriptPolicyOperation OperationName = "DeleteScriptPolicy"
DownloadMapAssetOperation OperationName = "DownloadMapAsset"
GetAOREventOperation OperationName = "GetAOREvent"
GetAOREventSubmissionsOperation OperationName = "GetAOREventSubmissions"
GetActiveAOREventOperation OperationName = "GetActiveAOREvent"
GetAssetThumbnailOperation OperationName = "GetAssetThumbnail"
GetMapOperation OperationName = "GetMap"
GetMapfixOperation OperationName = "GetMapfix"
GetOperationOperation OperationName = "GetOperation"
GetScriptOperation OperationName = "GetScript"
GetScriptPolicyOperation OperationName = "GetScriptPolicy"
GetStatsOperation OperationName = "GetStats"
GetSubmissionOperation OperationName = "GetSubmission"
GetUserThumbnailOperation OperationName = "GetUserThumbnail"
ListAOREventsOperation OperationName = "ListAOREvents"
ListMapfixAuditEventsOperation OperationName = "ListMapfixAuditEvents"
ListMapfixesOperation OperationName = "ListMapfixes"
ListMapsOperation OperationName = "ListMaps"
ListScriptPolicyOperation OperationName = "ListScriptPolicy"
ListScriptsOperation OperationName = "ListScripts"
ListSubmissionAuditEventsOperation OperationName = "ListSubmissionAuditEvents"
ListSubmissionReviewsOperation OperationName = "ListSubmissionReviews"
ListSubmissionsOperation OperationName = "ListSubmissions"
ReleaseSubmissionsOperation OperationName = "ReleaseSubmissions"
SessionRolesOperation OperationName = "SessionRoles"
@@ -57,8 +71,10 @@ const (
SessionValidateOperation OperationName = "SessionValidate"
SetMapfixCompletedOperation OperationName = "SetMapfixCompleted"
SetSubmissionCompletedOperation OperationName = "SetSubmissionCompleted"
UpdateMapfixDescriptionOperation OperationName = "UpdateMapfixDescription"
UpdateMapfixModelOperation OperationName = "UpdateMapfixModel"
UpdateScriptOperation OperationName = "UpdateScript"
UpdateScriptPolicyOperation OperationName = "UpdateScriptPolicy"
UpdateSubmissionModelOperation OperationName = "UpdateSubmissionModel"
UpdateSubmissionReviewOperation OperationName = "UpdateSubmissionReview"
)

File diff suppressed because it is too large

@@ -3,6 +3,7 @@
package api
import (
"bytes"
"fmt"
"io"
"mime"
@@ -10,13 +11,13 @@ import (
"github.com/go-faster/errors"
"github.com/go-faster/jx"
"github.com/ogen-go/ogen/ogenerrors"
"github.com/ogen-go/ogen/validate"
)
func (s *Server) decodeCreateMapfixRequest(r *http.Request) (
req *MapfixTriggerCreate,
func (s *Server) decodeBatchAssetThumbnailsRequest(r *http.Request) (
req *BatchAssetThumbnailsReq,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -37,22 +38,266 @@ func (s *Server) decodeCreateMapfixRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request BatchAssetThumbnailsReq
if err := func() error {
if err := request.Decode(d); err != nil {
return err
}
if err := d.Skip(); err != io.EOF {
return errors.New("unexpected trailing data")
}
return nil
}(); err != nil {
err = &ogenerrors.DecodeBodyError{
ContentType: ct,
Body: buf,
Err: err,
}
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
return err
}
return nil
}(); err != nil {
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, rawBody, close, nil
default:
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeBatchUserThumbnailsRequest(r *http.Request) (
req *BatchUserThumbnailsReq,
rawBody []byte,
close func() error,
rerr error,
) {
var closers []func() error
close = func() error {
var merr error
// Close in reverse order, to match defer behavior.
for i := len(closers) - 1; i >= 0; i-- {
c := closers[i]
merr = errors.Join(merr, c())
}
return merr
}
defer func() {
if rerr != nil {
rerr = errors.Join(rerr, close())
}
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request BatchUserThumbnailsReq
if err := func() error {
if err := request.Decode(d); err != nil {
return err
}
if err := d.Skip(); err != io.EOF {
return errors.New("unexpected trailing data")
}
return nil
}(); err != nil {
err = &ogenerrors.DecodeBodyError{
ContentType: ct,
Body: buf,
Err: err,
}
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
return err
}
return nil
}(); err != nil {
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, rawBody, close, nil
default:
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeBatchUsernamesRequest(r *http.Request) (
req *BatchUsernamesReq,
rawBody []byte,
close func() error,
rerr error,
) {
var closers []func() error
close = func() error {
var merr error
// Close in reverse order, to match defer behavior.
for i := len(closers) - 1; i >= 0; i-- {
c := closers[i]
merr = errors.Join(merr, c())
}
return merr
}
defer func() {
if rerr != nil {
rerr = errors.Join(rerr, close())
}
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request BatchUsernamesReq
if err := func() error {
if err := request.Decode(d); err != nil {
return err
}
if err := d.Skip(); err != io.EOF {
return errors.New("unexpected trailing data")
}
return nil
}(); err != nil {
err = &ogenerrors.DecodeBodyError{
ContentType: ct,
Body: buf,
Err: err,
}
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
return err
}
return nil
}(); err != nil {
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, rawBody, close, nil
default:
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateMapfixRequest(r *http.Request) (
req *MapfixTriggerCreate,
rawBody []byte,
close func() error,
rerr error,
) {
var closers []func() error
close = func() error {
var merr error
// Close in reverse order, to match defer behavior.
for i := len(closers) - 1; i >= 0; i-- {
c := closers[i]
merr = errors.Join(merr, c())
}
return merr
}
defer func() {
if rerr != nil {
rerr = errors.Join(rerr, close())
}
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request MapfixTriggerCreate
@@ -70,7 +315,7 @@ func (s *Server) decodeCreateMapfixRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -78,16 +323,17 @@ func (s *Server) decodeCreateMapfixRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateMapfixAuditCommentRequest(r *http.Request) (
req CreateMapfixAuditCommentReq,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -108,20 +354,21 @@ func (s *Server) decodeCreateMapfixAuditCommentRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "text/plain":
reader := r.Body
request := CreateMapfixAuditCommentReq{Data: reader}
return request, close, nil
return request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateScriptRequest(r *http.Request) (
req *ScriptCreate,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -142,22 +389,29 @@ func (s *Server) decodeCreateScriptRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request ScriptCreate
@@ -175,7 +429,7 @@ func (s *Server) decodeCreateScriptRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -183,16 +437,17 @@ func (s *Server) decodeCreateScriptRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateScriptPolicyRequest(r *http.Request) (
req *ScriptPolicyCreate,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -213,22 +468,29 @@ func (s *Server) decodeCreateScriptPolicyRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request ScriptPolicyCreate
@@ -246,7 +508,7 @@ func (s *Server) decodeCreateScriptPolicyRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -254,16 +516,17 @@ func (s *Server) decodeCreateScriptPolicyRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateSubmissionRequest(r *http.Request) (
req *SubmissionTriggerCreate,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -284,22 +547,29 @@ func (s *Server) decodeCreateSubmissionRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request SubmissionTriggerCreate
@@ -317,7 +587,7 @@ func (s *Server) decodeCreateSubmissionRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -325,16 +595,17 @@ func (s *Server) decodeCreateSubmissionRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateSubmissionAdminRequest(r *http.Request) (
req *SubmissionTriggerCreate,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -355,22 +626,29 @@ func (s *Server) decodeCreateSubmissionAdminRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request SubmissionTriggerCreate
@@ -388,7 +666,7 @@ func (s *Server) decodeCreateSubmissionAdminRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -396,16 +674,17 @@ func (s *Server) decodeCreateSubmissionAdminRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeCreateSubmissionAuditCommentRequest(r *http.Request) (
req CreateSubmissionAuditCommentReq,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -426,20 +705,21 @@ func (s *Server) decodeCreateSubmissionAuditCommentRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "text/plain":
reader := r.Body
request := CreateSubmissionAuditCommentReq{Data: reader}
return request, close, nil
return request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeReleaseSubmissionsRequest(r *http.Request) (
req []ReleaseInfo,
func (s *Server) decodeCreateSubmissionReviewRequest(r *http.Request) (
req *SubmissionReviewCreate,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -460,22 +740,108 @@ func (s *Server) decodeReleaseSubmissionsRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request SubmissionReviewCreate
if err := func() error {
if err := request.Decode(d); err != nil {
return err
}
if err := d.Skip(); err != io.EOF {
return errors.New("unexpected trailing data")
}
return nil
}(); err != nil {
err = &ogenerrors.DecodeBodyError{
ContentType: ct,
Body: buf,
Err: err,
}
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
return err
}
return nil
}(); err != nil {
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, rawBody, close, nil
default:
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeReleaseSubmissionsRequest(r *http.Request) (
req []ReleaseInfo,
rawBody []byte,
close func() error,
rerr error,
) {
var closers []func() error
close = func() error {
var merr error
// Close in reverse order, to match defer behavior.
for i := len(closers) - 1; i >= 0; i-- {
c := closers[i]
merr = errors.Join(merr, c())
}
return merr
}
defer func() {
if rerr != nil {
rerr = errors.Join(rerr, close())
}
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request []ReleaseInfo
@@ -501,7 +867,7 @@ func (s *Server) decodeReleaseSubmissionsRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if request == nil {
@@ -534,16 +900,17 @@ func (s *Server) decodeReleaseSubmissionsRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return request, close, nil
return request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeUpdateScriptRequest(r *http.Request) (
req *ScriptUpdate,
func (s *Server) decodeUpdateMapfixDescriptionRequest(r *http.Request) (
req UpdateMapfixDescriptionReq,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -564,22 +931,64 @@ func (s *Server) decodeUpdateScriptRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "text/plain":
reader := r.Body
request := UpdateMapfixDescriptionReq{Data: reader}
return request, rawBody, close, nil
default:
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeUpdateScriptRequest(r *http.Request) (
req *ScriptUpdate,
rawBody []byte,
close func() error,
rerr error,
) {
var closers []func() error
close = func() error {
var merr error
// Close in reverse order, to match defer behavior.
for i := len(closers) - 1; i >= 0; i-- {
c := closers[i]
merr = errors.Join(merr, c())
}
return merr
}
defer func() {
if rerr != nil {
rerr = errors.Join(rerr, close())
}
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request ScriptUpdate
@@ -597,7 +1006,7 @@ func (s *Server) decodeUpdateScriptRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -605,16 +1014,17 @@ func (s *Server) decodeUpdateScriptRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeUpdateScriptPolicyRequest(r *http.Request) (
req *ScriptPolicyUpdate,
rawBody []byte,
close func() error,
rerr error,
) {
@@ -635,22 +1045,29 @@ func (s *Server) decodeUpdateScriptPolicyRequest(r *http.Request) (
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, close, errors.Wrap(err, "parse media type")
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, close, err
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, close, validate.ErrBodyRequired
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request ScriptPolicyUpdate
@@ -668,7 +1085,7 @@ func (s *Server) decodeUpdateScriptPolicyRequest(r *http.Request) (
Body: buf,
Err: err,
}
return req, close, err
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
@@ -676,10 +1093,89 @@ func (s *Server) decodeUpdateScriptPolicyRequest(r *http.Request) (
}
return nil
}(); err != nil {
return req, close, errors.Wrap(err, "validate")
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, close, nil
return &request, rawBody, close, nil
default:
return req, close, validate.InvalidContentType(ct)
return req, rawBody, close, validate.InvalidContentType(ct)
}
}
func (s *Server) decodeUpdateSubmissionReviewRequest(r *http.Request) (
req *SubmissionReviewCreate,
rawBody []byte,
close func() error,
rerr error,
) {
var closers []func() error
close = func() error {
var merr error
// Close in reverse order, to match defer behavior.
for i := len(closers) - 1; i >= 0; i-- {
c := closers[i]
merr = errors.Join(merr, c())
}
return merr
}
defer func() {
if rerr != nil {
rerr = errors.Join(rerr, close())
}
}()
ct, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
return req, rawBody, close, errors.Wrap(err, "parse media type")
}
switch {
case ct == "application/json":
if r.ContentLength == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if err != nil {
return req, rawBody, close, err
}
// Reset the body to allow for downstream reading.
r.Body = io.NopCloser(bytes.NewBuffer(buf))
if len(buf) == 0 {
return req, rawBody, close, validate.ErrBodyRequired
}
rawBody = append(rawBody, buf...)
d := jx.DecodeBytes(buf)
var request SubmissionReviewCreate
if err := func() error {
if err := request.Decode(d); err != nil {
return err
}
if err := d.Skip(); err != io.EOF {
return errors.New("unexpected trailing data")
}
return nil
}(); err != nil {
err = &ogenerrors.DecodeBodyError{
ContentType: ct,
Body: buf,
Err: err,
}
return req, rawBody, close, err
}
if err := func() error {
if err := request.Validate(); err != nil {
return err
}
return nil
}(); err != nil {
return req, rawBody, close, errors.Wrap(err, "validate")
}
return &request, rawBody, close, nil
default:
return req, rawBody, close, validate.InvalidContentType(ct)
}
}


@@ -7,10 +7,51 @@ import (
"net/http"
"github.com/go-faster/jx"
ht "github.com/ogen-go/ogen/http"
)
func encodeBatchAssetThumbnailsRequest(
req *BatchAssetThumbnailsReq,
r *http.Request,
) error {
const contentType = "application/json"
e := new(jx.Encoder)
{
req.Encode(e)
}
encoded := e.Bytes()
ht.SetBody(r, bytes.NewReader(encoded), contentType)
return nil
}
func encodeBatchUserThumbnailsRequest(
req *BatchUserThumbnailsReq,
r *http.Request,
) error {
const contentType = "application/json"
e := new(jx.Encoder)
{
req.Encode(e)
}
encoded := e.Bytes()
ht.SetBody(r, bytes.NewReader(encoded), contentType)
return nil
}
func encodeBatchUsernamesRequest(
req *BatchUsernamesReq,
r *http.Request,
) error {
const contentType = "application/json"
e := new(jx.Encoder)
{
req.Encode(e)
}
encoded := e.Bytes()
ht.SetBody(r, bytes.NewReader(encoded), contentType)
return nil
}
func encodeCreateMapfixRequest(
req *MapfixTriggerCreate,
r *http.Request,
@@ -101,6 +142,20 @@ func encodeCreateSubmissionAuditCommentRequest(
return nil
}
func encodeCreateSubmissionReviewRequest(
req *SubmissionReviewCreate,
r *http.Request,
) error {
const contentType = "application/json"
e := new(jx.Encoder)
{
req.Encode(e)
}
encoded := e.Bytes()
ht.SetBody(r, bytes.NewReader(encoded), contentType)
return nil
}
func encodeReleaseSubmissionsRequest(
req []ReleaseInfo,
r *http.Request,
@@ -119,6 +174,16 @@ func encodeReleaseSubmissionsRequest(
return nil
}
func encodeUpdateMapfixDescriptionRequest(
req UpdateMapfixDescriptionReq,
r *http.Request,
) error {
const contentType = "text/plain"
body := req
ht.SetBody(r, body, contentType)
return nil
}
func encodeUpdateScriptRequest(
req *ScriptUpdate,
r *http.Request,
@@ -146,3 +211,17 @@ func encodeUpdateScriptPolicyRequest(
ht.SetBody(r, bytes.NewReader(encoded), contentType)
return nil
}
func encodeUpdateSubmissionReviewRequest(
req *SubmissionReviewCreate,
r *http.Request,
) error {
const contentType = "application/json"
e := new(jx.Encoder)
{
req.Encode(e)
}
encoded := e.Bytes()
ht.SetBody(r, bytes.NewReader(encoded), contentType)
return nil
}

File diff suppressed because it is too large


@@ -8,10 +8,11 @@ import (
"github.com/go-faster/errors"
"github.com/go-faster/jx"
"github.com/ogen-go/ogen/conv"
ht "github.com/ogen-go/ogen/http"
"github.com/ogen-go/ogen/uri"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/trace"
ht "github.com/ogen-go/ogen/http"
)
func encodeActionMapfixAcceptedResponse(response *ActionMapfixAcceptedNoContent, w http.ResponseWriter, span trace.Span) error {
@@ -56,6 +57,13 @@ func encodeActionMapfixRevokeResponse(response *ActionMapfixRevokeNoContent, w h
return nil
}
func encodeActionMapfixTriggerReleaseResponse(response *ActionMapfixTriggerReleaseNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
return nil
}
func encodeActionMapfixTriggerSubmitResponse(response *ActionMapfixTriggerSubmitNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
@@ -84,6 +92,13 @@ func encodeActionMapfixTriggerValidateResponse(response *ActionMapfixTriggerVali
return nil
}
func encodeActionMapfixUploadedResponse(response *ActionMapfixUploadedNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
return nil
}
func encodeActionMapfixValidatedResponse(response *ActionMapfixValidatedNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
@@ -168,6 +183,48 @@ func encodeActionSubmissionValidatedResponse(response *ActionSubmissionValidated
return nil
}
func encodeBatchAssetThumbnailsResponse(response *BatchAssetThumbnailsOK, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeBatchUserThumbnailsResponse(response *BatchUserThumbnailsOK, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeBatchUsernamesResponse(response *BatchUsernamesOK, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeCreateMapfixResponse(response *OperationID, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(201)
@@ -252,6 +309,20 @@ func encodeCreateSubmissionAuditCommentResponse(response *CreateSubmissionAuditC
return nil
}
func encodeCreateSubmissionReviewResponse(response *SubmissionReview, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeDeleteScriptResponse(response *DeleteScriptNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
@@ -282,6 +353,78 @@ func encodeDownloadMapAssetResponse(response DownloadMapAssetOK, w http.Response
return nil
}
func encodeGetAOREventResponse(response *AOREvent, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeGetAOREventSubmissionsResponse(response []Submission, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
e.ArrStart()
for _, elem := range response {
elem.Encode(e)
}
e.ArrEnd()
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeGetActiveAOREventResponse(response *AOREvent, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeGetAssetThumbnailResponse(response *GetAssetThumbnailFound, w http.ResponseWriter, span trace.Span) error {
// Encoding response headers.
{
h := uri.NewHeaderEncoder(w.Header())
// Encode "Location" header.
{
cfg := uri.HeaderParameterEncodingConfig{
Name: "Location",
Explode: false,
}
if err := h.EncodeParam(cfg, func(e uri.Encoder) error {
if val, ok := response.Location.Get(); ok {
return e.EncodeValue(conv.StringToString(val))
}
return nil
}); err != nil {
return errors.Wrap(err, "encode Location header")
}
}
}
w.WriteHeader(302)
span.SetStatus(codes.Ok, http.StatusText(302))
return nil
}
func encodeGetMapResponse(response *Map, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
@@ -352,6 +495,20 @@ func encodeGetScriptPolicyResponse(response *ScriptPolicy, w http.ResponseWriter
return nil
}
func encodeGetStatsResponse(response *Stats, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeGetSubmissionResponse(response *Submission, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
@@ -366,6 +523,50 @@ func encodeGetSubmissionResponse(response *Submission, w http.ResponseWriter, sp
return nil
}
func encodeGetUserThumbnailResponse(response *GetUserThumbnailFound, w http.ResponseWriter, span trace.Span) error {
// Encoding response headers.
{
h := uri.NewHeaderEncoder(w.Header())
// Encode "Location" header.
{
cfg := uri.HeaderParameterEncodingConfig{
Name: "Location",
Explode: false,
}
if err := h.EncodeParam(cfg, func(e uri.Encoder) error {
if val, ok := response.Location.Get(); ok {
return e.EncodeValue(conv.StringToString(val))
}
return nil
}); err != nil {
return errors.Wrap(err, "encode Location header")
}
}
}
w.WriteHeader(302)
span.SetStatus(codes.Ok, http.StatusText(302))
return nil
}
func encodeListAOREventsResponse(response []AOREvent, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
e.ArrStart()
for _, elem := range response {
elem.Encode(e)
}
e.ArrEnd()
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeListMapfixAuditEventsResponse(response []AuditEvent, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
@@ -470,6 +671,24 @@ func encodeListSubmissionAuditEventsResponse(response []AuditEvent, w http.Respo
return nil
}
func encodeListSubmissionReviewsResponse(response []SubmissionReview, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
e.ArrStart()
for _, elem := range response {
elem.Encode(e)
}
e.ArrEnd()
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeListSubmissionsResponse(response *Submissions, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
@@ -484,10 +703,17 @@ func encodeListSubmissionsResponse(response *Submissions, w http.ResponseWriter,
return nil
}
func encodeReleaseSubmissionsResponse(response *ReleaseSubmissionsCreated, w http.ResponseWriter, span trace.Span) error {
func encodeReleaseSubmissionsResponse(response *OperationID, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(201)
span.SetStatus(codes.Ok, http.StatusText(201))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
@@ -547,6 +773,13 @@ func encodeSetSubmissionCompletedResponse(response *SetSubmissionCompletedNoCont
return nil
}
func encodeUpdateMapfixDescriptionResponse(response *UpdateMapfixDescriptionNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
return nil
}
func encodeUpdateMapfixModelResponse(response *UpdateMapfixModelNoContent, w http.ResponseWriter, span trace.Span) error {
w.WriteHeader(204)
span.SetStatus(codes.Ok, http.StatusText(204))
@@ -575,6 +808,20 @@ func encodeUpdateSubmissionModelResponse(response *UpdateSubmissionModelNoConten
return nil
}
func encodeUpdateSubmissionReviewResponse(response *SubmissionReview, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
w.WriteHeader(200)
span.SetStatus(codes.Ok, http.StatusText(200))
e := new(jx.Encoder)
response.Encode(e)
if _, err := e.WriteTo(w); err != nil {
return errors.Wrap(err, "write")
}
return nil
}
func encodeErrorResponse(response *ErrorStatusCode, w http.ResponseWriter, span trace.Span) error {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
code := response.StatusCode

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -8,7 +8,6 @@ import (
"strings"
"github.com/go-faster/errors"
"github.com/ogen-go/ogen/ogenerrors"
)
@@ -40,10 +39,12 @@ var operationRolesCookieAuth = map[string][]string{
ActionMapfixResetSubmittingOperation: []string{},
ActionMapfixRetryValidateOperation: []string{},
ActionMapfixRevokeOperation: []string{},
ActionMapfixTriggerReleaseOperation: []string{},
ActionMapfixTriggerSubmitOperation: []string{},
ActionMapfixTriggerSubmitUncheckedOperation: []string{},
ActionMapfixTriggerUploadOperation: []string{},
ActionMapfixTriggerValidateOperation: []string{},
ActionMapfixUploadedOperation: []string{},
ActionMapfixValidatedOperation: []string{},
ActionSubmissionAcceptedOperation: []string{},
ActionSubmissionRejectOperation: []string{},
@@ -63,20 +64,24 @@ var operationRolesCookieAuth = map[string][]string{
CreateSubmissionOperation: []string{},
CreateSubmissionAdminOperation: []string{},
CreateSubmissionAuditCommentOperation: []string{},
CreateSubmissionReviewOperation: []string{},
DeleteScriptOperation: []string{},
DeleteScriptPolicyOperation: []string{},
DownloadMapAssetOperation: []string{},
GetOperationOperation: []string{},
ListSubmissionReviewsOperation: []string{},
ReleaseSubmissionsOperation: []string{},
SessionRolesOperation: []string{},
SessionUserOperation: []string{},
SessionValidateOperation: []string{},
SetMapfixCompletedOperation: []string{},
SetSubmissionCompletedOperation: []string{},
UpdateMapfixDescriptionOperation: []string{},
UpdateMapfixModelOperation: []string{},
UpdateScriptOperation: []string{},
UpdateScriptPolicyOperation: []string{},
UpdateSubmissionModelOperation: []string{},
UpdateSubmissionReviewOperation: []string{},
}
func (s *Server) securityCookieAuth(ctx context.Context, operationName OperationName, req *http.Request) (context.Context, bool, error) {


@@ -45,6 +45,12 @@ type Handler interface {
//
// POST /mapfixes/{MapfixID}/status/revoke
ActionMapfixRevoke(ctx context.Context, params ActionMapfixRevokeParams) error
// ActionMapfixTriggerRelease implements actionMapfixTriggerRelease operation.
//
// Role MapfixUpload changes status from Uploaded -> Releasing.
//
// POST /mapfixes/{MapfixID}/status/trigger-release
ActionMapfixTriggerRelease(ctx context.Context, params ActionMapfixTriggerReleaseParams) error
// ActionMapfixTriggerSubmit implements actionMapfixTriggerSubmit operation.
//
// Role Submitter changes status from UnderConstruction|ChangesRequested -> Submitting.
@@ -59,7 +65,7 @@ type Handler interface {
ActionMapfixTriggerSubmitUnchecked(ctx context.Context, params ActionMapfixTriggerSubmitUncheckedParams) error
// ActionMapfixTriggerUpload implements actionMapfixTriggerUpload operation.
//
- // Role Admin changes status from Validated -> Uploading.
+ // Role MapfixUpload changes status from Validated -> Uploading.
//
// POST /mapfixes/{MapfixID}/status/trigger-upload
ActionMapfixTriggerUpload(ctx context.Context, params ActionMapfixTriggerUploadParams) error
@@ -69,9 +75,15 @@ type Handler interface {
//
// POST /mapfixes/{MapfixID}/status/trigger-validate
ActionMapfixTriggerValidate(ctx context.Context, params ActionMapfixTriggerValidateParams) error
// ActionMapfixUploaded implements actionMapfixUploaded operation.
//
// Role MapfixUpload manually resets releasing softlock and changes status from Releasing -> Uploaded.
//
// POST /mapfixes/{MapfixID}/status/reset-releasing
ActionMapfixUploaded(ctx context.Context, params ActionMapfixUploadedParams) error
// ActionMapfixValidated implements actionMapfixValidated operation.
//
- // Role Admin manually resets uploading softlock and changes status from Uploading -> Validated.
+ // Role MapfixUpload manually resets uploading softlock and changes status from Uploading -> Validated.
//
// POST /mapfixes/{MapfixID}/status/reset-uploading
ActionMapfixValidated(ctx context.Context, params ActionMapfixValidatedParams) error
@@ -126,7 +138,7 @@ type Handler interface {
ActionSubmissionTriggerSubmitUnchecked(ctx context.Context, params ActionSubmissionTriggerSubmitUncheckedParams) error
// ActionSubmissionTriggerUpload implements actionSubmissionTriggerUpload operation.
//
- // Role Admin changes status from Validated -> Uploading.
+ // Role SubmissionUpload changes status from Validated -> Uploading.
//
// POST /submissions/{SubmissionID}/status/trigger-upload
ActionSubmissionTriggerUpload(ctx context.Context, params ActionSubmissionTriggerUploadParams) error
@@ -138,10 +150,29 @@ type Handler interface {
ActionSubmissionTriggerValidate(ctx context.Context, params ActionSubmissionTriggerValidateParams) error
// ActionSubmissionValidated implements actionSubmissionValidated operation.
//
- // Role Admin manually resets uploading softlock and changes status from Uploading -> Validated.
+ // Role SubmissionUpload manually resets uploading softlock and changes status from Uploading ->
+ // Validated.
//
// POST /submissions/{SubmissionID}/status/reset-uploading
ActionSubmissionValidated(ctx context.Context, params ActionSubmissionValidatedParams) error
// BatchAssetThumbnails implements batchAssetThumbnails operation.
//
// Batch fetch asset thumbnails.
//
// POST /thumbnails/assets
BatchAssetThumbnails(ctx context.Context, req *BatchAssetThumbnailsReq) (*BatchAssetThumbnailsOK, error)
// BatchUserThumbnails implements batchUserThumbnails operation.
//
// Batch fetch user avatar thumbnails.
//
// POST /thumbnails/users
BatchUserThumbnails(ctx context.Context, req *BatchUserThumbnailsReq) (*BatchUserThumbnailsOK, error)
// BatchUsernames implements batchUsernames operation.
//
// Batch fetch usernames.
//
// POST /usernames
BatchUsernames(ctx context.Context, req *BatchUsernamesReq) (*BatchUsernamesOK, error)
// CreateMapfix implements createMapfix operation.
//
// Trigger the validator to create a mapfix.
@@ -184,6 +215,12 @@ type Handler interface {
//
// POST /submissions/{SubmissionID}/comment
CreateSubmissionAuditComment(ctx context.Context, req CreateSubmissionAuditCommentReq, params CreateSubmissionAuditCommentParams) error
// CreateSubmissionReview implements createSubmissionReview operation.
//
// Create a review for a submission.
//
// POST /submissions/{SubmissionID}/reviews
CreateSubmissionReview(ctx context.Context, req *SubmissionReviewCreate, params CreateSubmissionReviewParams) (*SubmissionReview, error)
// DeleteScript implements deleteScript operation.
//
// Delete the specified script by ID.
@@ -202,6 +239,30 @@ type Handler interface {
//
// GET /maps/{MapID}/download
DownloadMapAsset(ctx context.Context, params DownloadMapAssetParams) (DownloadMapAssetOK, error)
// GetAOREvent implements getAOREvent operation.
//
// Get a specific AOR event.
//
// GET /aor-events/{AOREventID}
GetAOREvent(ctx context.Context, params GetAOREventParams) (*AOREvent, error)
// GetAOREventSubmissions implements getAOREventSubmissions operation.
//
// Get all submissions for a specific AOR event.
//
// GET /aor-events/{AOREventID}/submissions
GetAOREventSubmissions(ctx context.Context, params GetAOREventSubmissionsParams) ([]Submission, error)
// GetActiveAOREvent implements getActiveAOREvent operation.
//
// Get the currently active AOR event.
//
// GET /aor-events/active
GetActiveAOREvent(ctx context.Context) (*AOREvent, error)
// GetAssetThumbnail implements getAssetThumbnail operation.
//
// Get single asset thumbnail.
//
// GET /thumbnails/asset/{AssetID}
GetAssetThumbnail(ctx context.Context, params GetAssetThumbnailParams) (*GetAssetThumbnailFound, error)
// GetMap implements getMap operation.
//
// Retrieve map with ID.
@@ -232,12 +293,30 @@ type Handler interface {
//
// GET /script-policy/{ScriptPolicyID}
GetScriptPolicy(ctx context.Context, params GetScriptPolicyParams) (*ScriptPolicy, error)
// GetStats implements getStats operation.
//
// Get aggregate statistics.
//
// GET /stats
GetStats(ctx context.Context) (*Stats, error)
// GetSubmission implements getSubmission operation.
//
// Retrieve map with ID.
//
// GET /submissions/{SubmissionID}
GetSubmission(ctx context.Context, params GetSubmissionParams) (*Submission, error)
// GetUserThumbnail implements getUserThumbnail operation.
//
// Get single user avatar thumbnail.
//
// GET /thumbnails/user/{UserID}
GetUserThumbnail(ctx context.Context, params GetUserThumbnailParams) (*GetUserThumbnailFound, error)
// ListAOREvents implements listAOREvents operation.
//
// Get list of AOR events.
//
// GET /aor-events
ListAOREvents(ctx context.Context, params ListAOREventsParams) ([]AOREvent, error)
// ListMapfixAuditEvents implements listMapfixAuditEvents operation.
//
// Retrieve a list of audit events.
@@ -274,6 +353,12 @@ type Handler interface {
//
// GET /submissions/{SubmissionID}/audit-events
ListSubmissionAuditEvents(ctx context.Context, params ListSubmissionAuditEventsParams) ([]AuditEvent, error)
// ListSubmissionReviews implements listSubmissionReviews operation.
//
// Get all reviews for a submission.
//
// GET /submissions/{SubmissionID}/reviews
ListSubmissionReviews(ctx context.Context, params ListSubmissionReviewsParams) ([]SubmissionReview, error)
// ListSubmissions implements listSubmissions operation.
//
// Get list of submissions.
@@ -282,10 +367,10 @@ type Handler interface {
ListSubmissions(ctx context.Context, params ListSubmissionsParams) (*Submissions, error)
// ReleaseSubmissions implements releaseSubmissions operation.
//
- // Release a set of uploaded maps.
+ // Release a set of uploaded maps. Role SubmissionRelease.
//
// POST /release-submissions
- ReleaseSubmissions(ctx context.Context, req []ReleaseInfo) error
+ ReleaseSubmissions(ctx context.Context, req []ReleaseInfo) (*OperationID, error)
// SessionRoles implements sessionRoles operation.
//
// Get list of roles for the current session.
@@ -316,6 +401,12 @@ type Handler interface {
//
// POST /submissions/{SubmissionID}/completed
SetSubmissionCompleted(ctx context.Context, params SetSubmissionCompletedParams) error
// UpdateMapfixDescription implements updateMapfixDescription operation.
//
// Update description (submitter only).
//
// PATCH /mapfixes/{MapfixID}/description
UpdateMapfixDescription(ctx context.Context, req UpdateMapfixDescriptionReq, params UpdateMapfixDescriptionParams) error
// UpdateMapfixModel implements updateMapfixModel operation.
//
// Update model following role restrictions.
@@ -340,6 +431,12 @@ type Handler interface {
//
// POST /submissions/{SubmissionID}/model
UpdateSubmissionModel(ctx context.Context, params UpdateSubmissionModelParams) error
// UpdateSubmissionReview implements updateSubmissionReview operation.
//
// Update an existing review.
//
// PATCH /submissions/{SubmissionID}/reviews/{ReviewID}
UpdateSubmissionReview(ctx context.Context, req *SubmissionReviewCreate, params UpdateSubmissionReviewParams) (*SubmissionReview, error)
// NewError creates *ErrorStatusCode from error returned by handler.
//
// Used for common default response.


@@ -68,6 +68,15 @@ func (UnimplementedHandler) ActionMapfixRevoke(ctx context.Context, params Actio
return ht.ErrNotImplemented
}
// ActionMapfixTriggerRelease implements actionMapfixTriggerRelease operation.
//
// Role MapfixUpload changes status from Uploaded -> Releasing.
//
// POST /mapfixes/{MapfixID}/status/trigger-release
func (UnimplementedHandler) ActionMapfixTriggerRelease(ctx context.Context, params ActionMapfixTriggerReleaseParams) error {
return ht.ErrNotImplemented
}
// ActionMapfixTriggerSubmit implements actionMapfixTriggerSubmit operation.
//
// Role Submitter changes status from UnderConstruction|ChangesRequested -> Submitting.
@@ -88,7 +97,7 @@ func (UnimplementedHandler) ActionMapfixTriggerSubmitUnchecked(ctx context.Conte
// ActionMapfixTriggerUpload implements actionMapfixTriggerUpload operation.
//
- // Role Admin changes status from Validated -> Uploading.
+ // Role MapfixUpload changes status from Validated -> Uploading.
//
// POST /mapfixes/{MapfixID}/status/trigger-upload
func (UnimplementedHandler) ActionMapfixTriggerUpload(ctx context.Context, params ActionMapfixTriggerUploadParams) error {
@@ -104,9 +113,18 @@ func (UnimplementedHandler) ActionMapfixTriggerValidate(ctx context.Context, par
return ht.ErrNotImplemented
}
// ActionMapfixUploaded implements actionMapfixUploaded operation.
//
// Role MapfixUpload manually resets releasing softlock and changes status from Releasing -> Uploaded.
//
// POST /mapfixes/{MapfixID}/status/reset-releasing
func (UnimplementedHandler) ActionMapfixUploaded(ctx context.Context, params ActionMapfixUploadedParams) error {
return ht.ErrNotImplemented
}
// ActionMapfixValidated implements actionMapfixValidated operation.
//
- // Role Admin manually resets uploading softlock and changes status from Uploading -> Validated.
+ // Role MapfixUpload manually resets uploading softlock and changes status from Uploading -> Validated.
//
// POST /mapfixes/{MapfixID}/status/reset-uploading
func (UnimplementedHandler) ActionMapfixValidated(ctx context.Context, params ActionMapfixValidatedParams) error {
@@ -188,7 +206,7 @@ func (UnimplementedHandler) ActionSubmissionTriggerSubmitUnchecked(ctx context.C
// ActionSubmissionTriggerUpload implements actionSubmissionTriggerUpload operation.
//
- // Role Admin changes status from Validated -> Uploading.
+ // Role SubmissionUpload changes status from Validated -> Uploading.
//
// POST /submissions/{SubmissionID}/status/trigger-upload
func (UnimplementedHandler) ActionSubmissionTriggerUpload(ctx context.Context, params ActionSubmissionTriggerUploadParams) error {
@@ -206,13 +224,41 @@ func (UnimplementedHandler) ActionSubmissionTriggerValidate(ctx context.Context,
// ActionSubmissionValidated implements actionSubmissionValidated operation.
//
- // Role Admin manually resets uploading softlock and changes status from Uploading -> Validated.
+ // Role SubmissionUpload manually resets uploading softlock and changes status from Uploading ->
+ // Validated.
//
// POST /submissions/{SubmissionID}/status/reset-uploading
func (UnimplementedHandler) ActionSubmissionValidated(ctx context.Context, params ActionSubmissionValidatedParams) error {
return ht.ErrNotImplemented
}
// BatchAssetThumbnails implements batchAssetThumbnails operation.
//
// Batch fetch asset thumbnails.
//
// POST /thumbnails/assets
func (UnimplementedHandler) BatchAssetThumbnails(ctx context.Context, req *BatchAssetThumbnailsReq) (r *BatchAssetThumbnailsOK, _ error) {
return r, ht.ErrNotImplemented
}
// BatchUserThumbnails implements batchUserThumbnails operation.
//
// Batch fetch user avatar thumbnails.
//
// POST /thumbnails/users
func (UnimplementedHandler) BatchUserThumbnails(ctx context.Context, req *BatchUserThumbnailsReq) (r *BatchUserThumbnailsOK, _ error) {
return r, ht.ErrNotImplemented
}
// BatchUsernames implements batchUsernames operation.
//
// Batch fetch usernames.
//
// POST /usernames
func (UnimplementedHandler) BatchUsernames(ctx context.Context, req *BatchUsernamesReq) (r *BatchUsernamesOK, _ error) {
return r, ht.ErrNotImplemented
}
// CreateMapfix implements createMapfix operation.
//
// Trigger the validator to create a mapfix.
@@ -276,6 +322,15 @@ func (UnimplementedHandler) CreateSubmissionAuditComment(ctx context.Context, re
return ht.ErrNotImplemented
}
// CreateSubmissionReview implements createSubmissionReview operation.
//
// Create a review for a submission.
//
// POST /submissions/{SubmissionID}/reviews
func (UnimplementedHandler) CreateSubmissionReview(ctx context.Context, req *SubmissionReviewCreate, params CreateSubmissionReviewParams) (r *SubmissionReview, _ error) {
return r, ht.ErrNotImplemented
}
// DeleteScript implements deleteScript operation.
//
// Delete the specified script by ID.
@@ -303,6 +358,42 @@ func (UnimplementedHandler) DownloadMapAsset(ctx context.Context, params Downloa
return r, ht.ErrNotImplemented
}
// GetAOREvent implements getAOREvent operation.
//
// Get a specific AOR event.
//
// GET /aor-events/{AOREventID}
func (UnimplementedHandler) GetAOREvent(ctx context.Context, params GetAOREventParams) (r *AOREvent, _ error) {
return r, ht.ErrNotImplemented
}
// GetAOREventSubmissions implements getAOREventSubmissions operation.
//
// Get all submissions for a specific AOR event.
//
// GET /aor-events/{AOREventID}/submissions
func (UnimplementedHandler) GetAOREventSubmissions(ctx context.Context, params GetAOREventSubmissionsParams) (r []Submission, _ error) {
return r, ht.ErrNotImplemented
}
// GetActiveAOREvent implements getActiveAOREvent operation.
//
// Get the currently active AOR event.
//
// GET /aor-events/active
func (UnimplementedHandler) GetActiveAOREvent(ctx context.Context) (r *AOREvent, _ error) {
return r, ht.ErrNotImplemented
}
// GetAssetThumbnail implements getAssetThumbnail operation.
//
// Get single asset thumbnail.
//
// GET /thumbnails/asset/{AssetID}
func (UnimplementedHandler) GetAssetThumbnail(ctx context.Context, params GetAssetThumbnailParams) (r *GetAssetThumbnailFound, _ error) {
return r, ht.ErrNotImplemented
}
// GetMap implements getMap operation.
//
// Retrieve map with ID.
@@ -348,6 +439,15 @@ func (UnimplementedHandler) GetScriptPolicy(ctx context.Context, params GetScrip
return r, ht.ErrNotImplemented
}
// GetStats implements getStats operation.
//
// Get aggregate statistics.
//
// GET /stats
func (UnimplementedHandler) GetStats(ctx context.Context) (r *Stats, _ error) {
return r, ht.ErrNotImplemented
}
// GetSubmission implements getSubmission operation.
//
// Retrieve map with ID.
@@ -357,6 +457,24 @@ func (UnimplementedHandler) GetSubmission(ctx context.Context, params GetSubmiss
return r, ht.ErrNotImplemented
}
// GetUserThumbnail implements getUserThumbnail operation.
//
// Get single user avatar thumbnail.
//
// GET /thumbnails/user/{UserID}
func (UnimplementedHandler) GetUserThumbnail(ctx context.Context, params GetUserThumbnailParams) (r *GetUserThumbnailFound, _ error) {
return r, ht.ErrNotImplemented
}
// ListAOREvents implements listAOREvents operation.
//
// Get list of AOR events.
//
// GET /aor-events
func (UnimplementedHandler) ListAOREvents(ctx context.Context, params ListAOREventsParams) (r []AOREvent, _ error) {
return r, ht.ErrNotImplemented
}
// ListMapfixAuditEvents implements listMapfixAuditEvents operation.
//
// Retrieve a list of audit events.
@@ -411,6 +529,15 @@ func (UnimplementedHandler) ListSubmissionAuditEvents(ctx context.Context, param
return r, ht.ErrNotImplemented
}
// ListSubmissionReviews implements listSubmissionReviews operation.
//
// Get all reviews for a submission.
//
// GET /submissions/{SubmissionID}/reviews
func (UnimplementedHandler) ListSubmissionReviews(ctx context.Context, params ListSubmissionReviewsParams) (r []SubmissionReview, _ error) {
return r, ht.ErrNotImplemented
}
// ListSubmissions implements listSubmissions operation.
//
// Get list of submissions.
@@ -422,11 +549,11 @@ func (UnimplementedHandler) ListSubmissions(ctx context.Context, params ListSubm
// ReleaseSubmissions implements releaseSubmissions operation.
//
- // Release a set of uploaded maps.
+ // Release a set of uploaded maps. Role SubmissionRelease.
//
// POST /release-submissions
- func (UnimplementedHandler) ReleaseSubmissions(ctx context.Context, req []ReleaseInfo) error {
- 	return ht.ErrNotImplemented
+ func (UnimplementedHandler) ReleaseSubmissions(ctx context.Context, req []ReleaseInfo) (r *OperationID, _ error) {
+ 	return r, ht.ErrNotImplemented
}
// SessionRoles implements sessionRoles operation.
@@ -474,6 +601,15 @@ func (UnimplementedHandler) SetSubmissionCompleted(ctx context.Context, params S
return ht.ErrNotImplemented
}
// UpdateMapfixDescription implements updateMapfixDescription operation.
//
// Update description (submitter only).
//
// PATCH /mapfixes/{MapfixID}/description
func (UnimplementedHandler) UpdateMapfixDescription(ctx context.Context, req UpdateMapfixDescriptionReq, params UpdateMapfixDescriptionParams) error {
return ht.ErrNotImplemented
}
// UpdateMapfixModel implements updateMapfixModel operation.
//
// Update model following role restrictions.
@@ -510,6 +646,15 @@ func (UnimplementedHandler) UpdateSubmissionModel(ctx context.Context, params Up
return ht.ErrNotImplemented
}
// UpdateSubmissionReview implements updateSubmissionReview operation.
//
// Update an existing review.
//
// PATCH /submissions/{SubmissionID}/reviews/{ReviewID}
func (UnimplementedHandler) UpdateSubmissionReview(ctx context.Context, req *SubmissionReviewCreate, params UpdateSubmissionReviewParams) (r *SubmissionReview, _ error) {
return r, ht.ErrNotImplemented
}
// NewError creates *ErrorStatusCode from error returned by handler.
//
// Used for common default response.

File diff suppressed because it is too large

pkg/cmds/aor.go Normal file

@@ -0,0 +1,75 @@
package cmds
import (
"git.itzana.me/strafesnet/maps-service/pkg/datastore/gormstore"
"git.itzana.me/strafesnet/maps-service/pkg/service"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
)
func NewAORCommand() *cli.Command {
return &cli.Command{
Name: "aor",
Usage: "Run AOR (Accept or Reject) event processor",
Action: runAORProcessor,
Flags: []cli.Flag{
&cli.StringFlag{
Name: "pg-host",
Usage: "Host of postgres database",
EnvVars: []string{"PG_HOST"},
Required: true,
},
&cli.IntFlag{
Name: "pg-port",
Usage: "Port of postgres database",
EnvVars: []string{"PG_PORT"},
Required: true,
},
&cli.StringFlag{
Name: "pg-db",
Usage: "Name of database to connect to",
EnvVars: []string{"PG_DB"},
Required: true,
},
&cli.StringFlag{
Name: "pg-user",
Usage: "User to connect with",
EnvVars: []string{"PG_USER"},
Required: true,
},
&cli.StringFlag{
Name: "pg-password",
Usage: "Password to connect with",
EnvVars: []string{"PG_PASSWORD"},
Required: true,
},
&cli.BoolFlag{
Name: "migrate",
Usage: "Run database migrations",
Value: false,
EnvVars: []string{"MIGRATE"},
},
},
}
}
func runAORProcessor(ctx *cli.Context) error {
log.Info("Starting AOR event processor")
// Connect to database
db, err := gormstore.New(ctx)
if err != nil {
log.WithError(err).Error("failed to connect database")
return err
}
// Create scheduler and process events
scheduler := service.NewAORScheduler(db)
if err := scheduler.ProcessAOREvents(); err != nil {
log.WithError(err).Error("AOR event processing failed")
return err
}
log.Info("AOR event processor completed successfully")
return nil
}


@@ -18,6 +18,7 @@ import (
"git.itzana.me/strafesnet/maps-service/pkg/validator_controller"
"git.itzana.me/strafesnet/maps-service/pkg/web_api"
"github.com/nats-io/nats.go"
"github.com/redis/go-redis/v9"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
"google.golang.org/grpc"
@@ -102,6 +103,24 @@ func NewServeCommand() *cli.Command {
EnvVars: []string{"RBX_API_KEY"},
Required: true,
},
&cli.StringFlag{
Name: "redis-host",
Usage: "Host of Redis cache",
EnvVars: []string{"REDIS_HOST"},
Value: "localhost:6379",
},
&cli.StringFlag{
Name: "redis-password",
Usage: "Password for Redis",
EnvVars: []string{"REDIS_PASSWORD"},
Value: "",
},
&cli.IntFlag{
Name: "redis-db",
Usage: "Redis database number",
EnvVars: []string{"REDIS_DB"},
Value: 0,
},
},
}
}
@@ -129,6 +148,24 @@ func serve(ctx *cli.Context) error {
log.WithError(err).Fatal("failed to add stream")
}
// Initialize Redis client
redisClient := redis.NewClient(&redis.Options{
Addr: ctx.String("redis-host"),
Password: ctx.String("redis-password"),
DB: ctx.Int("redis-db"),
})
// Test Redis connection
if err := redisClient.Ping(ctx.Context).Err(); err != nil {
log.WithError(err).Warn("failed to connect to Redis - thumbnails will not be cached")
}
// Initialize Roblox client
robloxClient := &roblox.Client{
HttpClient: http.DefaultClient,
ApiKey: ctx.String("rbx-api-key"),
}
// connect to main game database
conn, err := grpc.Dial(ctx.String("data-rpc-host"), grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
@@ -139,13 +176,15 @@ func serve(ctx *cli.Context) error {
js,
maps.NewMapsServiceClient(conn),
users.NewUsersServiceClient(conn),
robloxClient,
redisClient,
)
svc_external := web_api.NewService(
&svc_inner,
roblox.Client{
HttpClient: http.DefaultClient,
ApiKey: ctx.String("rbx-api-key"),
ApiKey: ctx.String("rbx-api-key"),
},
)


@@ -24,11 +24,14 @@ const (
)
type Datastore interface {
AOREvents() AOREvents
AORSubmissions() AORSubmissions
AuditEvents() AuditEvents
Maps() Maps
Mapfixes() Mapfixes
Operations() Operations
Submissions() Submissions
SubmissionReviews() SubmissionReviews
Scripts() Scripts
ScriptPolicy() ScriptPolicy
}
@@ -83,6 +86,16 @@ type Submissions interface {
ListWithTotal(ctx context.Context, filters OptionalMap, page model.Page, sort ListSort) (int64, []model.Submission, error)
}
type SubmissionReviews interface {
Get(ctx context.Context, id int64) (model.SubmissionReview, error)
GetBySubmissionAndReviewer(ctx context.Context, submissionID int64, reviewerID uint64) (model.SubmissionReview, error)
Create(ctx context.Context, review model.SubmissionReview) (model.SubmissionReview, error)
Update(ctx context.Context, id int64, values OptionalMap) error
Delete(ctx context.Context, id int64) error
ListBySubmission(ctx context.Context, submissionID int64) ([]model.SubmissionReview, error)
MarkOutdatedBySubmission(ctx context.Context, submissionID int64) error
}
type Scripts interface {
Get(ctx context.Context, id int64) (model.Script, error)
Create(ctx context.Context, smap model.Script) (model.Script, error)
@@ -99,3 +112,22 @@ type ScriptPolicy interface {
Delete(ctx context.Context, id int64) error
List(ctx context.Context, filters OptionalMap, page model.Page) ([]model.ScriptPolicy, error)
}
type AOREvents interface {
Get(ctx context.Context, id int64) (model.AOREvent, error)
GetActive(ctx context.Context) (model.AOREvent, error)
GetByStatus(ctx context.Context, status model.AOREventStatus) ([]model.AOREvent, error)
Create(ctx context.Context, event model.AOREvent) (model.AOREvent, error)
Update(ctx context.Context, id int64, values OptionalMap) error
Delete(ctx context.Context, id int64) error
List(ctx context.Context, filters OptionalMap, page model.Page) ([]model.AOREvent, error)
}
type AORSubmissions interface {
Get(ctx context.Context, id int64) (model.AORSubmission, error)
GetByAOREvent(ctx context.Context, eventID int64) ([]model.AORSubmission, error)
GetBySubmission(ctx context.Context, submissionID int64) ([]model.AORSubmission, error)
Create(ctx context.Context, aorSubmission model.AORSubmission) (model.AORSubmission, error)
Delete(ctx context.Context, id int64) error
ListWithSubmissions(ctx context.Context, eventID int64) ([]model.Submission, error)
}


@@ -0,0 +1,89 @@
package gormstore
import (
"context"
"errors"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
"gorm.io/gorm"
)
type AOREvents struct {
db *gorm.DB
}
func (env *AOREvents) Get(ctx context.Context, id int64) (model.AOREvent, error) {
var event model.AOREvent
if err := env.db.First(&event, id).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return event, datastore.ErrNotExist
}
return event, err
}
return event, nil
}
func (env *AOREvents) GetActive(ctx context.Context) (model.AOREvent, error) {
var event model.AOREvent
// Get the most recent non-closed event
if err := env.db.Where("status != ?", model.AOREventStatusClosed).
Order("start_date DESC").
First(&event).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return event, datastore.ErrNotExist
}
return event, err
}
return event, nil
}
func (env *AOREvents) GetByStatus(ctx context.Context, status model.AOREventStatus) ([]model.AOREvent, error) {
var events []model.AOREvent
if err := env.db.Where("status = ?", status).Order("start_date DESC").Find(&events).Error; err != nil {
return nil, err
}
return events, nil
}
func (env *AOREvents) Create(ctx context.Context, event model.AOREvent) (model.AOREvent, error) {
if err := env.db.Create(&event).Error; err != nil {
return event, err
}
return event, nil
}
func (env *AOREvents) Update(ctx context.Context, id int64, values datastore.OptionalMap) error {
if err := env.db.Model(&model.AOREvent{}).Where("id = ?", id).Updates(values.Map()).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return datastore.ErrNotExist
}
return err
}
return nil
}
func (env *AOREvents) Delete(ctx context.Context, id int64) error {
if err := env.db.Delete(&model.AOREvent{}, id).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return datastore.ErrNotExist
}
return err
}
return nil
}
func (env *AOREvents) List(ctx context.Context, filters datastore.OptionalMap, page model.Page) ([]model.AOREvent, error) {
var events []model.AOREvent
query := env.db.Where(filters.Map())
if page.Size > 0 {
offset := (page.Number - 1) * page.Size
query = query.Limit(int(page.Size)).Offset(int(offset))
}
if err := query.Order("start_date DESC").Find(&events).Error; err != nil {
return nil, err
}
return events, nil
}
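`List` converts the 1-based `page.Number` into a SQL offset before applying `Limit`/`Offset`. The arithmetic can be checked in isolation; `Page` below is a simplified stand-in for `model.Page`:

```go
package main

import "fmt"

// Page is a stand-in for model.Page: a 1-based page number and page size.
type Page struct {
	Number int64
	Size   int64
}

// offsetFor mirrors the computation in AOREvents.List: page 1 starts at
// offset 0, page 2 at offset Size, and so on.
func offsetFor(p Page) int64 {
	return (p.Number - 1) * p.Size
}

func main() {
	fmt.Println(offsetFor(Page{Number: 1, Size: 20})) // 0
	fmt.Println(offsetFor(Page{Number: 3, Size: 20})) // 40
}
```

Note that the store only applies the offset when `page.Size > 0`, so an unpaginated call returns every row.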


@@ -0,0 +1,70 @@
package gormstore
import (
"context"
"errors"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
"gorm.io/gorm"
)
type AORSubmissions struct {
db *gorm.DB
}
func (env *AORSubmissions) Get(ctx context.Context, id int64) (model.AORSubmission, error) {
var aorSubmission model.AORSubmission
if err := env.db.First(&aorSubmission, id).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return aorSubmission, datastore.ErrNotExist
}
return aorSubmission, err
}
return aorSubmission, nil
}
func (env *AORSubmissions) GetByAOREvent(ctx context.Context, eventID int64) ([]model.AORSubmission, error) {
var aorSubmissions []model.AORSubmission
if err := env.db.Where("aor_event_id = ?", eventID).Order("added_at DESC").Find(&aorSubmissions).Error; err != nil {
return nil, err
}
return aorSubmissions, nil
}
func (env *AORSubmissions) GetBySubmission(ctx context.Context, submissionID int64) ([]model.AORSubmission, error) {
var aorSubmissions []model.AORSubmission
if err := env.db.Where("submission_id = ?", submissionID).Order("added_at DESC").Find(&aorSubmissions).Error; err != nil {
return nil, err
}
return aorSubmissions, nil
}
func (env *AORSubmissions) Create(ctx context.Context, aorSubmission model.AORSubmission) (model.AORSubmission, error) {
if err := env.db.Create(&aorSubmission).Error; err != nil {
return aorSubmission, err
}
return aorSubmission, nil
}
func (env *AORSubmissions) Delete(ctx context.Context, id int64) error {
if err := env.db.Delete(&model.AORSubmission{}, id).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return datastore.ErrNotExist
}
return err
}
return nil
}
func (env *AORSubmissions) ListWithSubmissions(ctx context.Context, eventID int64) ([]model.Submission, error) {
var submissions []model.Submission
if err := env.db.
Joins("JOIN aor_submissions ON aor_submissions.submission_id = submissions.id").
Where("aor_submissions.aor_event_id = ?", eventID).
Order("aor_submissions.added_at DESC").
Find(&submissions).Error; err != nil {
return nil, err
}
return submissions, nil
}


@@ -31,11 +31,14 @@ func New(ctx *cli.Context) (datastore.Datastore, error) {
if ctx.Bool("migrate") {
if err := db.AutoMigrate(
&model.AOREvent{},
&model.AORSubmission{},
&model.AuditEvent{},
&model.Map{},
&model.Mapfix{},
&model.Operation{},
&model.Submission{},
&model.SubmissionReview{},
&model.Script{},
&model.ScriptPolicy{},
); err != nil {


@@ -9,6 +9,14 @@ type Gormstore struct {
db *gorm.DB
}
func (g Gormstore) AOREvents() datastore.AOREvents {
return &AOREvents{db: g.db}
}
func (g Gormstore) AORSubmissions() datastore.AORSubmissions {
return &AORSubmissions{db: g.db}
}
func (g Gormstore) AuditEvents() datastore.AuditEvents {
return &AuditEvents{db: g.db}
}
@@ -29,6 +37,10 @@ func (g Gormstore) Submissions() datastore.Submissions {
return &Submissions{db: g.db}
}
func (g Gormstore) SubmissionReviews() datastore.SubmissionReviews {
return &SubmissionReviews{db: g.db}
}
func (g Gormstore) Scripts() datastore.Scripts {
return &Scripts{db: g.db}
}


@@ -0,0 +1,83 @@
package gormstore
import (
"context"
"errors"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
"gorm.io/gorm"
)
type SubmissionReviews struct {
db *gorm.DB
}
func (env *SubmissionReviews) Get(ctx context.Context, id int64) (model.SubmissionReview, error) {
var review model.SubmissionReview
if err := env.db.First(&review, id).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return review, datastore.ErrNotExist
}
return review, err
}
return review, nil
}
func (env *SubmissionReviews) GetBySubmissionAndReviewer(ctx context.Context, submissionID int64, reviewerID uint64) (model.SubmissionReview, error) {
var review model.SubmissionReview
if err := env.db.Where("submission_id = ? AND reviewer_id = ?", submissionID, reviewerID).First(&review).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return review, datastore.ErrNotExist
}
return review, err
}
return review, nil
}
func (env *SubmissionReviews) Create(ctx context.Context, review model.SubmissionReview) (model.SubmissionReview, error) {
if err := env.db.Create(&review).Error; err != nil {
return review, err
}
return review, nil
}
func (env *SubmissionReviews) Update(ctx context.Context, id int64, values datastore.OptionalMap) error {
if err := env.db.Model(&model.SubmissionReview{}).Where("id = ?", id).Updates(values.Map()).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return datastore.ErrNotExist
}
return err
}
return nil
}
func (env *SubmissionReviews) Delete(ctx context.Context, id int64) error {
if err := env.db.Delete(&model.SubmissionReview{}, id).Error; err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return datastore.ErrNotExist
}
return err
}
return nil
}
func (env *SubmissionReviews) ListBySubmission(ctx context.Context, submissionID int64) ([]model.SubmissionReview, error) {
var reviews []model.SubmissionReview
if err := env.db.Where("submission_id = ?", submissionID).Order("created_at DESC").Find(&reviews).Error; err != nil {
return nil, err
}
return reviews, nil
}
func (env *SubmissionReviews) MarkOutdatedBySubmission(ctx context.Context, submissionID int64) error {
if err := env.db.Model(&model.SubmissionReview{}).Where("submission_id = ?", submissionID).Update("outdated", true).Error; err != nil {
return err
}
return nil
}

pkg/model/aor_event.go (new file)

@@ -0,0 +1,37 @@
package model
import "time"
type AOREventStatus int32
const (
AOREventStatusScheduled AOREventStatus = 0 // Event scheduled, waiting for start
AOREventStatusOpen AOREventStatus = 1 // Event started, accepting submissions (1st of month)
AOREventStatusFrozen AOREventStatus = 2 // Submissions frozen (after 1st of month)
AOREventStatusSelected AOREventStatus = 3 // Submissions selected for AOR (after week 1)
AOREventStatusCompleted AOREventStatus = 4 // Decisions finalized (end of month)
AOREventStatusClosed AOREventStatus = 5 // Event closed/archived
)
// AOREvent represents an Accept or Reject event cycle
// AOR events occur every 4 months (April, August, December)
type AOREvent struct {
ID int64 `gorm:"primaryKey"`
StartDate time.Time `gorm:"index"` // 1st day of AOR month
FreezeDate time.Time // End of 1st day (23:59:59)
SelectionDate time.Time // End of week 1 (7 days after start)
DecisionDate time.Time // End of month (when final decisions are made)
Status AOREventStatus
CreatedAt time.Time
UpdatedAt time.Time
}
// AORSubmission represents a submission that was added to an AOR event
type AORSubmission struct {
ID int64 `gorm:"primaryKey"`
AOREventID int64 `gorm:"index"`
SubmissionID int64 `gorm:"index"`
AddedAt time.Time
CreatedAt time.Time
UpdatedAt time.Time
}


@@ -18,10 +18,12 @@ const (
MapfixStatusValidating MapfixStatus = 5
MapfixStatusValidated MapfixStatus = 6
MapfixStatusUploading MapfixStatus = 7
MapfixStatusUploaded MapfixStatus = 8 // uploaded to the group, but pending release
MapfixStatusReleasing MapfixStatus = 11
// Phase: Final MapfixStatus
MapfixStatusUploaded MapfixStatus = 8 // uploaded to the group, but pending release
MapfixStatusRejected MapfixStatus = 9
MapfixStatusReleased MapfixStatus = 10
)
type Mapfix struct {


@@ -65,3 +65,29 @@ type UploadMapfixRequest struct {
ModelVersion uint64
TargetAssetID uint64
}
type ReleaseSubmissionRequest struct {
// Release schedule
SubmissionID int64
ReleaseDate int64
// Model download info
ModelID uint64
ModelVersion uint64
// MapCreate
UploadedAssetID uint64
DisplayName string
Creator string
GameID uint32
Submitter uint64
}
type BatchReleaseSubmissionsRequest struct {
Submissions []ReleaseSubmissionRequest
OperationID int32
}
type ReleaseMapfixRequest struct {
MapfixID int64
ModelID uint64
ModelVersion uint64
TargetAssetID uint64
}


@@ -17,7 +17,7 @@ type ScriptPolicy struct {
// Hash of the source code that leads to this policy.
// If this is a replacement mapping, the original source may not be pointed to by any policy.
// The original source should still exist in the scripts table, which can be located by the same hash.
FromScriptHash int64 // postgres does not support unsigned integers, so we have to pretend
FromScriptHash int64 `gorm:"uniqueIndex"` // postgres does not support unsigned integers, so we have to pretend
// The ID of the replacement source (ScriptPolicyReplace)
// or verbatim source (ScriptPolicyAllowed)
// or 0 (other)


@@ -26,7 +26,7 @@ func HashParse(hash string) (uint64, error){
type Script struct {
ID int64 `gorm:"primaryKey"`
Name string
Hash int64 // postgres does not support unsigned integers, so we have to pretend
Hash int64 `gorm:"uniqueIndex"` // postgres does not support unsigned integers, so we have to pretend
Source string
ResourceType ResourceType // is this a submission or is it a mapfix
ResourceID int64 // which submission / mapfix did this script first appear in


@@ -0,0 +1,14 @@
package model
import "time"
type SubmissionReview struct {
ID int64 `gorm:"primaryKey"`
SubmissionID int64 `gorm:"index"`
ReviewerID uint64
Recommend bool
Description string
Outdated bool
CreatedAt time.Time
UpdatedAt time.Time
}


@@ -96,7 +96,7 @@ func setupRoutes(cfg *RouterConfig) (*gin.Engine, error) {
// Docs
public_api.GET("/docs/*any", ginSwagger.WrapHandler(swaggerfiles.Handler))
public_api.GET("/", func(ctx *gin.Context) {
ctx.Redirect(http.StatusPermanentRedirect, "/docs/index.html")
ctx.Redirect(http.StatusPermanentRedirect, "/public-api/docs/index.html")
})
}

pkg/roblox/thumbnails.go (new file)

@@ -0,0 +1,160 @@
package roblox
import (
"bytes"
"encoding/json"
"fmt"
"io"
"net/http"
)
// ThumbnailSize represents valid Roblox thumbnail sizes
type ThumbnailSize string
const (
Size150x150 ThumbnailSize = "150x150"
Size420x420 ThumbnailSize = "420x420"
Size768x432 ThumbnailSize = "768x432"
)
// ThumbnailFormat represents the image format
type ThumbnailFormat string
const (
FormatPng ThumbnailFormat = "Png"
FormatJpeg ThumbnailFormat = "Jpeg"
)
// ThumbnailRequest represents a single thumbnail request
type ThumbnailRequest struct {
RequestID string `json:"requestId,omitempty"`
Type string `json:"type"`
TargetID uint64 `json:"targetId"`
Size string `json:"size,omitempty"`
Format string `json:"format,omitempty"`
}
// ThumbnailData represents a single thumbnail response
type ThumbnailData struct {
TargetID uint64 `json:"targetId"`
State string `json:"state"` // "Completed", "Error", "Pending"
ImageURL string `json:"imageUrl"`
}
// BatchThumbnailsResponse represents the API response
type BatchThumbnailsResponse struct {
Data []ThumbnailData `json:"data"`
}
// GetAssetThumbnails fetches thumbnails for multiple assets in a single batch request
// Roblox allows up to 100 assets per batch
func (c *Client) GetAssetThumbnails(assetIDs []uint64, size ThumbnailSize, format ThumbnailFormat) ([]ThumbnailData, error) {
if len(assetIDs) == 0 {
return []ThumbnailData{}, nil
}
if len(assetIDs) > 100 {
return nil, GetError("batch size cannot exceed 100 assets")
}
// Build request payload - the API expects an array directly, not wrapped in an object
requests := make([]ThumbnailRequest, len(assetIDs))
for i, assetID := range assetIDs {
requests[i] = ThumbnailRequest{
Type: "Asset",
TargetID: assetID,
Size: string(size),
Format: string(format),
}
}
jsonData, err := json.Marshal(requests)
if err != nil {
return nil, GetError("JSONMarshalError: " + err.Error())
}
req, err := http.NewRequest("POST", "https://thumbnails.roblox.com/v1/batch", bytes.NewBuffer(jsonData))
if err != nil {
return nil, GetError("RequestCreationError: " + err.Error())
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.HttpClient.Do(req)
if err != nil {
return nil, GetError("RequestError: " + err.Error())
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, GetError(fmt.Sprintf("ResponseError: status code %d, body: %s", resp.StatusCode, string(body)))
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, GetError("ReadBodyError: " + err.Error())
}
var response BatchThumbnailsResponse
if err := json.Unmarshal(body, &response); err != nil {
return nil, GetError("JSONUnmarshalError: " + err.Error())
}
return response.Data, nil
}
// GetUserAvatarThumbnails fetches avatar thumbnails for multiple users in a single batch request
func (c *Client) GetUserAvatarThumbnails(userIDs []uint64, size ThumbnailSize, format ThumbnailFormat) ([]ThumbnailData, error) {
if len(userIDs) == 0 {
return []ThumbnailData{}, nil
}
if len(userIDs) > 100 {
return nil, GetError("batch size cannot exceed 100 users")
}
// Build request payload - the API expects an array directly, not wrapped in an object
requests := make([]ThumbnailRequest, len(userIDs))
for i, userID := range userIDs {
requests[i] = ThumbnailRequest{
Type: "AvatarHeadShot",
TargetID: userID,
Size: string(size),
Format: string(format),
}
}
jsonData, err := json.Marshal(requests)
if err != nil {
return nil, GetError("JSONMarshalError: " + err.Error())
}
req, err := http.NewRequest("POST", "https://thumbnails.roblox.com/v1/batch", bytes.NewBuffer(jsonData))
if err != nil {
return nil, GetError("RequestCreationError: " + err.Error())
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.HttpClient.Do(req)
if err != nil {
return nil, GetError("RequestError: " + err.Error())
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, GetError(fmt.Sprintf("ResponseError: status code %d, body: %s", resp.StatusCode, string(body)))
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, GetError("ReadBodyError: " + err.Error())
}
var response BatchThumbnailsResponse
if err := json.Unmarshal(body, &response); err != nil {
return nil, GetError("JSONUnmarshalError: " + err.Error())
}
return response.Data, nil
}
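Both batch endpoints above reject more than 100 IDs per call, so callers with larger sets must chunk before calling. A small sketch of that chunking (the `chunkIDs` helper is hypothetical; the thumbnail service layer later in this diff inlines the same loop):

```go
package main

import "fmt"

// chunkIDs splits ids into batches of at most batchSize, matching the
// 100-item limit enforced by GetAssetThumbnails and GetUserAvatarThumbnails.
func chunkIDs(ids []uint64, batchSize int) [][]uint64 {
	var batches [][]uint64
	for i := 0; i < len(ids); i += batchSize {
		end := i + batchSize
		if end > len(ids) {
			end = len(ids)
		}
		batches = append(batches, ids[i:end])
	}
	return batches
}

func main() {
	batches := chunkIDs(make([]uint64, 250), 100)
	fmt.Println(len(batches), len(batches[0]), len(batches[2])) // 3 100 50
}
```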

pkg/roblox/users.go (new file)

@@ -0,0 +1,72 @@
package roblox
import (
"bytes"
"encoding/json"
"fmt"
"io"
"net/http"
)
// UserData represents a single user's information
type UserData struct {
ID uint64 `json:"id"`
Name string `json:"name"`
DisplayName string `json:"displayName"`
}
// BatchUsersResponse represents the API response for batch user requests
type BatchUsersResponse struct {
Data []UserData `json:"data"`
}
// GetUsernames fetches usernames for multiple users in a single batch request
// Roblox allows up to 100 users per batch
func (c *Client) GetUsernames(userIDs []uint64) ([]UserData, error) {
if len(userIDs) == 0 {
return []UserData{}, nil
}
if len(userIDs) > 100 {
return nil, GetError("batch size cannot exceed 100 users")
}
// Build request payload
payload := map[string][]uint64{
"userIds": userIDs,
}
jsonData, err := json.Marshal(payload)
if err != nil {
return nil, GetError("JSONMarshalError: " + err.Error())
}
req, err := http.NewRequest("POST", "https://users.roblox.com/v1/users", bytes.NewBuffer(jsonData))
if err != nil {
return nil, GetError("RequestCreationError: " + err.Error())
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.HttpClient.Do(req)
if err != nil {
return nil, GetError("RequestError: " + err.Error())
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, GetError(fmt.Sprintf("ResponseError: status code %d, body: %s", resp.StatusCode, string(body)))
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, GetError("ReadBodyError: " + err.Error())
}
var response BatchUsersResponse
if err := json.Unmarshal(body, &response); err != nil {
return nil, GetError("JSONUnmarshalError: " + err.Error())
}
return response.Data, nil
}

pkg/service/aor_events.go (new file)

@@ -0,0 +1,30 @@
package service
import (
"context"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
)
// AOR Event service methods
func (svc *Service) GetAOREvent(ctx context.Context, id int64) (model.AOREvent, error) {
return svc.db.AOREvents().Get(ctx, id)
}
func (svc *Service) GetActiveAOREvent(ctx context.Context) (model.AOREvent, error) {
return svc.db.AOREvents().GetActive(ctx)
}
func (svc *Service) ListAOREvents(ctx context.Context, page model.Page) ([]model.AOREvent, error) {
return svc.db.AOREvents().List(ctx, datastore.Optional(), page)
}
func (svc *Service) GetAORSubmissionsByEvent(ctx context.Context, eventID int64) ([]model.Submission, error) {
return svc.db.AORSubmissions().ListWithSubmissions(ctx, eventID)
}
func (svc *Service) GetAORSubmissionsBySubmission(ctx context.Context, submissionID int64) ([]model.AORSubmission, error) {
return svc.db.AORSubmissions().GetBySubmission(ctx, submissionID)
}


@@ -0,0 +1,389 @@
package service
import (
"context"
"time"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
log "github.com/sirupsen/logrus"
)
// AORScheduler manages AOR events and their lifecycle
type AORScheduler struct {
ds datastore.Datastore
ctx context.Context
}
// NewAORScheduler creates a new AOR scheduler
func NewAORScheduler(ds datastore.Datastore) *AORScheduler {
return &AORScheduler{
ds: ds,
ctx: context.Background(),
}
}
// ProcessAOREvents is the main entry point for the cron job
// It checks and updates AOR event statuses
func (s *AORScheduler) ProcessAOREvents() error {
log.Info("AOR Scheduler: Processing events")
// Initialize: create next AOR event if none exists
if err := s.ensureNextAOREvent(); err != nil {
log.WithError(err).Error("Failed to ensure next AOR event")
return err
}
// Process current active event
if err := s.processAOREvents(); err != nil {
log.WithError(err).Error("Failed to process AOR events")
return err
}
log.Info("AOR Scheduler: Processing completed successfully")
return nil
}
// ensureNextAOREvent creates the next AOR event if one doesn't exist
func (s *AORScheduler) ensureNextAOREvent() error {
// Check if there's an active or scheduled event
_, err := s.ds.AOREvents().GetActive(s.ctx)
if err == nil {
// Event exists, nothing to do
return nil
}
if err != datastore.ErrNotExist {
return err
}
// No active event, create the next one
nextDate := s.calculateNextAORDate(time.Now())
return s.createAOREvent(nextDate)
}
// calculateNextAORDate calculates the next AOR start date
// AOR events are held every 4 months: April, August, December
func (s *AORScheduler) calculateNextAORDate(from time.Time) time.Time {
aorMonths := []time.Month{time.April, time.August, time.December}
currentYear := from.Year()
currentMonth := from.Month()
// Find the next AOR month
for _, month := range aorMonths {
if month > currentMonth {
// Next AOR is this year
return time.Date(currentYear, month, 1, 0, 0, 0, 0, time.UTC)
}
}
// Next AOR is in April of next year
return time.Date(currentYear+1, time.April, 1, 0, 0, 0, 0, time.UTC)
}
// createAOREvent creates a new AOR event with calculated dates
func (s *AORScheduler) createAOREvent(startDate time.Time) error {
freezeDate := startDate.Add(24*time.Hour - time.Second) // End of first day (23:59:59)
selectionDate := startDate.Add(7 * 24 * time.Hour) // 7 days after start
// Decision date is the last day of the month at 23:59:59
// Calculate the first day of next month, then subtract 1 second
year, month, _ := startDate.Date()
firstOfNextMonth := time.Date(year, month+1, 1, 0, 0, 0, 0, time.UTC)
decisionDate := firstOfNextMonth.Add(-time.Second)
event := model.AOREvent{
StartDate: startDate,
FreezeDate: freezeDate,
SelectionDate: selectionDate,
DecisionDate: decisionDate,
Status: model.AOREventStatusScheduled,
}
_, err := s.ds.AOREvents().Create(s.ctx, event)
if err != nil {
return err
}
log.WithFields(log.Fields{
"start_date": startDate,
"freeze_date": freezeDate,
"selection_date": selectionDate,
"decision_date": decisionDate,
}).Info("Created new AOR event")
return nil
}
// processAOREvents checks and updates AOR event statuses
func (s *AORScheduler) processAOREvents() error {
now := time.Now()
// Get active event
event, err := s.ds.AOREvents().GetActive(s.ctx)
if err == datastore.ErrNotExist {
// No active event, ensure one is created
return s.ensureNextAOREvent()
}
if err != nil {
return err
}
// Process event based on current status and dates
switch event.Status {
case model.AOREventStatusScheduled:
// Check if event should start (it's now the 1st of the AOR month)
if now.After(event.StartDate) || now.Equal(event.StartDate) {
if err := s.openAOREvent(event.ID); err != nil {
return err
}
}
case model.AOREventStatusOpen:
// Check if submissions should be frozen (past the freeze date)
if now.After(event.FreezeDate) {
if err := s.freezeAOREvent(event.ID); err != nil {
return err
}
}
case model.AOREventStatusFrozen:
// Check if it's time to select submissions (past selection date)
if now.After(event.SelectionDate) {
if err := s.selectSubmissions(event.ID); err != nil {
return err
}
}
case model.AOREventStatusSelected:
// Check if it's time to finalize decisions (past decision date)
if now.After(event.DecisionDate) {
if err := s.finalizeDecisions(event.ID); err != nil {
return err
}
}
case model.AOREventStatusCompleted:
// Event completed, create next one and close this one
nextDate := s.calculateNextAORDate(event.StartDate)
if err := s.createAOREvent(nextDate); err != nil {
return err
}
if err := s.closeAOREvent(event.ID); err != nil {
return err
}
}
return nil
}
// openAOREvent transitions an event to Open status
func (s *AORScheduler) openAOREvent(eventID int64) error {
err := s.ds.AOREvents().Update(s.ctx, eventID, datastore.Optional().Add("status", model.AOREventStatusOpen))
if err != nil {
return err
}
log.WithField("event_id", eventID).Info("AOR event opened - submissions now accepted")
return nil
}
// freezeAOREvent transitions an event to Frozen status
// TODO: lock submission from updates
func (s *AORScheduler) freezeAOREvent(eventID int64) error {
err := s.ds.AOREvents().Update(s.ctx, eventID, datastore.Optional().Add("status", model.AOREventStatusFrozen))
if err != nil {
return err
}
log.WithField("event_id", eventID).Info("AOR event frozen - submissions locked")
return nil
}
// selectSubmissions automatically selects qualifying submissions
func (s *AORScheduler) selectSubmissions(eventID int64) error {
// Get all submissions in Submitted status
submissions, err := s.ds.Submissions().List(s.ctx, datastore.Optional().Add("status_id", model.SubmissionStatusSubmitted), model.Page{Number: 0, Size: 0}, datastore.ListSortDisabled)
if err != nil {
return err
}
selectedCount := 0
for _, submission := range submissions {
// Get all reviews for this submission
reviews, err := s.ds.SubmissionReviews().ListBySubmission(s.ctx, submission.ID)
if err != nil {
log.WithError(err).WithField("submission_id", submission.ID).Error("Failed to get reviews")
continue
}
// Apply selection criteria
if s.shouldAddToAOR(reviews) {
// Add to AOR event
aorSubmission := model.AORSubmission{
AOREventID: eventID,
SubmissionID: submission.ID,
AddedAt: time.Now(),
}
_, err := s.ds.AORSubmissions().Create(s.ctx, aorSubmission)
if err != nil {
log.WithError(err).WithField("submission_id", submission.ID).Error("Failed to add submission to AOR")
continue
}
selectedCount++
log.WithField("submission_id", submission.ID).Info("Added submission to AOR event")
}
}
// Mark event as selected (waiting for end of month to finalize)
err = s.ds.AOREvents().Update(s.ctx, eventID, datastore.Optional().Add("status", model.AOREventStatusSelected))
if err != nil {
return err
}
log.WithFields(log.Fields{
"event_id": eventID,
"selected_count": selectedCount,
}).Info("AOR submission selection completed - waiting for end of month to finalize decisions")
return nil
}
// shouldAddToAOR determines if a submission should be added to the AOR event
// Criteria:
// - If there are 0 reviews: NOT added
// - If there is 1+ review with recommend=true and not outdated: added
// - If majority (>=50%) of non-outdated reviews recommend: added
// TODO: Audit events
func (s *AORScheduler) shouldAddToAOR(reviews []model.SubmissionReview) bool {
// Filter out outdated reviews
var validReviews []model.SubmissionReview
for _, review := range reviews {
if !review.Outdated {
validReviews = append(validReviews, review)
}
}
// If there are 0 valid reviews, don't add
if len(validReviews) == 0 {
return false
}
// Count recommendations
recommendCount := 0
for _, review := range validReviews {
if review.Recommend {
recommendCount++
}
}
// Need at least 50% recommendations (2 accept + 2 deny = 50% = added)
// This means recommendCount * 2 >= len(validReviews)
return recommendCount*2 >= len(validReviews)
}
// shouldAccept determines if a submission should be accepted in final decisions
// Criteria: Must have >50% (strictly greater than) recommendations
func (s *AORScheduler) shouldAccept(reviews []model.SubmissionReview) bool {
// Filter out outdated reviews
var validReviews []model.SubmissionReview
for _, review := range reviews {
if !review.Outdated {
validReviews = append(validReviews, review)
}
}
// If there are 0 valid reviews, don't accept
if len(validReviews) == 0 {
return false
}
// Count recommendations
recommendCount := 0
for _, review := range validReviews {
if review.Recommend {
recommendCount++
}
}
// Need MORE than 50% recommendations (strictly greater)
// This means recommendCount * 2 > len(validReviews)
return recommendCount*2 > len(validReviews)
}
// finalizeDecisions makes final accept/reject decisions at end of month
// Submissions in the AOR event with >50% recommends are accepted
// Submissions in the AOR event with <=50% recommends are rejected
// TODO: Implement acceptance logic
// TODO: Query roblox group to get min votes needed for acceptance
// TODO: Audit events
func (s *AORScheduler) finalizeDecisions(eventID int64) error {
// Get all submissions that were selected for this AOR event
aorSubmissions, err := s.ds.AORSubmissions().GetByAOREvent(s.ctx, eventID)
if err != nil {
return err
}
acceptedCount := 0
rejectedCount := 0
// Process each submission in the AOR event
for _, aorSub := range aorSubmissions {
// Get the submission
submission, err := s.ds.Submissions().Get(s.ctx, aorSub.SubmissionID)
if err != nil {
log.WithError(err).WithField("submission_id", aorSub.SubmissionID).Error("Failed to get submission")
continue
}
// Get all reviews for this submission
reviews, err := s.ds.SubmissionReviews().ListBySubmission(s.ctx, aorSub.SubmissionID)
if err != nil {
log.WithError(err).WithField("submission_id", aorSub.SubmissionID).Error("Failed to get reviews")
continue
}
// Check if submission has >50% recommends (strictly greater)
if s.shouldAccept(reviews) {
// This submission has >50% recommends - accept it
// TODO: Implement acceptance logic
// For now, this is a placeholder
log.WithField("submission_id", submission.ID).Info("TODO: Accept submission (placeholder)")
acceptedCount++
} else {
// This submission does not have >50% recommends - reject it
err := s.ds.Submissions().Update(s.ctx, submission.ID, datastore.Optional().Add("status_id", model.SubmissionStatusRejected))
if err != nil {
log.WithError(err).WithField("submission_id", submission.ID).Error("Failed to reject submission")
continue
}
log.WithField("submission_id", submission.ID).Info("Rejected submission")
rejectedCount++
}
}
// Mark event as completed
err = s.ds.AOREvents().Update(s.ctx, eventID, datastore.Optional().Add("status", model.AOREventStatusCompleted))
if err != nil {
return err
}
log.WithFields(log.Fields{
"event_id": eventID,
"accepted_count": acceptedCount,
"rejected_count": rejectedCount,
}).Info("AOR decisions finalized")
return nil
}
// closeAOREvent transitions an event to Closed status
func (s *AORScheduler) closeAOREvent(eventID int64) error {
err := s.ds.AOREvents().Update(s.ctx, eventID, datastore.Optional().Add("status", model.AOREventStatusClosed))
if err != nil {
return err
}
log.WithField("event_id", eventID).Info("AOR event closed")
return nil
}
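The scheduler deliberately uses two different integer-only thresholds: `shouldAddToAOR` selects at 50% or more recommends, while `shouldAccept` requires strictly more than 50%. A tied vote therefore enters the event but is ultimately rejected. A minimal sketch of the two comparisons (hypothetical standalone helpers, mirroring the `recommendCount*2` checks above):

```go
package main

import "fmt"

// meetsSelection mirrors shouldAddToAOR: at least 50% recommends.
func meetsSelection(recommends, total int) bool { return recommends*2 >= total }

// meetsAcceptance mirrors shouldAccept: strictly more than 50% recommends.
func meetsAcceptance(recommends, total int) bool { return recommends*2 > total }

func main() {
	// A 2-2 split is enough to be selected for the event, but not accepted.
	fmt.Println(meetsSelection(2, 4), meetsAcceptance(2, 4)) // true false
	fmt.Println(meetsSelection(3, 4), meetsAcceptance(3, 4)) // true true
}
```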


@@ -2,6 +2,7 @@ package service
import (
"context"
"time"
"git.itzana.me/strafesnet/go-grpc/maps"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
@@ -26,7 +27,7 @@ func (update MapUpdate) SetGameID(game_id uint32) {
datastore.OptionalMap(update).Add("game_id", game_id)
}
func (update MapUpdate) SetDate(date int64) {
datastore.OptionalMap(update).Add("date", date)
datastore.OptionalMap(update).Add("date", time.Unix(date, 0))
}
func (update MapUpdate) SetSubmitter(submitter uint64) {
datastore.OptionalMap(update).Add("submitter", submitter)


@@ -112,3 +112,29 @@ func (svc *Service) NatsValidateMapfix(
return nil
}
func (svc *Service) NatsReleaseMapfix(
MapfixID int64,
ModelID uint64,
ModelVersion uint64,
TargetAssetID uint64,
) error {
release_fix_request := model.ReleaseMapfixRequest{
MapfixID: MapfixID,
ModelID: ModelID,
ModelVersion: ModelVersion,
TargetAssetID: TargetAssetID,
}
j, err := json.Marshal(release_fix_request)
if err != nil {
return err
}
_, err = svc.nats.Publish("maptest.mapfixes.release", []byte(j))
if err != nil {
return err
}
return nil
}


@@ -88,6 +88,28 @@ func (svc *Service) NatsUploadSubmission(
return nil
}
func (svc *Service) NatsBatchReleaseSubmissions(
Submissions []model.ReleaseSubmissionRequest,
operation int32,
) error {
release_new_request := model.BatchReleaseSubmissionsRequest{
Submissions: Submissions,
OperationID: operation,
}
j, err := json.Marshal(release_new_request)
if err != nil {
return err
}
_, err = svc.nats.Publish("maptest.submissions.batchrelease", []byte(j))
if err != nil {
return err
}
return nil
}
func (svc *Service) NatsValidateSubmission(
SubmissionID int64,
ModelID uint64,


@@ -1,17 +1,22 @@
package service
import (
"context"
"git.itzana.me/strafesnet/go-grpc/maps"
"git.itzana.me/strafesnet/go-grpc/users"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/roblox"
"github.com/nats-io/nats.go"
"github.com/redis/go-redis/v9"
)
type Service struct {
db datastore.Datastore
nats nats.JetStreamContext
maps maps.MapsServiceClient
users users.UsersServiceClient
db datastore.Datastore
nats nats.JetStreamContext
maps maps.MapsServiceClient
users users.UsersServiceClient
thumbnailService *ThumbnailService
}
func NewService(
@@ -19,11 +24,44 @@ func NewService(
nats nats.JetStreamContext,
maps maps.MapsServiceClient,
users users.UsersServiceClient,
robloxClient *roblox.Client,
redisClient *redis.Client,
) Service {
return Service{
db: db,
nats: nats,
maps: maps,
users: users,
db: db,
nats: nats,
maps: maps,
users: users,
thumbnailService: NewThumbnailService(robloxClient, redisClient),
}
}
// GetAssetThumbnails proxies to the thumbnail service
func (s *Service) GetAssetThumbnails(ctx context.Context, assetIDs []uint64, size roblox.ThumbnailSize) (map[uint64]string, error) {
return s.thumbnailService.GetAssetThumbnails(ctx, assetIDs, size)
}
// GetUserAvatarThumbnails proxies to the thumbnail service
func (s *Service) GetUserAvatarThumbnails(ctx context.Context, userIDs []uint64, size roblox.ThumbnailSize) (map[uint64]string, error) {
return s.thumbnailService.GetUserAvatarThumbnails(ctx, userIDs, size)
}
// GetSingleAssetThumbnail proxies to the thumbnail service
func (s *Service) GetSingleAssetThumbnail(ctx context.Context, assetID uint64, size roblox.ThumbnailSize) (string, error) {
return s.thumbnailService.GetSingleAssetThumbnail(ctx, assetID, size)
}
// GetSingleUserAvatarThumbnail proxies to the thumbnail service
func (s *Service) GetSingleUserAvatarThumbnail(ctx context.Context, userID uint64, size roblox.ThumbnailSize) (string, error) {
return s.thumbnailService.GetSingleUserAvatarThumbnail(ctx, userID, size)
}
// GetUsernames proxies to the thumbnail service
func (s *Service) GetUsernames(ctx context.Context, userIDs []uint64) (map[uint64]string, error) {
return s.thumbnailService.GetUsernames(ctx, userIDs)
}
// GetSingleUsername proxies to the thumbnail service
func (s *Service) GetSingleUsername(ctx context.Context, userID uint64) (string, error) {
return s.thumbnailService.GetSingleUsername(ctx, userID)
}


@@ -0,0 +1,55 @@
package service
import (
"context"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
)
type SubmissionReviewUpdate datastore.OptionalMap
func NewSubmissionReviewUpdate() SubmissionReviewUpdate {
update := datastore.Optional()
return SubmissionReviewUpdate(update)
}
func (update SubmissionReviewUpdate) SetRecommend(recommend bool) {
datastore.OptionalMap(update).Add("recommend", recommend)
}
func (update SubmissionReviewUpdate) SetDescription(description string) {
datastore.OptionalMap(update).Add("description", description)
}
func (update SubmissionReviewUpdate) SetOutdated(outdated bool) {
datastore.OptionalMap(update).Add("outdated", outdated)
}
func (svc *Service) CreateSubmissionReview(ctx context.Context, review model.SubmissionReview) (model.SubmissionReview, error) {
return svc.db.SubmissionReviews().Create(ctx, review)
}
func (svc *Service) GetSubmissionReview(ctx context.Context, id int64) (model.SubmissionReview, error) {
return svc.db.SubmissionReviews().Get(ctx, id)
}
func (svc *Service) GetSubmissionReviewBySubmissionAndReviewer(ctx context.Context, submissionID int64, reviewerID uint64) (model.SubmissionReview, error) {
return svc.db.SubmissionReviews().GetBySubmissionAndReviewer(ctx, submissionID, reviewerID)
}
func (svc *Service) UpdateSubmissionReview(ctx context.Context, id int64, update SubmissionReviewUpdate) error {
return svc.db.SubmissionReviews().Update(ctx, id, datastore.OptionalMap(update))
}
func (svc *Service) DeleteSubmissionReview(ctx context.Context, id int64) error {
return svc.db.SubmissionReviews().Delete(ctx, id)
}
func (svc *Service) ListSubmissionReviewsBySubmission(ctx context.Context, submissionID int64) ([]model.SubmissionReview, error) {
return svc.db.SubmissionReviews().ListBySubmission(ctx, submissionID)
}
func (svc *Service) MarkSubmissionReviewsOutdated(ctx context.Context, submissionID int64) error {
return svc.db.SubmissionReviews().MarkOutdatedBySubmission(ctx, submissionID)
}

pkg/service/thumbnails.go (new file)

@@ -0,0 +1,218 @@
package service
import (
"context"
"encoding/json"
"fmt"
"time"
"git.itzana.me/strafesnet/maps-service/pkg/roblox"
"github.com/redis/go-redis/v9"
)
type ThumbnailService struct {
robloxClient *roblox.Client
redisClient *redis.Client
cacheTTL time.Duration
}
func NewThumbnailService(robloxClient *roblox.Client, redisClient *redis.Client) *ThumbnailService {
return &ThumbnailService{
robloxClient: robloxClient,
redisClient: redisClient,
cacheTTL: 24 * time.Hour, // Cache thumbnails for 24 hours
}
}
// CachedThumbnail represents a cached thumbnail entry
type CachedThumbnail struct {
ImageURL string `json:"imageUrl"`
State string `json:"state"`
CachedAt time.Time `json:"cachedAt"`
}
// GetAssetThumbnails fetches thumbnails with Redis caching and batching
func (s *ThumbnailService) GetAssetThumbnails(ctx context.Context, assetIDs []uint64, size roblox.ThumbnailSize) (map[uint64]string, error) {
if len(assetIDs) == 0 {
return map[uint64]string{}, nil
}
result := make(map[uint64]string)
var missingIDs []uint64
// Try to get from cache first
for _, assetID := range assetIDs {
cacheKey := fmt.Sprintf("thumbnail:asset:%d:%s", assetID, size)
cached, err := s.redisClient.Get(ctx, cacheKey).Result()
if err == redis.Nil {
// Cache miss
missingIDs = append(missingIDs, assetID)
} else if err != nil {
// Redis error - treat as cache miss
missingIDs = append(missingIDs, assetID)
} else {
// Cache hit
var thumbnail CachedThumbnail
if err := json.Unmarshal([]byte(cached), &thumbnail); err == nil && thumbnail.State == "Completed" {
result[assetID] = thumbnail.ImageURL
} else {
missingIDs = append(missingIDs, assetID)
}
}
}
// If all were cached, return early
if len(missingIDs) == 0 {
return result, nil
}
// Batch fetch missing thumbnails from Roblox API
// Split into batches of 100 (Roblox API limit)
for i := 0; i < len(missingIDs); i += 100 {
end := i + 100
if end > len(missingIDs) {
end = len(missingIDs)
}
batch := missingIDs[i:end]
thumbnails, err := s.robloxClient.GetAssetThumbnails(batch, size, roblox.FormatPng)
if err != nil {
return nil, fmt.Errorf("failed to fetch thumbnails: %w", err)
}
// Process results and cache them
for _, thumb := range thumbnails {
cached := CachedThumbnail{
ImageURL: thumb.ImageURL,
State: thumb.State,
CachedAt: time.Now(),
}
if thumb.State == "Completed" && thumb.ImageURL != "" {
result[thumb.TargetID] = thumb.ImageURL
}
// Cache the result (even if incomplete, to avoid repeated API calls)
cacheKey := fmt.Sprintf("thumbnail:asset:%d:%s", thumb.TargetID, size)
cachedJSON, _ := json.Marshal(cached)
// Use shorter TTL for incomplete thumbnails
ttl := s.cacheTTL
if thumb.State != "Completed" {
ttl = 5 * time.Minute
}
s.redisClient.Set(ctx, cacheKey, cachedJSON, ttl)
}
}
return result, nil
}
// GetUserAvatarThumbnails fetches user avatar thumbnails with Redis caching and batching
func (s *ThumbnailService) GetUserAvatarThumbnails(ctx context.Context, userIDs []uint64, size roblox.ThumbnailSize) (map[uint64]string, error) {
if len(userIDs) == 0 {
return map[uint64]string{}, nil
}
result := make(map[uint64]string)
var missingIDs []uint64
// Try to get from cache first
for _, userID := range userIDs {
cacheKey := fmt.Sprintf("thumbnail:user:%d:%s", userID, size)
cached, err := s.redisClient.Get(ctx, cacheKey).Result()
if err == redis.Nil {
// Cache miss
missingIDs = append(missingIDs, userID)
} else if err != nil {
// Redis error - treat as cache miss
missingIDs = append(missingIDs, userID)
} else {
// Cache hit
var thumbnail CachedThumbnail
if err := json.Unmarshal([]byte(cached), &thumbnail); err == nil && thumbnail.State == "Completed" {
result[userID] = thumbnail.ImageURL
} else {
missingIDs = append(missingIDs, userID)
}
}
}
// If all were cached, return early
if len(missingIDs) == 0 {
return result, nil
}
// Batch fetch missing thumbnails from Roblox API
// Split into batches of 100 (Roblox API limit)
for i := 0; i < len(missingIDs); i += 100 {
end := i + 100
if end > len(missingIDs) {
end = len(missingIDs)
}
batch := missingIDs[i:end]
thumbnails, err := s.robloxClient.GetUserAvatarThumbnails(batch, size, roblox.FormatPng)
if err != nil {
return nil, fmt.Errorf("failed to fetch user thumbnails: %w", err)
}
// Process results and cache them
for _, thumb := range thumbnails {
cached := CachedThumbnail{
ImageURL: thumb.ImageURL,
State: thumb.State,
CachedAt: time.Now(),
}
if thumb.State == "Completed" && thumb.ImageURL != "" {
result[thumb.TargetID] = thumb.ImageURL
}
// Cache the result
cacheKey := fmt.Sprintf("thumbnail:user:%d:%s", thumb.TargetID, size)
cachedJSON, _ := json.Marshal(cached)
// Use shorter TTL for incomplete thumbnails
ttl := s.cacheTTL
if thumb.State != "Completed" {
ttl = 5 * time.Minute
}
s.redisClient.Set(ctx, cacheKey, cachedJSON, ttl)
}
}
return result, nil
}
// GetSingleAssetThumbnail is a convenience method for fetching a single asset thumbnail
func (s *ThumbnailService) GetSingleAssetThumbnail(ctx context.Context, assetID uint64, size roblox.ThumbnailSize) (string, error) {
results, err := s.GetAssetThumbnails(ctx, []uint64{assetID}, size)
if err != nil {
return "", err
}
if url, ok := results[assetID]; ok {
return url, nil
}
return "", fmt.Errorf("thumbnail not available for asset %d", assetID)
}
// GetSingleUserAvatarThumbnail is a convenience method for fetching a single user avatar thumbnail
func (s *ThumbnailService) GetSingleUserAvatarThumbnail(ctx context.Context, userID uint64, size roblox.ThumbnailSize) (string, error) {
results, err := s.GetUserAvatarThumbnails(ctx, []uint64{userID}, size)
if err != nil {
return "", err
}
if url, ok := results[userID]; ok {
return url, nil
}
return "", fmt.Errorf("thumbnail not available for user %d", userID)
}
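Both fetchers above share the same batching loop: split the cache-missed IDs into chunks of at most 100 before calling the Roblox API. A standalone sketch of just that slicing step (the helper name `chunkIDs` is illustrative, not from the service):

```go
package main

import "fmt"

// chunkIDs splits ids into consecutive slices of at most size elements,
// mirroring the `for i := 0; i < len(missingIDs); i += 100` loop above.
func chunkIDs(ids []uint64, size int) [][]uint64 {
	var batches [][]uint64
	for i := 0; i < len(ids); i += size {
		end := i + size
		if end > len(ids) {
			end = len(ids)
		}
		batches = append(batches, ids[i:end])
	}
	return batches
}

func main() {
	ids := make([]uint64, 250)
	for i := range ids {
		ids[i] = uint64(i)
	}
	batches := chunkIDs(ids, 100)
	fmt.Println(len(batches), len(batches[0]), len(batches[2])) // 3 100 50
}
```

Slicing with `ids[i:end]` shares the backing array, so no copies are made per batch.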

pkg/service/users.go (new file)

@@ -0,0 +1,108 @@
package service
import (
"context"
"encoding/json"
"fmt"
"time"
"git.itzana.me/strafesnet/maps-service/pkg/roblox"
"github.com/redis/go-redis/v9"
)
// CachedUser represents a cached user entry
type CachedUser struct {
Name string `json:"name"`
DisplayName string `json:"displayName"`
CachedAt time.Time `json:"cachedAt"`
}
// GetUsernames fetches usernames with Redis caching and batching
func (s *ThumbnailService) GetUsernames(ctx context.Context, userIDs []uint64) (map[uint64]string, error) {
if len(userIDs) == 0 {
return map[uint64]string{}, nil
}
result := make(map[uint64]string)
var missingIDs []uint64
// Try to get from cache first
for _, userID := range userIDs {
cacheKey := fmt.Sprintf("user:name:%d", userID)
cached, err := s.redisClient.Get(ctx, cacheKey).Result()
if err == redis.Nil {
// Cache miss
missingIDs = append(missingIDs, userID)
} else if err != nil {
// Redis error - treat as cache miss
missingIDs = append(missingIDs, userID)
} else {
// Cache hit
var user CachedUser
if err := json.Unmarshal([]byte(cached), &user); err == nil && user.Name != "" {
result[userID] = user.Name
} else {
missingIDs = append(missingIDs, userID)
}
}
}
// If all were cached, return early
if len(missingIDs) == 0 {
return result, nil
}
// Batch fetch missing usernames from Roblox API
// Split into batches of 100 (Roblox API limit)
for i := 0; i < len(missingIDs); i += 100 {
end := i + 100
if end > len(missingIDs) {
end = len(missingIDs)
}
batch := missingIDs[i:end]
users, err := s.robloxClient.GetUsernames(batch)
if err != nil {
return nil, fmt.Errorf("failed to fetch usernames: %w", err)
}
// Process results and cache them
for _, user := range users {
cached := CachedUser{
Name: user.Name,
DisplayName: user.DisplayName,
CachedAt: time.Now(),
}
if user.Name != "" {
result[user.ID] = user.Name
}
// Cache the result
cacheKey := fmt.Sprintf("user:name:%d", user.ID)
cachedJSON, _ := json.Marshal(cached)
// Cache usernames for a long time (7 days) since they rarely change
s.redisClient.Set(ctx, cacheKey, cachedJSON, 7*24*time.Hour)
}
}
return result, nil
}
// GetSingleUsername is a convenience method for fetching a single username
func (s *ThumbnailService) GetSingleUsername(ctx context.Context, userID uint64) (string, error) {
results, err := s.GetUsernames(ctx, []uint64{userID})
if err != nil {
return "", err
}
if name, ok := results[userID]; ok {
return name, nil
}
return "", fmt.Errorf("username not available for user %d", userID)
}


@@ -26,6 +26,8 @@ func NewMapfixesController(
var(
// prevent two mapfixes with same asset id
ActiveMapfixStatuses = []model.MapfixStatus{
model.MapfixStatusReleasing,
model.MapfixStatusUploaded,
model.MapfixStatusUploading,
model.MapfixStatusValidated,
model.MapfixStatusValidating,
@@ -184,7 +186,7 @@ func (svc *Mapfixes) SetStatusValidated(ctx context.Context, params *validator.M
// (Internal endpoint) Role Validator changes status from Validating -> AcceptedUnvalidated.
//
// POST /mapfixes/{MapfixID}/status/validator-failed
func (svc *Mapfixes) SetStatusFailed(ctx context.Context, params *validator.MapfixID) (*validator.NullResponse, error) {
func (svc *Mapfixes) SetStatusNotValidated(ctx context.Context, params *validator.MapfixID) (*validator.NullResponse, error) {
MapfixID := int64(params.ID)
// transaction
target_status := model.MapfixStatusAcceptedUnvalidated
@@ -253,6 +255,117 @@ func (svc *Mapfixes) SetStatusUploaded(ctx context.Context, params *validator.Ma
return &validator.NullResponse{}, nil
}
func (svc *Mapfixes) SetStatusNotUploaded(ctx context.Context, params *validator.MapfixID) (*validator.NullResponse, error) {
MapfixID := int64(params.ID)
// transaction
target_status := model.MapfixStatusValidated
update := service.NewMapfixUpdate()
update.SetStatusID(target_status)
allow_statuses := []model.MapfixStatus{model.MapfixStatusUploading}
err := svc.inner.UpdateMapfixIfStatus(ctx, MapfixID, allow_statuses, update)
if err != nil {
return nil, err
}
// push an action audit event
event_data := model.AuditEventDataAction{
TargetStatus: uint32(target_status),
}
err = svc.inner.CreateAuditEventAction(
ctx,
model.ValidatorUserID,
model.Resource{
ID: MapfixID,
Type: model.ResourceMapfix,
},
event_data,
)
if err != nil {
return nil, err
}
return &validator.NullResponse{}, nil
}
// ActionMapfixReleased implements actionMapfixReleased operation.
//
// (Internal endpoint) Role Validator changes status from Releasing -> Released.
//
// POST /mapfixes/{MapfixID}/status/validator-released
func (svc *Mapfixes) SetStatusReleased(ctx context.Context, params *validator.MapfixReleaseRequest) (*validator.NullResponse, error) {
MapfixID := int64(params.MapfixID)
// transaction
target_status := model.MapfixStatusReleased
update := service.NewMapfixUpdate()
update.SetStatusID(target_status)
allow_statuses := []model.MapfixStatus{model.MapfixStatusReleasing}
err := svc.inner.UpdateMapfixIfStatus(ctx, MapfixID, allow_statuses, update)
if err != nil {
return nil, err
}
event_data := model.AuditEventDataAction{
TargetStatus: uint32(target_status),
}
err = svc.inner.CreateAuditEventAction(
ctx,
model.ValidatorUserID,
model.Resource{
ID: MapfixID,
Type: model.ResourceMapfix,
},
event_data,
)
if err != nil {
return nil, err
}
// metadata maintenance
map_update := service.NewMapUpdate()
map_update.SetAssetVersion(params.AssetVersion)
map_update.SetModes(params.Modes)
err = svc.inner.UpdateMap(ctx, int64(params.TargetAssetID), map_update)
if err != nil {
return nil, err
}
return &validator.NullResponse{}, nil
}
func (svc *Mapfixes) SetStatusNotReleased(ctx context.Context, params *validator.MapfixID) (*validator.NullResponse, error) {
MapfixID := int64(params.ID)
// transaction
target_status := model.MapfixStatusUploaded
update := service.NewMapfixUpdate()
update.SetStatusID(target_status)
allow_statuses := []model.MapfixStatus{model.MapfixStatusReleasing}
err := svc.inner.UpdateMapfixIfStatus(ctx, MapfixID, allow_statuses, update)
if err != nil {
return nil, err
}
// push an action audit event
event_data := model.AuditEventDataAction{
TargetStatus: uint32(target_status),
}
err = svc.inner.CreateAuditEventAction(
ctx,
model.ValidatorUserID,
model.Resource{
ID: MapfixID,
Type: model.ResourceMapfix,
},
event_data,
)
if err != nil {
return nil, err
}
return &validator.NullResponse{}, nil
}
// CreateMapfixAuditError implements createMapfixAuditError operation.
//


@@ -19,6 +19,18 @@ func NewOperationsController(
}
}
func (svc *Operations) Success(ctx context.Context, params *validator.OperationSuccessRequest) (*validator.NullResponse, error) {
success_params := service.NewOperationCompleteParams(
params.Path,
)
err := svc.inner.CompleteOperation(ctx, int32(params.OperationID), success_params)
if err != nil {
return nil, err
}
return &validator.NullResponse{}, nil
}
// ActionOperationFailed implements actionOperationFailed operation.
//
// Fail the specified OperationID with a StatusMessage.


@@ -4,6 +4,7 @@ import (
"context"
"errors"
"fmt"
"time"
"git.itzana.me/strafesnet/go-grpc/validator"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
@@ -24,7 +25,7 @@ func NewSubmissionsController(
}
var(
// prevent two mapfixes with same asset id
// prevent two submissions with same asset id
ActiveSubmissionStatuses = []model.SubmissionStatus{
model.SubmissionStatusUploading,
model.SubmissionStatusValidated,
@@ -202,7 +203,7 @@ func (svc *Submissions) SetStatusValidated(ctx context.Context, params *validato
// (Internal endpoint) Role Validator changes status from Validating -> AcceptedUnvalidated.
//
// POST /submissions/{SubmissionID}/status/validator-failed
func (svc *Submissions) SetStatusFailed(ctx context.Context, params *validator.SubmissionID) (*validator.NullResponse, error) {
func (svc *Submissions) SetStatusNotValidated(ctx context.Context, params *validator.SubmissionID) (*validator.NullResponse, error) {
SubmissionID := int64(params.ID)
// transaction
target_status := model.SubmissionStatusAcceptedUnvalidated
@@ -273,6 +274,68 @@ func (svc *Submissions) SetStatusUploaded(ctx context.Context, params *validator
return &validator.NullResponse{}, nil
}
func (svc *Submissions) SetStatusNotUploaded(ctx context.Context, params *validator.SubmissionID) (*validator.NullResponse, error) {
SubmissionID := int64(params.ID)
// transaction
target_status := model.SubmissionStatusValidated
update := service.NewSubmissionUpdate()
update.SetStatusID(target_status)
allowed_statuses := []model.SubmissionStatus{model.SubmissionStatusUploading}
err := svc.inner.UpdateSubmissionIfStatus(ctx, SubmissionID, allowed_statuses, update)
if err != nil {
return nil, err
}
// push an action audit event
event_data := model.AuditEventDataAction{
TargetStatus: uint32(target_status),
}
err = svc.inner.CreateAuditEventAction(
ctx,
model.ValidatorUserID,
model.Resource{
ID: SubmissionID,
Type: model.ResourceSubmission,
},
event_data,
)
if err != nil {
return nil, err
}
return &validator.NullResponse{}, nil
}
func (svc *Submissions) SetStatusReleased(ctx context.Context, params *validator.SubmissionReleaseRequest) (*validator.NullResponse, error) {
// create map with go-grpc
_, err := svc.inner.CreateMap(ctx, model.Map{
ID: params.MapCreate.ID,
DisplayName: params.MapCreate.DisplayName,
Creator: params.MapCreate.Creator,
GameID: params.MapCreate.GameID,
Date: time.Unix(params.MapCreate.Date, 0),
Submitter: params.MapCreate.Submitter,
Thumbnail: 0,
AssetVersion: params.MapCreate.AssetVersion,
LoadCount: 0,
Modes: params.MapCreate.Modes,
})
if err != nil {
return nil, err
}
// update status to Released
update := service.NewSubmissionUpdate()
update.SetStatusID(model.SubmissionStatusReleased)
err = svc.inner.UpdateSubmissionIfStatus(ctx, int64(params.SubmissionID), []model.SubmissionStatus{model.SubmissionStatusUploaded}, update)
if err != nil {
return nil, err
}
return &validator.NullResponse{}, nil
}
// CreateSubmissionAuditError implements createSubmissionAuditError operation.
//
// Post an error to the audit log

pkg/web_api/aor_events.go (new file)

@@ -0,0 +1,121 @@
package web_api
import (
"context"
"git.itzana.me/strafesnet/maps-service/pkg/api"
"git.itzana.me/strafesnet/maps-service/pkg/model"
)
// ListAOREvents implements listAOREvents operation.
//
// Get list of AOR events.
//
// GET /aor-events
func (svc *Service) ListAOREvents(ctx context.Context, params api.ListAOREventsParams) ([]api.AOREvent, error) {
page := model.Page{
Number: params.Page,
Size: params.Limit,
}
events, err := svc.inner.ListAOREvents(ctx, page)
if err != nil {
return nil, err
}
var resp []api.AOREvent
for _, event := range events {
resp = append(resp, api.AOREvent{
ID: event.ID,
StartDate: event.StartDate.Unix(),
FreezeDate: event.FreezeDate.Unix(),
SelectionDate: event.SelectionDate.Unix(),
DecisionDate: event.DecisionDate.Unix(),
Status: int32(event.Status),
CreatedAt: event.CreatedAt.Unix(),
UpdatedAt: event.UpdatedAt.Unix(),
})
}
return resp, nil
}
// GetActiveAOREvent implements getActiveAOREvent operation.
//
// Get the currently active AOR event.
//
// GET /aor-events/active
func (svc *Service) GetActiveAOREvent(ctx context.Context) (*api.AOREvent, error) {
event, err := svc.inner.GetActiveAOREvent(ctx)
if err != nil {
return nil, err
}
return &api.AOREvent{
ID: event.ID,
StartDate: event.StartDate.Unix(),
FreezeDate: event.FreezeDate.Unix(),
SelectionDate: event.SelectionDate.Unix(),
DecisionDate: event.DecisionDate.Unix(),
Status: int32(event.Status),
CreatedAt: event.CreatedAt.Unix(),
UpdatedAt: event.UpdatedAt.Unix(),
}, nil
}
// GetAOREvent implements getAOREvent operation.
//
// Get a specific AOR event.
//
// GET /aor-events/{AOREventID}
func (svc *Service) GetAOREvent(ctx context.Context, params api.GetAOREventParams) (*api.AOREvent, error) {
event, err := svc.inner.GetAOREvent(ctx, params.AOREventID)
if err != nil {
return nil, err
}
return &api.AOREvent{
ID: event.ID,
StartDate: event.StartDate.Unix(),
FreezeDate: event.FreezeDate.Unix(),
SelectionDate: event.SelectionDate.Unix(),
DecisionDate: event.DecisionDate.Unix(),
Status: int32(event.Status),
CreatedAt: event.CreatedAt.Unix(),
UpdatedAt: event.UpdatedAt.Unix(),
}, nil
}
// GetAOREventSubmissions implements getAOREventSubmissions operation.
//
// Get all submissions for a specific AOR event.
//
// GET /aor-events/{AOREventID}/submissions
func (svc *Service) GetAOREventSubmissions(ctx context.Context, params api.GetAOREventSubmissionsParams) ([]api.Submission, error) {
submissions, err := svc.inner.GetAORSubmissionsByEvent(ctx, params.AOREventID)
if err != nil {
return nil, err
}
var resp []api.Submission
for _, submission := range submissions {
resp = append(resp, api.Submission{
ID: submission.ID,
DisplayName: submission.DisplayName,
Creator: submission.Creator,
GameID: int32(submission.GameID),
CreatedAt: submission.CreatedAt.Unix(),
UpdatedAt: submission.UpdatedAt.Unix(),
Submitter: int64(submission.Submitter),
AssetID: int64(submission.AssetID),
AssetVersion: int64(submission.AssetVersion),
ValidatedAssetID: api.NewOptInt64(int64(submission.ValidatedAssetID)),
ValidatedAssetVersion: api.NewOptInt64(int64(submission.ValidatedAssetVersion)),
Completed: submission.Completed,
UploadedAssetID: api.NewOptInt64(int64(submission.UploadedAssetID)),
StatusID: int32(submission.StatusID),
})
}
return resp, nil
}


@@ -22,6 +22,8 @@ var(
}
// limit mapfixes in the pipeline to one per target map
ActiveAcceptedMapfixStatuses = []model.MapfixStatus{
model.MapfixStatusReleasing,
model.MapfixStatusUploaded,
model.MapfixStatusUploading,
model.MapfixStatusValidated,
model.MapfixStatusValidating,
@@ -193,6 +195,9 @@ func (svc *Service) ListMapfixes(ctx context.Context, params api.ListMapfixesPar
if asset_id, asset_id_ok := params.AssetID.Get(); asset_id_ok{
filter.SetAssetID(uint64(asset_id))
}
if asset_version, asset_version_ok := params.AssetVersion.Get(); asset_version_ok{
filter.SetAssetVersion(uint64(asset_version))
}
if target_asset_id, target_asset_id_ok := params.TargetAssetID.Get(); target_asset_id_ok{
filter.SetTargetAssetID(uint64(target_asset_id))
}
@@ -322,6 +327,48 @@ func (svc *Service) UpdateMapfixModel(ctx context.Context, params api.UpdateMapf
)
}
// UpdateMapfixDescription implements updateMapfixDescription operation.
//
// Update description (submitter only, status ChangesRequested or UnderConstruction).
//
// PATCH /mapfixes/{MapfixID}/description
func (svc *Service) UpdateMapfixDescription(ctx context.Context, req api.UpdateMapfixDescriptionReq, params api.UpdateMapfixDescriptionParams) error {
userInfo, ok := ctx.Value("UserInfo").(UserInfoHandle)
if !ok {
return ErrUserInfo
}
// read mapfix
mapfix, err := svc.inner.GetMapfix(ctx, params.MapfixID)
if err != nil {
return err
}
userId, err := userInfo.GetUserID()
if err != nil {
return err
}
// check if caller is the submitter
if userId != mapfix.Submitter {
return ErrPermissionDeniedNotSubmitter
}
// read the new description from request body
data, err := io.ReadAll(req)
if err != nil {
return err
}
newDescription := string(data)
// check if Status is ChangesRequested or UnderConstruction
update := service.NewMapfixUpdate()
update.SetDescription(newDescription)
allow_statuses := []model.MapfixStatus{model.MapfixStatusChangesRequested, model.MapfixStatusUnderConstruction}
return svc.inner.UpdateMapfixIfStatus(ctx, params.MapfixID, allow_statuses, update)
}
// ActionMapfixReject invokes actionMapfixReject operation.
//
// Role Reviewer changes status from Submitted -> Rejected.
@@ -786,6 +833,127 @@ func (svc *Service) ActionMapfixValidated(ctx context.Context, params api.Action
)
}
// ActionMapfixTriggerRelease invokes actionMapfixTriggerRelease operation.
//
// Role MapfixUpload changes status from Uploaded -> Releasing.
//
// POST /mapfixes/{MapfixID}/status/trigger-release
func (svc *Service) ActionMapfixTriggerRelease(ctx context.Context, params api.ActionMapfixTriggerReleaseParams) error {
userInfo, ok := ctx.Value("UserInfo").(UserInfoHandle)
if !ok {
return ErrUserInfo
}
has_role, err := userInfo.HasRoleMapfixUpload()
if err != nil {
return err
}
// check if caller has required role
if !has_role {
return ErrPermissionDeniedNeedRoleMapfixUpload
}
userId, err := userInfo.GetUserID()
if err != nil {
return err
}
// transaction
target_status := model.MapfixStatusReleasing
update := service.NewMapfixUpdate()
update.SetStatusID(target_status)
allow_statuses := []model.MapfixStatus{model.MapfixStatusUploaded}
mapfix, err := svc.inner.UpdateAndGetMapfixIfStatus(ctx, params.MapfixID, allow_statuses, update)
if err != nil {
return err
}
// this is a map fix
err = svc.inner.NatsReleaseMapfix(
mapfix.ID,
mapfix.ValidatedAssetID,
mapfix.ValidatedAssetVersion,
mapfix.TargetAssetID,
)
if err != nil {
return err
}
event_data := model.AuditEventDataAction{
TargetStatus: uint32(target_status),
}
return svc.inner.CreateAuditEventAction(
ctx,
userId,
model.Resource{
ID: params.MapfixID,
Type: model.ResourceMapfix,
},
event_data,
)
}
// ActionMapfixUploaded invokes actionMapfixUploaded operation.
//
// Role MapfixUpload manually resets releasing softlock and changes status from Releasing -> Uploaded.
//
// POST /mapfixes/{MapfixID}/status/reset-releasing
func (svc *Service) ActionMapfixUploaded(ctx context.Context, params api.ActionMapfixUploadedParams) error {
userInfo, ok := ctx.Value("UserInfo").(UserInfoHandle)
if !ok {
return ErrUserInfo
}
has_role, err := userInfo.HasRoleMapfixUpload()
if err != nil {
return err
}
// check if caller has required role
if !has_role {
return ErrPermissionDeniedNeedRoleMapfixUpload
}
userId, err := userInfo.GetUserID()
if err != nil {
return err
}
// check when mapfix was updated
mapfix, err := svc.inner.GetMapfix(ctx, params.MapfixID)
if err != nil {
return err
}
if time.Now().Before(mapfix.UpdatedAt.Add(time.Second*10)) {
// the last time the mapfix was updated must be longer than 10 seconds ago
return ErrDelayReset
}
// transaction
target_status := model.MapfixStatusUploaded
update := service.NewMapfixUpdate()
update.SetStatusID(target_status)
allow_statuses := []model.MapfixStatus{model.MapfixStatusReleasing}
err = svc.inner.UpdateMapfixIfStatus(ctx, params.MapfixID, allow_statuses, update)
if err != nil {
return err
}
event_data := model.AuditEventDataAction{
TargetStatus: uint32(target_status),
}
return svc.inner.CreateAuditEventAction(
ctx,
userId,
model.Resource{
ID: params.MapfixID,
Type: model.ResourceMapfix,
},
event_data,
)
}
// ActionMapfixTriggerValidate invokes actionMapfixTriggerValidate operation.
//
// Role Reviewer triggers validation and changes status from Submitted -> Validating.


@@ -36,10 +36,28 @@ func (svc *Service) CreateScript(ctx context.Context, req *api.ScriptCreate) (*a
return nil, err
}
hash := int64(model.HashSource(req.Source))
// Check if a script with this hash already exists
filter := service.NewScriptFilter()
filter.SetHash(hash)
existingScripts, err := svc.inner.ListScripts(ctx, filter, model.Page{Number: 1, Size: 1})
if err != nil {
return nil, err
}
// If script with this hash exists, return existing script ID
if len(existingScripts) > 0 {
return &api.ScriptID{
ScriptID: existingScripts[0].ID,
}, nil
}
// Create new script
script, err := svc.inner.CreateScript(ctx, model.Script{
ID: 0,
Name: req.Name,
Hash: int64(model.HashSource(req.Source)),
Hash: hash,
Source: req.Source,
ResourceType: model.ResourceType(req.ResourceType),
ResourceID: req.ResourceID.Or(0),
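The script dedup above hashes the source once, looks up an existing row by that hash, and only inserts on a miss. A self-contained sketch of that pattern (`hashSource` here uses FNV-1a purely for illustration; the repo's `model.HashSource` is defined elsewhere and may differ):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hashSource is a stand-in for model.HashSource (assumed, not the real one).
func hashSource(src string) int64 {
	h := fnv.New64a()
	h.Write([]byte(src))
	return int64(h.Sum64())
}

type scriptStore struct {
	byHash map[int64]int64 // content hash -> script ID
	nextID int64
}

// getOrCreate returns the existing ID for identical source, mirroring the
// ListScripts-by-hash check before svc.inner.CreateScript above.
func (s *scriptStore) getOrCreate(src string) int64 {
	h := hashSource(src)
	if id, ok := s.byHash[h]; ok {
		return id
	}
	s.nextID++
	s.byHash[h] = s.nextID
	return s.nextID
}

func main() {
	store := &scriptStore{byHash: map[int64]int64{}}
	a := store.getOrCreate("print('hi')")
	b := store.getOrCreate("print('hi')")
	c := store.getOrCreate("print('bye')")
	fmt.Println(a == b, a != c) // true true
}
```

Note the endpoint returns the existing `ScriptID` rather than an error, so duplicate submissions are idempotent for callers.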


@@ -58,7 +58,7 @@ func (usr UserInfoHandle) Validate() (bool, error) {
}
return validate.Valid, nil
}
func (usr UserInfoHandle) hasRoles(wantRoles model.Roles) (bool, error) {
func (usr UserInfoHandle) HasRoles(wantRoles model.Roles) (bool, error) {
haveroles, err := usr.GetRoles()
if err != nil {
return false, err
@@ -94,25 +94,25 @@ func (usr UserInfoHandle) GetRoles() (model.Roles, error) {
// RoleThumbnail
func (usr UserInfoHandle) HasRoleMapfixUpload() (bool, error) {
return usr.hasRoles(model.RolesMapfixUpload)
return usr.HasRoles(model.RolesMapfixUpload)
}
func (usr UserInfoHandle) HasRoleMapfixReview() (bool, error) {
return usr.hasRoles(model.RolesMapfixReview)
return usr.HasRoles(model.RolesMapfixReview)
}
func (usr UserInfoHandle) HasRoleMapDownload() (bool, error) {
return usr.hasRoles(model.RolesMapDownload)
return usr.HasRoles(model.RolesMapDownload)
}
func (usr UserInfoHandle) HasRoleSubmissionRelease() (bool, error) {
return usr.hasRoles(model.RolesSubmissionRelease)
return usr.HasRoles(model.RolesSubmissionRelease)
}
func (usr UserInfoHandle) HasRoleSubmissionUpload() (bool, error) {
return usr.hasRoles(model.RolesSubmissionUpload)
return usr.HasRoles(model.RolesSubmissionUpload)
}
func (usr UserInfoHandle) HasRoleSubmissionReview() (bool, error) {
return usr.hasRoles(model.RolesSubmissionReview)
return usr.HasRoles(model.RolesSubmissionReview)
}
func (usr UserInfoHandle) HasRoleScriptWrite() (bool, error) {
return usr.hasRoles(model.RolesScriptWrite)
return usr.HasRoles(model.RolesScriptWrite)
}
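The exported `HasRoles` drives all the `HasRole*` wrappers above: it fetches the caller's roles once and checks that every wanted role is present. A sketch of that containment check, assuming roles are a bitmask (the actual `model.Roles` representation lives in `pkg/model` and may differ):

```go
package main

import "fmt"

type Roles uint32

const (
	RoleMapfixUpload Roles = 1 << iota
	RoleMapfixReview
	RoleScriptWrite
)

// hasRoles reports whether have contains every bit of want,
// i.e. the caller holds all required roles.
func hasRoles(have, want Roles) bool {
	return have&want == want
}

func main() {
	have := RoleMapfixUpload | RoleScriptWrite
	fmt.Println(hasRoles(have, RoleMapfixUpload), hasRoles(have, RoleMapfixReview)) // true false
}
```

Keeping the wrappers thin means each endpoint names the one role it needs while the containment logic stays in a single place.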
/// Not implemented
func (usr UserInfoHandle) HasRoleMaptest() (bool, error) {

pkg/web_api/stats.go (new file)

@@ -0,0 +1,105 @@
package web_api
import (
"context"
"git.itzana.me/strafesnet/maps-service/pkg/api"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
"git.itzana.me/strafesnet/maps-service/pkg/service"
)
// GET /stats
func (svc *Service) GetStats(ctx context.Context) (*api.Stats, error) {
// Get total submissions count
totalSubmissions, _, err := svc.inner.ListSubmissionsWithTotal(ctx, service.NewSubmissionFilter(), model.Page{
Number: 1,
Size: 0, // We only want the count, not the items
}, datastore.ListSortDisabled)
if err != nil {
return nil, err
}
// Get total mapfixes count
totalMapfixes, _, err := svc.inner.ListMapfixesWithTotal(ctx, service.NewMapfixFilter(), model.Page{
Number: 1,
Size: 0, // We only want the count, not the items
}, datastore.ListSortDisabled)
if err != nil {
return nil, err
}
// Get released submissions count
releasedSubmissionsFilter := service.NewSubmissionFilter()
releasedSubmissionsFilter.SetStatuses([]model.SubmissionStatus{model.SubmissionStatusReleased})
releasedSubmissions, _, err := svc.inner.ListSubmissionsWithTotal(ctx, releasedSubmissionsFilter, model.Page{
Number: 1,
Size: 0,
}, datastore.ListSortDisabled)
if err != nil {
return nil, err
}
// Get released mapfixes count
releasedMapfixesFilter := service.NewMapfixFilter()
releasedMapfixesFilter.SetStatuses([]model.MapfixStatus{model.MapfixStatusReleased})
releasedMapfixes, _, err := svc.inner.ListMapfixesWithTotal(ctx, releasedMapfixesFilter, model.Page{
Number: 1,
Size: 0,
}, datastore.ListSortDisabled)
if err != nil {
return nil, err
}
// Get submitted submissions count (under review)
submittedSubmissionsFilter := service.NewSubmissionFilter()
submittedSubmissionsFilter.SetStatuses([]model.SubmissionStatus{
model.SubmissionStatusUnderConstruction,
model.SubmissionStatusChangesRequested,
model.SubmissionStatusSubmitting,
model.SubmissionStatusSubmitted,
model.SubmissionStatusAcceptedUnvalidated,
model.SubmissionStatusValidating,
model.SubmissionStatusValidated,
model.SubmissionStatusUploading,
model.SubmissionStatusUploaded,
})
submittedSubmissions, _, err := svc.inner.ListSubmissionsWithTotal(ctx, submittedSubmissionsFilter, model.Page{
Number: 1,
Size: 0,
}, datastore.ListSortDisabled)
if err != nil {
return nil, err
}
// Get submitted mapfixes count (under review)
submittedMapfixesFilter := service.NewMapfixFilter()
submittedMapfixesFilter.SetStatuses([]model.MapfixStatus{
model.MapfixStatusUnderConstruction,
model.MapfixStatusChangesRequested,
model.MapfixStatusSubmitting,
model.MapfixStatusSubmitted,
model.MapfixStatusAcceptedUnvalidated,
model.MapfixStatusValidating,
model.MapfixStatusValidated,
model.MapfixStatusUploading,
model.MapfixStatusUploaded,
model.MapfixStatusReleasing,
})
submittedMapfixes, _, err := svc.inner.ListMapfixesWithTotal(ctx, submittedMapfixesFilter, model.Page{
Number: 1,
Size: 0,
}, datastore.ListSortDisabled)
if err != nil {
return nil, err
}
return &api.Stats{
TotalSubmissions: totalSubmissions,
TotalMapfixes: totalMapfixes,
ReleasedSubmissions: releasedSubmissions,
ReleasedMapfixes: releasedMapfixes,
SubmittedSubmissions: submittedSubmissions,
SubmittedMapfixes: submittedMapfixes,
}, nil
}


@@ -0,0 +1,207 @@
package web_api
import (
"context"
"errors"
"git.itzana.me/strafesnet/maps-service/pkg/api"
"git.itzana.me/strafesnet/maps-service/pkg/datastore"
"git.itzana.me/strafesnet/maps-service/pkg/model"
"git.itzana.me/strafesnet/maps-service/pkg/service"
)
var (
ErrReviewNotOwner = errors.New("You can only edit your own review")
ErrReviewNotSubmitted = errors.New("Reviews can only be created or edited when the submission is in Submitted status")
)
// ListSubmissionReviews implements listSubmissionReviews operation.
//
// Get all reviews for a submission.
//
// GET /submissions/{SubmissionID}/reviews
func (svc *Service) ListSubmissionReviews(ctx context.Context, params api.ListSubmissionReviewsParams) ([]api.SubmissionReview, error) {
reviews, err := svc.inner.ListSubmissionReviewsBySubmission(ctx, params.SubmissionID)
if err != nil {
return nil, err
}
var resp []api.SubmissionReview
for _, review := range reviews {
resp = append(resp, api.SubmissionReview{
ID: review.ID,
SubmissionID: review.SubmissionID,
ReviewerID: int64(review.ReviewerID),
Recommend: review.Recommend,
Description: review.Description,
Outdated: review.Outdated,
CreatedAt: review.CreatedAt.Unix(),
UpdatedAt: review.UpdatedAt.Unix(),
})
}
return resp, nil
}
// CreateSubmissionReview implements createSubmissionReview operation.
//
// Create a review for a submission.
//
// POST /submissions/{SubmissionID}/reviews
func (svc *Service) CreateSubmissionReview(ctx context.Context, req *api.SubmissionReviewCreate, params api.CreateSubmissionReviewParams) (*api.SubmissionReview, error) {
userInfo, ok := ctx.Value("UserInfo").(UserInfoHandle)
if !ok {
return nil, ErrUserInfo
}
// Check if caller has required role
has_role, err := userInfo.HasRoleSubmissionReview()
if err != nil {
return nil, err
}
if !has_role {
return nil, ErrPermissionDeniedNeedRoleSubmissionReview
}
userId, err := userInfo.GetUserID()
if err != nil {
return nil, err
}
// Check if submission exists and is in Submitted status
submission, err := svc.inner.GetSubmission(ctx, params.SubmissionID)
if err != nil {
return nil, err
}
if submission.StatusID != model.SubmissionStatusSubmitted {
return nil, ErrReviewNotSubmitted
}
// Check if user already has a review for this submission
existingReview, err := svc.inner.GetSubmissionReviewBySubmissionAndReviewer(ctx, params.SubmissionID, userId)
if err != nil && !errors.Is(err, datastore.ErrNotExist) {
return nil, err
}
// If review exists, update it instead
if err == nil {
update := service.NewSubmissionReviewUpdate()
update.SetRecommend(req.Recommend)
update.SetDescription(req.Description)
update.SetOutdated(false)
err = svc.inner.UpdateSubmissionReview(ctx, existingReview.ID, update)
if err != nil {
return nil, err
}
// Fetch updated review
updatedReview, err := svc.inner.GetSubmissionReview(ctx, existingReview.ID)
if err != nil {
return nil, err
}
return &api.SubmissionReview{
ID: updatedReview.ID,
SubmissionID: updatedReview.SubmissionID,
ReviewerID: int64(updatedReview.ReviewerID),
Recommend: updatedReview.Recommend,
Description: updatedReview.Description,
Outdated: updatedReview.Outdated,
CreatedAt: updatedReview.CreatedAt.Unix(),
UpdatedAt: updatedReview.UpdatedAt.Unix(),
}, nil
}
// Create new review
review := model.SubmissionReview{
SubmissionID: params.SubmissionID,
ReviewerID: userId,
Recommend: req.Recommend,
Description: req.Description,
Outdated: false,
}
createdReview, err := svc.inner.CreateSubmissionReview(ctx, review)
if err != nil {
return nil, err
}
return &api.SubmissionReview{
ID: createdReview.ID,
SubmissionID: createdReview.SubmissionID,
ReviewerID: int64(createdReview.ReviewerID),
Recommend: createdReview.Recommend,
Description: createdReview.Description,
Outdated: createdReview.Outdated,
CreatedAt: createdReview.CreatedAt.Unix(),
UpdatedAt: createdReview.UpdatedAt.Unix(),
}, nil
}
// UpdateSubmissionReview implements updateSubmissionReview operation.
//
// Update an existing review.
//
// PATCH /submissions/{SubmissionID}/reviews/{ReviewID}
func (svc *Service) UpdateSubmissionReview(ctx context.Context, req *api.SubmissionReviewCreate, params api.UpdateSubmissionReviewParams) (*api.SubmissionReview, error) {
userInfo, ok := ctx.Value("UserInfo").(UserInfoHandle)
if !ok {
return nil, ErrUserInfo
}
userId, err := userInfo.GetUserID()
if err != nil {
return nil, err
}
// Get the existing review
review, err := svc.inner.GetSubmissionReview(ctx, params.ReviewID)
if err != nil {
return nil, err
}
// Check if user is the owner of the review
if review.ReviewerID != userId {
return nil, ErrReviewNotOwner
}
// Check if submission is still in Submitted status
submission, err := svc.inner.GetSubmission(ctx, params.SubmissionID)
if err != nil {
return nil, err
}
if submission.StatusID != model.SubmissionStatusSubmitted {
return nil, ErrReviewNotSubmitted
}
// Update the review
update := service.NewSubmissionReviewUpdate()
update.SetRecommend(req.Recommend)
update.SetDescription(req.Description)
update.SetOutdated(false) // Clear outdated flag on edit
err = svc.inner.UpdateSubmissionReview(ctx, params.ReviewID, update)
if err != nil {
return nil, err
}
// Fetch updated review
updatedReview, err := svc.inner.GetSubmissionReview(ctx, params.ReviewID)
if err != nil {
return nil, err
}
return &api.SubmissionReview{
ID: updatedReview.ID,
SubmissionID: updatedReview.SubmissionID,
ReviewerID: int64(updatedReview.ReviewerID),
Recommend: updatedReview.Recommend,
Description: updatedReview.Description,
Outdated: updatedReview.Outdated,
CreatedAt: updatedReview.CreatedAt.Unix(),
UpdatedAt: updatedReview.UpdatedAt.Unix(),
}, nil
}

View File

@@ -20,13 +20,6 @@ var(
model.SubmissionStatusSubmitted,
model.SubmissionStatusUnderConstruction,
}
// limit submissions in the pipeline to one per target map
ActiveAcceptedSubmissionStatuses = []model.SubmissionStatus{
model.SubmissionStatusUploading,
model.SubmissionStatusValidated,
model.SubmissionStatusValidating,
model.SubmissionStatusAcceptedUnvalidated,
}
// Allow 5 submissions every 10 minutes
CreateSubmissionRateLimit int64 = 5
CreateSubmissionRecencyWindow = time.Second*600
@@ -236,6 +229,9 @@ func (svc *Service) ListSubmissions(ctx context.Context, params api.ListSubmissi
if asset_id, asset_id_ok := params.AssetID.Get(); asset_id_ok{
filter.SetAssetID(uint64(asset_id))
}
if asset_version, asset_version_ok := params.AssetVersion.Get(); asset_version_ok{
filter.SetAssetVersion(uint64(asset_version))
}
if uploaded_asset_id, uploaded_asset_id_ok := params.UploadedAssetID.Get(); uploaded_asset_id_ok{
filter.SetUploadedAssetID(uint64(uploaded_asset_id))
}
@@ -1038,19 +1034,24 @@ func (svc *Service) ActionSubmissionAccepted(ctx context.Context, params api.Act
// Release a set of uploaded maps.
//
// POST /release-submissions
func (svc *Service) ReleaseSubmissions(ctx context.Context, request []api.ReleaseInfo) error {
func (svc *Service) ReleaseSubmissions(ctx context.Context, request []api.ReleaseInfo) (*api.OperationID, error) {
userInfo, ok := ctx.Value("UserInfo").(UserInfoHandle)
if !ok {
return ErrUserInfo
return nil, ErrUserInfo
}
has_role, err := userInfo.HasRoleSubmissionRelease()
if err != nil {
return err
return nil, err
}
// check if caller has required role
if !has_role {
return ErrPermissionDeniedNeedRoleSubmissionRelease
return nil, ErrPermissionDeniedNeedRoleSubmissionRelease
}
userId, err := userInfo.GetUserID()
if err != nil {
return nil, err
}
idList := make([]int64, len(request))
@@ -1061,48 +1062,62 @@ func (svc *Service) ReleaseSubmissions(ctx context.Context, request []api.Releas
// fetch submissions
submissions, err := svc.inner.GetSubmissionList(ctx, idList)
if err != nil {
return err
return nil, err
}
// the submissions are not ordered the same as the idList!
id_to_submission := make(map[int64]*model.Submission, len(submissions))
// check each submission to make sure it is ready to release
for _,submission := range submissions{
if submission.StatusID != model.SubmissionStatusUploaded{
return ErrReleaseInvalidStatus
return nil, ErrReleaseInvalidStatus
}
if submission.UploadedAssetID == 0{
return ErrReleaseNoUploadedAssetID
return nil, ErrReleaseNoUploadedAssetID
}
submission := submission // copy; the range variable is shared across iterations before Go 1.22
id_to_submission[submission.ID] = &submission
}
for i,submission := range submissions{
date := request[i].Date.Unix()
// create each map with go-grpc
_, err := svc.inner.CreateMap(ctx, model.Map{
ID: int64(submission.UploadedAssetID),
DisplayName: submission.DisplayName,
Creator: submission.Creator,
GameID: submission.GameID,
Date: time.Unix(date, 0),
Submitter: submission.Submitter,
// Thumbnail: 0,
// AssetVersion: 0,
// LoadCount: 0,
// Modes: 0,
})
if err != nil {
return err
}
// update each status to Released
update := service.NewSubmissionUpdate()
update.SetStatusID(model.SubmissionStatusReleased)
err = svc.inner.UpdateSubmissionIfStatus(ctx, submission.ID, []model.SubmissionStatus{model.SubmissionStatusUploaded}, update)
if err != nil {
return err
}
// construct batch release nats message
release_submissions := make([]model.ReleaseSubmissionRequest, len(request))
for i, release_info := range request {
// from request
release_submissions[i].ReleaseDate = release_info.Date.Unix()
release_submissions[i].SubmissionID = release_info.SubmissionID
submission := id_to_submission[release_info.SubmissionID]
// from submission
release_submissions[i].ModelID = submission.ValidatedAssetID
release_submissions[i].ModelVersion = submission.ValidatedAssetVersion
// for map create
release_submissions[i].UploadedAssetID = submission.UploadedAssetID
release_submissions[i].DisplayName = submission.DisplayName
release_submissions[i].Creator = submission.Creator
release_submissions[i].GameID = submission.GameID
release_submissions[i].Submitter = submission.Submitter
}
return nil
// create a trackable long-running operation
operation, err := svc.inner.CreateOperation(ctx, model.Operation{
Owner: userId,
StatusID: model.OperationStatusCreated,
})
if err != nil {
return nil, err
}
// hand the batch release off to the long-running worker via NATS
err = svc.inner.NatsBatchReleaseSubmissions(
release_submissions,
operation.ID,
)
if err != nil {
return nil, err
}
return &api.OperationID{
OperationID: operation.ID,
}, nil
}
// CreateSubmissionAuditComment implements createSubmissionAuditComment operation.
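The `id_to_submission` index built above takes the address of a range variable, which on Go versions before 1.22 makes every map value alias the same variable. A standalone sketch of the safe pattern (types simplified to illustrate the point):

```go
package main

import "fmt"

type submission struct {
	ID          int64
	DisplayName string
}

// buildIndex stores one pointer per entry. Copying via the slice index
// guarantees each map value points at a distinct submission, even on
// Go versions before 1.22 where the range variable was shared across
// iterations.
func buildIndex(subs []submission) map[int64]*submission {
	index := make(map[int64]*submission, len(subs))
	for i := range subs {
		s := subs[i] // per-entry copy; never alias the range variable
		index[s.ID] = &s
	}
	return index
}

func main() {
	index := buildIndex([]submission{{ID: 1, DisplayName: "a"}, {ID: 2, DisplayName: "b"}})
	fmt.Println(index[1].DisplayName, index[2].DisplayName)
}
```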

pkg/web_api/thumbnails.go Normal file
View File

@@ -0,0 +1,135 @@
package web_api
import (
"context"
"strconv"
"git.itzana.me/strafesnet/maps-service/pkg/api"
"git.itzana.me/strafesnet/maps-service/pkg/roblox"
)
// BatchAssetThumbnails handles batch fetching of asset thumbnails
func (svc *Service) BatchAssetThumbnails(ctx context.Context, req *api.BatchAssetThumbnailsReq) (*api.BatchAssetThumbnailsOK, error) {
if len(req.AssetIds) == 0 {
return &api.BatchAssetThumbnailsOK{
Thumbnails: api.NewOptBatchAssetThumbnailsOKThumbnails(map[string]string{}),
}, nil
}
// Convert size string to enum
size := roblox.Size420x420
if req.Size.IsSet() {
sizeStr := req.Size.Value
switch api.BatchAssetThumbnailsReqSize(sizeStr) {
case api.BatchAssetThumbnailsReqSize150x150:
size = roblox.Size150x150
case api.BatchAssetThumbnailsReqSize768x432:
size = roblox.Size768x432
}
}
// Fetch thumbnails from service
thumbnails, err := svc.inner.GetAssetThumbnails(ctx, req.AssetIds, size)
if err != nil {
return nil, err
}
// Convert map[uint64]string to map[string]string for JSON
result := make(map[string]string, len(thumbnails))
for assetID, url := range thumbnails {
result[strconv.FormatUint(assetID, 10)] = url
}
return &api.BatchAssetThumbnailsOK{
Thumbnails: api.NewOptBatchAssetThumbnailsOKThumbnails(result),
}, nil
}
// GetAssetThumbnail handles single asset thumbnail fetch (with redirect)
func (svc *Service) GetAssetThumbnail(ctx context.Context, params api.GetAssetThumbnailParams) (*api.GetAssetThumbnailFound, error) {
// Convert size string to enum
size := roblox.Size420x420
if params.Size.IsSet() {
sizeStr := params.Size.Value
switch api.GetAssetThumbnailSize(sizeStr) {
case api.GetAssetThumbnailSize150x150:
size = roblox.Size150x150
case api.GetAssetThumbnailSize768x432:
size = roblox.Size768x432
}
}
// Fetch thumbnail
thumbnailURL, err := svc.inner.GetSingleAssetThumbnail(ctx, params.AssetID, size)
if err != nil {
return nil, err
}
// Return redirect response
return &api.GetAssetThumbnailFound{
Location: api.NewOptString(thumbnailURL),
}, nil
}
// BatchUserThumbnails handles batch fetching of user avatar thumbnails
func (svc *Service) BatchUserThumbnails(ctx context.Context, req *api.BatchUserThumbnailsReq) (*api.BatchUserThumbnailsOK, error) {
if len(req.UserIds) == 0 {
return &api.BatchUserThumbnailsOK{
Thumbnails: api.NewOptBatchUserThumbnailsOKThumbnails(map[string]string{}),
}, nil
}
// Convert size string to enum
size := roblox.Size150x150
if req.Size.IsSet() {
sizeStr := req.Size.Value
switch api.BatchUserThumbnailsReqSize(sizeStr) {
case api.BatchUserThumbnailsReqSize420x420:
size = roblox.Size420x420
case api.BatchUserThumbnailsReqSize768x432:
size = roblox.Size768x432
}
}
// Fetch thumbnails from service
thumbnails, err := svc.inner.GetUserAvatarThumbnails(ctx, req.UserIds, size)
if err != nil {
return nil, err
}
// Convert map[uint64]string to map[string]string for JSON
result := make(map[string]string, len(thumbnails))
for userID, url := range thumbnails {
result[strconv.FormatUint(userID, 10)] = url
}
return &api.BatchUserThumbnailsOK{
Thumbnails: api.NewOptBatchUserThumbnailsOKThumbnails(result),
}, nil
}
// GetUserThumbnail handles single user avatar thumbnail fetch (with redirect)
func (svc *Service) GetUserThumbnail(ctx context.Context, params api.GetUserThumbnailParams) (*api.GetUserThumbnailFound, error) {
// Convert size string to enum
size := roblox.Size150x150
if params.Size.IsSet() {
sizeStr := params.Size.Value
switch api.GetUserThumbnailSize(sizeStr) {
case api.GetUserThumbnailSize420x420:
size = roblox.Size420x420
case api.GetUserThumbnailSize768x432:
size = roblox.Size768x432
}
}
// Fetch thumbnail
thumbnailURL, err := svc.inner.GetSingleUserAvatarThumbnail(ctx, params.UserID, size)
if err != nil {
return nil, err
}
// Return redirect response
return &api.GetUserThumbnailFound{
Location: api.NewOptString(thumbnailURL),
}, nil
}
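The `map[uint64]string` to `map[string]string` conversion is repeated in each batch handler in this file. It could be factored into one helper; a minimal sketch (the helper name is illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"strconv"
)

// stringKeys converts a uint64-keyed map into a string-keyed one,
// since JSON object keys must be strings.
func stringKeys(in map[uint64]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		out[strconv.FormatUint(k, 10)] = v
	}
	return out
}

func main() {
	fmt.Println(stringKeys(map[uint64]string{42: "https://example.test/thumb.png"}))
}
```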

pkg/web_api/users.go Normal file
View File

@@ -0,0 +1,33 @@
package web_api
import (
"context"
"strconv"
"git.itzana.me/strafesnet/maps-service/pkg/api"
)
// BatchUsernames handles batch fetching of usernames
func (svc *Service) BatchUsernames(ctx context.Context, req *api.BatchUsernamesReq) (*api.BatchUsernamesOK, error) {
if len(req.UserIds) == 0 {
return &api.BatchUsernamesOK{
Usernames: api.NewOptBatchUsernamesOKUsernames(map[string]string{}),
}, nil
}
// Fetch usernames from service
usernames, err := svc.inner.GetUsernames(ctx, req.UserIds)
if err != nil {
return nil, err
}
// Convert map[uint64]string to map[string]string for JSON
result := make(map[string]string, len(usernames))
for userID, username := range usernames {
result[strconv.FormatUint(userID, 10)] = username
}
return &api.BatchUsernamesOK{
Usernames: api.NewOptBatchUsernamesOKUsernames(result),
}, nil
}

View File

@@ -1,6 +1,6 @@
[package]
name = "submissions-api"
version = "0.8.2"
version = "0.10.1"
edition = "2024"
publish = ["strafesnet"]
repository = "https://git.itzana.me/StrafesNET/maps-service"

View File

@@ -152,6 +152,48 @@ impl Context{
Ok(())
}
pub async fn get_mapfixes(&self,config:GetMapfixesRequest<'_>)->Result<MapfixesResponse,Error>{
let url_raw=format!("{}/mapfixes",self.0.base_url);
let mut url=reqwest::Url::parse(url_raw.as_str()).map_err(Error::Parse)?;
{
let mut query_pairs=url.query_pairs_mut();
query_pairs.append_pair("Page",config.Page.to_string().as_str());
query_pairs.append_pair("Limit",config.Limit.to_string().as_str());
if let Some(sort)=config.Sort{
query_pairs.append_pair("Sort",(sort as u8).to_string().as_str());
}
if let Some(display_name)=config.DisplayName{
query_pairs.append_pair("DisplayName",display_name);
}
if let Some(creator)=config.Creator{
query_pairs.append_pair("Creator",creator);
}
if let Some(game_id)=config.GameID{
query_pairs.append_pair("GameID",(game_id as u8).to_string().as_str());
}
if let Some(submitter)=config.Submitter{
query_pairs.append_pair("Submitter",submitter.to_string().as_str());
}
if let Some(asset_id)=config.AssetID{
query_pairs.append_pair("AssetID",asset_id.to_string().as_str());
}
if let Some(asset_version)=config.AssetVersion{
query_pairs.append_pair("AssetVersion",asset_version.to_string().as_str());
}
if let Some(uploaded_asset_id)=config.TargetAssetID{
query_pairs.append_pair("TargetAssetID",uploaded_asset_id.to_string().as_str());
}
if let Some(status_id)=config.StatusID{
query_pairs.append_pair("StatusID",(status_id as u8).to_string().as_str());
}
}
response_ok(
self.0.get(url).await.map_err(Error::Reqwest)?
).await.map_err(Error::Response)?
.json().await.map_err(Error::ReqwestJson)
}
pub async fn get_submissions(&self,config:GetSubmissionsRequest<'_>)->Result<SubmissionsResponse,Error>{
let url_raw=format!("{}/submissions",self.0.base_url);
let mut url=reqwest::Url::parse(url_raw.as_str()).map_err(Error::Parse)?;
@@ -178,6 +220,9 @@ impl Context{
if let Some(asset_id)=config.AssetID{
query_pairs.append_pair("AssetID",asset_id.to_string().as_str());
}
if let Some(asset_version)=config.AssetVersion{
query_pairs.append_pair("AssetVersion",asset_version.to_string().as_str());
}
if let Some(uploaded_asset_id)=config.UploadedAssetID{
query_pairs.append_pair("UploadedAssetID",uploaded_asset_id.to_string().as_str());
}
@@ -218,7 +263,37 @@ impl Context{
).await.map_err(Error::Response)?
.json().await.map_err(Error::ReqwestJson)
}
pub async fn release_submissions(&self,config:ReleaseRequest<'_>)->Result<(),Error>{
pub async fn get_mapfix_audit_events(&self,config:GetMapfixAuditEventsRequest)->Result<Vec<AuditEventReponse>,Error>{
let url_raw=format!("{}/mapfixes/{}/audit-events",self.0.base_url,config.MapfixID);
let mut url=reqwest::Url::parse(url_raw.as_str()).map_err(Error::Parse)?;
{
let mut query_pairs=url.query_pairs_mut();
query_pairs.append_pair("Page",config.Page.to_string().as_str());
query_pairs.append_pair("Limit",config.Limit.to_string().as_str());
}
response_ok(
self.0.get(url).await.map_err(Error::Reqwest)?
).await.map_err(Error::Response)?
.json().await.map_err(Error::ReqwestJson)
}
pub async fn get_submission_audit_events(&self,config:GetSubmissionAuditEventsRequest)->Result<Vec<AuditEventReponse>,Error>{
let url_raw=format!("{}/submissions/{}/audit-events",self.0.base_url,config.SubmissionID);
let mut url=reqwest::Url::parse(url_raw.as_str()).map_err(Error::Parse)?;
{
let mut query_pairs=url.query_pairs_mut();
query_pairs.append_pair("Page",config.Page.to_string().as_str());
query_pairs.append_pair("Limit",config.Limit.to_string().as_str());
}
response_ok(
self.0.get(url).await.map_err(Error::Reqwest)?
).await.map_err(Error::Response)?
.json().await.map_err(Error::ReqwestJson)
}
pub async fn release_submissions(&self,config:ReleaseRequest<'_>)->Result<OperationIDResponse,Error>{
let url_raw=format!("{}/release-submissions",self.0.base_url);
let url=reqwest::Url::parse(url_raw.as_str()).map_err(Error::Parse)?;
@@ -226,8 +301,7 @@ impl Context{
response_ok(
self.0.post(url,body).await.map_err(Error::Reqwest)?
).await.map_err(Error::Response)?;
Ok(())
).await.map_err(Error::Response)?
.json().await.map_err(Error::ReqwestJson)
}
}

View File

@@ -30,7 +30,6 @@ impl<Items> std::error::Error for SingleItemError<Items> where Items:std::fmt::D
pub type ScriptSingleItemError=SingleItemError<Vec<ScriptID>>;
pub type ScriptPolicySingleItemError=SingleItemError<Vec<ScriptPolicyID>>;
#[allow(dead_code)]
#[derive(Debug)]
pub struct UrlAndBody{
pub url:url::Url,
@@ -76,7 +75,7 @@ pub enum GameID{
FlyTrials=5,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct CreateMapfixRequest<'a>{
pub OperationID:OperationID,
@@ -89,13 +88,13 @@ pub struct CreateMapfixRequest<'a>{
pub TargetAssetID:u64,
pub Description:&'a str,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct MapfixIDResponse{
pub MapfixID:MapfixID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct CreateSubmissionRequest<'a>{
pub OperationID:OperationID,
@@ -108,7 +107,7 @@ pub struct CreateSubmissionRequest<'a>{
pub Status:u32,
pub Roles:u32,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct SubmissionIDResponse{
pub SubmissionID:SubmissionID,
@@ -127,11 +126,11 @@ pub enum ResourceType{
Submission=2,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
pub struct GetScriptRequest{
pub ScriptID:ScriptID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct GetScriptsRequest<'a>{
pub Page:u32,
@@ -151,7 +150,7 @@ pub struct GetScriptsRequest<'a>{
pub struct HashRequest<'a>{
pub hash:&'a str,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct ScriptResponse{
pub ID:ScriptID,
@@ -161,7 +160,7 @@ pub struct ScriptResponse{
pub ResourceType:ResourceType,
pub ResourceID:ResourceID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct CreateScriptRequest<'a>{
pub Name:&'a str,
@@ -170,7 +169,7 @@ pub struct CreateScriptRequest<'a>{
#[serde(skip_serializing_if="Option::is_none")]
pub ResourceID:Option<ResourceID>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct ScriptIDResponse{
pub ScriptID:ScriptID,
@@ -186,11 +185,11 @@ pub enum Policy{
Replace=4,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
pub struct GetScriptPolicyRequest{
pub ScriptPolicyID:ScriptPolicyID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct GetScriptPoliciesRequest<'a>{
pub Page:u32,
@@ -202,7 +201,7 @@ pub struct GetScriptPoliciesRequest<'a>{
#[serde(skip_serializing_if="Option::is_none")]
pub Policy:Option<Policy>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct ScriptPolicyResponse{
pub ID:ScriptPolicyID,
@@ -210,20 +209,20 @@ pub struct ScriptPolicyResponse{
pub ToScriptID:ScriptID,
pub Policy:Policy
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct CreateScriptPolicyRequest{
pub FromScriptID:ScriptID,
pub ToScriptID:ScriptID,
pub Policy:Policy,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct ScriptPolicyIDResponse{
pub ScriptPolicyID:ScriptPolicyID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct UpdateScriptPolicyRequest{
pub ID:ScriptPolicyID,
@@ -235,7 +234,7 @@ pub struct UpdateScriptPolicyRequest{
pub Policy:Option<Policy>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct UpdateSubmissionModelRequest{
pub SubmissionID:SubmissionID,
@@ -252,6 +251,73 @@ pub enum Sort{
DateDescending=4,
}
#[derive(Clone,Debug,serde_repr::Serialize_repr,serde_repr::Deserialize_repr)]
#[repr(u8)]
pub enum MapfixStatus{
// Phase: Creation
UnderConstruction=0,
ChangesRequested=1,
// Phase: Review
Submitting=2,
Submitted=3,
// Phase: Testing
AcceptedUnvalidated=4, // pending script review, can re-trigger validation
Validating=5,
Validated=6,
Uploading=7,
Uploaded=8, // uploaded to the group, but pending release
Releasing=11,
// Phase: Final
Rejected=9,
Released=10,
}
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct GetMapfixesRequest<'a>{
pub Page:u32,
pub Limit:u32,
pub Sort:Option<Sort>,
pub DisplayName:Option<&'a str>,
pub Creator:Option<&'a str>,
pub GameID:Option<GameID>,
pub Submitter:Option<u64>,
pub AssetID:Option<u64>,
pub AssetVersion:Option<u64>,
pub TargetAssetID:Option<u64>,
pub StatusID:Option<MapfixStatus>,
}
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize,serde::Deserialize)]
pub struct MapfixResponse{
pub ID:MapfixID,
pub DisplayName:String,
pub Creator:String,
pub GameID:u32,
pub CreatedAt:i64,
pub UpdatedAt:i64,
pub Submitter:u64,
pub AssetID:u64,
pub AssetVersion:u64,
pub ValidatedAssetID:Option<u64>,
pub ValidatedAssetVersion:Option<u64>,
pub Completed:bool,
pub TargetAssetID:u64,
pub StatusID:MapfixStatus,
pub Description:String,
}
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct MapfixesResponse{
pub Total:u64,
pub Mapfixes:Vec<MapfixResponse>,
}
#[derive(Clone,Debug,serde_repr::Deserialize_repr)]
#[repr(u8)]
pub enum SubmissionStatus{
@@ -275,7 +341,7 @@ pub enum SubmissionStatus{
Released=10,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct GetSubmissionsRequest<'a>{
pub Page:u32,
@@ -286,11 +352,12 @@ pub struct GetSubmissionsRequest<'a>{
pub GameID:Option<GameID>,
pub Submitter:Option<u64>,
pub AssetID:Option<u64>,
pub AssetVersion:Option<u64>,
pub UploadedAssetID:Option<u64>,
pub StatusID:Option<SubmissionStatus>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct SubmissionResponse{
pub ID:SubmissionID,
@@ -302,18 +369,20 @@ pub struct SubmissionResponse{
pub Submitter:u64,
pub AssetID:u64,
pub AssetVersion:u64,
pub ValidatedAssetID:Option<u64>,
pub ValidatedAssetVersion:Option<u64>,
pub UploadedAssetID:u64,
pub StatusID:SubmissionStatus,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct SubmissionsResponse{
pub Total:u64,
pub Submissions:Vec<SubmissionResponse>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct GetMapsRequest<'a>{
pub Page:u32,
@@ -324,7 +393,7 @@ pub struct GetMapsRequest<'a>{
pub GameID:Option<GameID>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct MapResponse{
pub ID:i64,
@@ -334,7 +403,119 @@ pub struct MapResponse{
pub Date:i64,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct GetMapfixAuditEventsRequest{
pub Page:u32,
pub Limit:u32,
pub MapfixID:i64,
}
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct GetSubmissionAuditEventsRequest{
pub Page:u32,
pub Limit:u32,
pub SubmissionID:i64,
}
#[derive(Clone,Debug,serde_repr::Deserialize_repr)]
#[repr(u32)]
pub enum AuditEventType{
Action=0,
Comment=1,
ChangeModel=2,
ChangeValidatedModel=3,
ChangeDisplayName=4,
ChangeCreator=5,
Error=6,
CheckList=7,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventAction{
pub target_status:MapfixStatus,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventComment{
pub comment:String,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventChangeModel{
pub old_model_id:u64,
pub old_model_version:u64,
pub new_model_id:u64,
pub new_model_version:u64,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventChangeValidatedModel{
pub validated_model_id:u64,
pub validated_model_version:u64,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventChangeName{
pub old_name:String,
pub new_name:String,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventError{
pub error:String,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventCheck{
pub name:String,
pub summary:String,
pub passed:bool,
}
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventCheckList{
pub check_list:Vec<AuditEventCheck>,
}
#[derive(Clone,Debug)]
pub enum AuditEventData{
Action(AuditEventAction),
Comment(AuditEventComment),
ChangeModel(AuditEventChangeModel),
ChangeValidatedModel(AuditEventChangeValidatedModel),
ChangeDisplayName(AuditEventChangeName),
ChangeCreator(AuditEventChangeName),
Error(AuditEventError),
CheckList(AuditEventCheckList),
}
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct AuditEventID(pub(crate)i64);
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct AuditEventReponse{
pub ID:AuditEventID,
pub Date:i64,
pub User:u64,
pub Username:String,
pub ResourceType:ResourceType,
pub ResourceID:ResourceID,
pub EventType:AuditEventType,
EventData:serde_json::Value,
}
impl AuditEventReponse{
pub fn data(self)->serde_json::Result<AuditEventData>{
Ok(match self.EventType{
AuditEventType::Action=>AuditEventData::Action(serde_json::from_value(self.EventData)?),
AuditEventType::Comment=>AuditEventData::Comment(serde_json::from_value(self.EventData)?),
AuditEventType::ChangeModel=>AuditEventData::ChangeModel(serde_json::from_value(self.EventData)?),
AuditEventType::ChangeValidatedModel=>AuditEventData::ChangeValidatedModel(serde_json::from_value(self.EventData)?),
AuditEventType::ChangeDisplayName=>AuditEventData::ChangeDisplayName(serde_json::from_value(self.EventData)?),
AuditEventType::ChangeCreator=>AuditEventData::ChangeCreator(serde_json::from_value(self.EventData)?),
AuditEventType::Error=>AuditEventData::Error(serde_json::from_value(self.EventData)?),
AuditEventType::CheckList=>AuditEventData::CheckList(serde_json::from_value(self.EventData)?),
})
}
}
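The `data()` method above decodes a type-tagged payload: a numeric `EventType` selects how the raw `EventData` JSON is interpreted. The same wire format can be consumed from the Go side with `json.RawMessage`; a hedged sketch with simplified stand-in types (field names mirror the response shape, but these are not the server's actual structs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified mirror of the audit-event wire format: a numeric tag
// plus a raw payload that is decoded per tag.
type auditEvent struct {
	EventType int             `json:"EventType"`
	EventData json.RawMessage `json:"EventData"`
}

type auditComment struct {
	Comment string `json:"comment"`
}

// decodeComment decodes EventData only when the tag says Comment (1),
// mirroring the per-variant dispatch in the Rust client's data().
func decodeComment(raw []byte) (string, error) {
	var ev auditEvent
	if err := json.Unmarshal(raw, &ev); err != nil {
		return "", err
	}
	if ev.EventType != 1 {
		return "", fmt.Errorf("not a comment event: type %d", ev.EventType)
	}
	var c auditComment
	if err := json.Unmarshal(ev.EventData, &c); err != nil {
		return "", err
	}
	return c.Comment, nil
}

func main() {
	msg, _ := decodeComment([]byte(`{"EventType":1,"EventData":{"comment":"lgtm"}}`))
	fmt.Println(msg)
}
```

Deferring the inner decode with `json.RawMessage` keeps the outer envelope parse cheap and lets each variant own its payload schema.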
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct Check{
pub Name:&'static str,
@@ -342,7 +523,7 @@ pub struct Check{
pub Passed:bool,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionSubmissionSubmittedRequest{
pub SubmissionID:SubmissionID,
@@ -352,33 +533,33 @@ pub struct ActionSubmissionSubmittedRequest{
pub GameID:GameID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionSubmissionRequestChangesRequest{
pub SubmissionID:SubmissionID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionSubmissionUploadedRequest{
pub SubmissionID:SubmissionID,
pub UploadedAssetID:u64,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionSubmissionAcceptedRequest{
pub SubmissionID:SubmissionID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct CreateSubmissionAuditErrorRequest{
pub SubmissionID:SubmissionID,
pub ErrorMessage:String,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct CreateSubmissionAuditCheckListRequest<'a>{
pub SubmissionID:SubmissionID,
@@ -387,8 +568,16 @@ pub struct CreateSubmissionAuditCheckListRequest<'a>{
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct SubmissionID(pub(crate)i64);
impl SubmissionID{
pub const fn new(value:i64)->Self{
Self(value)
}
pub const fn value(&self)->i64{
self.0
}
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct UpdateMapfixModelRequest{
pub MapfixID:MapfixID,
@@ -396,7 +585,7 @@ pub struct UpdateMapfixModelRequest{
pub ModelVersion:u64,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionMapfixSubmittedRequest{
pub MapfixID:MapfixID,
@@ -406,32 +595,32 @@ pub struct ActionMapfixSubmittedRequest{
pub GameID:GameID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionMapfixRequestChangesRequest{
pub MapfixID:MapfixID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionMapfixUploadedRequest{
pub MapfixID:MapfixID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionMapfixAcceptedRequest{
pub MapfixID:MapfixID,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct CreateMapfixAuditErrorRequest{
pub MapfixID:MapfixID,
pub ErrorMessage:String,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct CreateMapfixAuditCheckListRequest<'a>{
pub MapfixID:MapfixID,
@@ -440,8 +629,16 @@ pub struct CreateMapfixAuditCheckListRequest<'a>{
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct MapfixID(pub(crate)i64);
impl MapfixID{
pub const fn new(value:i64)->Self{
Self(value)
}
pub const fn value(&self)->i64{
self.0
}
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug)]
pub struct ActionOperationFailedRequest{
pub OperationID:OperationID,
@@ -468,7 +665,7 @@ impl Resource{
}
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Serialize)]
pub struct ReleaseInfo{
pub SubmissionID:SubmissionID,
@@ -478,3 +675,8 @@ pub struct ReleaseInfo{
pub struct ReleaseRequest<'a>{
pub schedule:&'a [ReleaseInfo],
}
#[expect(nonstandard_style)]
#[derive(Clone,Debug,serde::Deserialize)]
pub struct OperationIDResponse{
pub OperationID:OperationID,
}

View File

@@ -4,18 +4,18 @@ version = "0.1.1"
edition = "2024"
[dependencies]
async-nats = "0.42.0"
async-nats = "0.45.0"
futures = "0.3.31"
rbx_asset = { version = "0.4.9", features = ["gzip", "rustls-tls"], default-features = false, registry = "strafesnet" }
rbx_binary = "1.0.0"
rbx_dom_weak = "3.0.0"
rbx_reflection_database = "1.0.3"
rbx_xml = "1.0.0"
rbx_asset = { version = "0.5.0", features = ["gzip", "rustls-tls"], default-features = false, registry = "strafesnet" }
rbx_binary = "2.0.0"
rbx_dom_weak = "4.0.0"
rbx_reflection_database = "2.0.1"
rbx_xml = "2.0.0"
regex = { version = "1.11.3", default-features = false }
serde = { version = "1.0.215", features = ["derive"] }
serde_json = "1.0.133"
siphasher = "1.0.1"
tokio = { version = "1.41.1", features = ["macros", "rt-multi-thread", "signal"] }
heck = "0.5.0"
lazy-regex = "3.4.1"
rust-grpc = { version = "1.2.1", registry = "strafesnet" }
tonic = "0.13.1"
rust-grpc = { version = "1.6.1", registry = "strafesnet" }
tonic = "0.14.1"

View File

@@ -6,7 +6,7 @@ use heck::{ToSnakeCase,ToTitleCase};
use rbx_dom_weak::Instance;
use rust_grpc::validator::Check;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ModelInfoDownload(rbx_asset::cloud::GetError),
@@ -24,7 +24,16 @@ impl std::fmt::Display for Error{
}
impl std::error::Error for Error{}
#[allow(nonstandard_style)]
macro_rules! lazy_regex{
($r:literal)=>{{
use regex::Regex;
use std::sync::LazyLock;
static RE:LazyLock<Regex>=LazyLock::new(||Regex::new($r).unwrap());
&RE
}};
}
#[expect(nonstandard_style)]
pub struct CheckRequest{
ModelID:u64,
SkipChecks:bool,
@@ -47,12 +56,20 @@ impl From<crate::nats_types::CheckSubmissionRequest> for CheckRequest{
}
}
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq)]
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,Ord,PartialOrd)]
struct ModeID(u64);
impl ModeID{
const MAIN:Self=Self(0);
const BONUS:Self=Self(1);
}
impl std::fmt::Display for ModeID{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
match self{
&ModeID::MAIN=>write!(f,"Main"),
&ModeID(mode_id)=>write!(f,"Bonus{mode_id}"),
}
}
}
enum Zone{
Start,
Finish,
@@ -62,7 +79,7 @@ struct ModeElement{
zone:Zone,
mode_id:ModeID,
}
#[allow(dead_code)]
#[expect(dead_code)]
pub enum IDParseError{
NoCaptures,
ParseInt(core::num::ParseIntError),
@@ -79,7 +96,7 @@ impl std::str::FromStr for ModeElement{
"BonusFinish"=>Ok(Self{zone:Zone::Finish,mode_id:ModeID::BONUS}),
"BonusAnticheat"=>Ok(Self{zone:Zone::Anticheat,mode_id:ModeID::BONUS}),
other=>{
let everything_pattern=lazy_regex::lazy_regex!(r"^Bonus(\d+)Start$|^BonusStart(\d+)$|^Bonus(\d+)Finish$|^BonusFinish(\d+)$|^Bonus(\d+)Anticheat$|^BonusAnticheat(\d+)$");
let everything_pattern=lazy_regex!(r"^Bonus(\d+)Start$|^BonusStart(\d+)$|^Bonus(\d+)Finish$|^BonusFinish(\d+)$|^Bonus(\d+)Anticheat$|^BonusAnticheat(\d+)$");
if let Some(captures)=everything_pattern.captures(other){
if let Some(mode_id)=captures.get(1).or(captures.get(2)){
return Ok(Self{
@@ -139,16 +156,16 @@ impl std::str::FromStr for StageElement{
type Err=IDParseError;
fn from_str(s:&str)->Result<Self,Self::Err>{
// Trigger ForceTrigger Teleport ForceTeleport SpawnAt ForceSpawnAt
let bonus_start_pattern=lazy_regex::lazy_regex!(r"^(?:Force)?(Teleport|SpawnAt|Trigger)(\d+)$");
if let Some(captures)=bonus_start_pattern.captures(s){
let teleport_pattern=lazy_regex!(r"^(?:Force)?(Teleport|SpawnAt|Trigger)(\d+)$");
if let Some(captures)=teleport_pattern.captures(s){
return Ok(StageElement{
behaviour:StageElementBehaviour::Teleport,
stage_id:StageID(captures[1].parse().map_err(IDParseError::ParseInt)?),
});
}
// Spawn
let bonus_finish_pattern=lazy_regex::lazy_regex!(r"^Spawn(\d+)$");
if let Some(captures)=bonus_finish_pattern.captures(s){
let spawn_pattern=lazy_regex!(r"^Spawn(\d+)$");
if let Some(captures)=spawn_pattern.captures(s){
return Ok(StageElement{
behaviour:StageElementBehaviour::Spawn,
stage_id:StageID(captures[1].parse().map_err(IDParseError::ParseInt)?),
@@ -180,15 +197,15 @@ struct WormholeElement{
impl std::str::FromStr for WormholeElement{
type Err=IDParseError;
fn from_str(s:&str)->Result<Self,Self::Err>{
let bonus_start_pattern=lazy_regex::lazy_regex!(r"^WormholeIn(\d+)$");
if let Some(captures)=bonus_start_pattern.captures(s){
let wormhole_in_pattern=lazy_regex!(r"^WormholeIn(\d+)$");
if let Some(captures)=wormhole_in_pattern.captures(s){
return Ok(Self{
behaviour:WormholeBehaviour::In,
wormhole_id:WormholeID(captures[1].parse().map_err(IDParseError::ParseInt)?),
});
}
let bonus_finish_pattern=lazy_regex::lazy_regex!(r"^WormholeOut(\d+)$");
if let Some(captures)=bonus_finish_pattern.captures(s){
let wormhole_out_pattern=lazy_regex!(r"^WormholeOut(\d+)$");
if let Some(captures)=wormhole_out_pattern.captures(s){
return Ok(Self{
behaviour:WormholeBehaviour::Out,
wormhole_id:WormholeID(captures[1].parse().map_err(IDParseError::ParseInt)?),
@@ -206,6 +223,15 @@ impl std::fmt::Display for WormholeElement{
}
}
fn count_sequential(modes:&HashMap<ModeID,Vec<&Instance>>)->usize{
for mode_id in 0..modes.len(){
if !modes.contains_key(&ModeID(mode_id as u64)){
return mode_id;
}
}
return modes.len();
}
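`count_sequential` returns the length of the contiguous run of mode IDs starting at 0: it probes keys 0, 1, 2, … and stops at the first gap. A standalone sketch with bare `u64` keys (the `ModeID` newtype and `&Instance` values are dropped to keep it self-contained):

```rust
use std::collections::HashMap;

// Same shape as the diff's count_sequential, minus the newtypes.
fn count_sequential<V>(modes: &HashMap<u64, V>) -> usize {
    for mode_id in 0..modes.len() {
        if !modes.contains_key(&(mode_id as u64)) {
            // First gap: everything below mode_id is contiguous.
            return mode_id;
        }
    }
    modes.len()
}
```

So `{0, 1, 3}` yields 2 (mode 2 is missing), while `{0, 1, 2}` yields 3. Probing at most `modes.len()` keys is enough, because any gap-free set of `n` keys must be exactly `0..n`.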
/// Count various map elements
#[derive(Default)]
struct Counts<'a>{
@@ -225,6 +251,24 @@ pub struct ModelInfo<'a>{
counts:Counts<'a>,
unanchored_parts:Vec<&'a Instance>,
}
impl ModelInfo<'_>{
pub fn count_modes(&self)->Option<usize>{
let start_zones_count=self.counts.mode_start_counts.len();
let finish_zones_count=self.counts.mode_finish_counts.len();
let sequential_start_zones=count_sequential(&self.counts.mode_start_counts);
let sequential_finish_zones=count_sequential(&self.counts.mode_finish_counts);
// all counts must match
if start_zones_count==finish_zones_count
&& sequential_start_zones==sequential_finish_zones
&& start_zones_count==sequential_start_zones
&& finish_zones_count==sequential_finish_zones
{
Some(start_zones_count)
}else{
None
}
}
}
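`count_modes` only returns `Some` when the start and finish zones describe the same gap-free set `0..n`. A condensed sketch of that condition over plain ID sets (a simplification of the diff's four-way count comparison, not the literal method):

```rust
use std::collections::HashSet;

// The mode count is defined only when both start and finish
// mode IDs form the same gap-free set 0..n.
fn count_modes(starts: &HashSet<u64>, finishes: &HashSet<u64>) -> Option<usize> {
    let sequential = |s: &HashSet<u64>| (0..s.len() as u64).all(|id| s.contains(&id));
    if starts.len() == finishes.len() && sequential(starts) && sequential(finishes) {
        Some(starts.len())
    } else {
        None
    }
}
```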
pub fn get_model_info<'a>(dom:&'a rbx_dom_weak::WeakDom,model_instance:&'a rbx_dom_weak::Instance)->ModelInfo<'a>{
// extract model info
@@ -237,7 +281,7 @@ pub fn get_model_info<'a>(dom:&'a rbx_dom_weak::WeakDom,model_instance:&'a rbx_d
let mut unanchored_parts=Vec::new();
let anchored_ustr=rbx_dom_weak::ustr("Anchored");
let db=rbx_reflection_database::get();
let db=rbx_reflection_database::get().unwrap();
let base_part=&db.classes["BasePart"];
let base_parts=dom.descendants_of(model_instance.referent()).filter(|&instance|
db.classes.get(instance.class.as_str()).is_some_and(|class|
@@ -398,7 +442,7 @@ pub struct MapInfoOwned{
pub creator:String,
pub game_id:GameID,
}
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum IntoMapInfoOwnedError{
DisplayName(StringValueError),
@@ -446,6 +490,8 @@ struct MapCheck<'a>{
mode_finish_counts:SetDifferenceCheck<SetDifferenceCheckContextAtLeastOne<ModeID,Vec<&'a Instance>>>,
// Check for dangling MapAnticheat zones (no associated MapStart)
mode_anticheat_counts:SetDifferenceCheck<SetDifferenceCheckContextAllowNone<ModeID,Vec<&'a Instance>>>,
// Check that modes are sequential
modes_sequential:Result<(),Vec<ModeID>>,
// Spawn1 must exist
spawn1:Result<Exists,Absent>,
// Check for dangling Teleport# (no associated Spawn#)
@@ -514,6 +560,25 @@ impl<'a> ModelInfo<'a>{
let mode_anticheat_counts=SetDifferenceCheckContextAllowNone::new(self.counts.mode_anticheat_counts)
.check(&self.counts.mode_start_counts);
// There must not be non-sequential modes. If Bonus100 exists, Bonuses 1-99 had better also exist.
let modes_sequential={
let sequential=count_sequential(&self.counts.mode_start_counts);
if sequential==self.counts.mode_start_counts.len(){
Ok(())
}else{
let mut non_sequential=Vec::with_capacity(self.counts.mode_start_counts.len()-sequential);
for (&mode_id,_) in &self.counts.mode_start_counts{
let ModeID(mode_id_u64)=mode_id;
if sequential as u64<=mode_id_u64{
non_sequential.push(mode_id);
}
}
// sort so it's prettier when it prints out
non_sequential.sort();
Err(non_sequential)
}
};
// There must be exactly one start zone for every mode in the map.
let mode_start_counts=DuplicateCheckContext(self.counts.mode_start_counts).check(|c|1<c.len());
@@ -550,6 +615,7 @@ impl<'a> ModelInfo<'a>{
mode_start_counts,
mode_finish_counts,
mode_anticheat_counts,
modes_sequential,
spawn1,
teleport_counts,
spawn_counts,
@@ -573,6 +639,7 @@ impl MapCheck<'_>{
mode_start_counts:DuplicateCheck(Ok(())),
mode_finish_counts:SetDifferenceCheck(Ok(())),
mode_anticheat_counts:SetDifferenceCheck(Ok(())),
modes_sequential:Ok(()),
spawn1:Ok(Exists),
teleport_counts:SetDifferenceCheck(Ok(())),
spawn_counts:DuplicateCheck(Ok(())),
@@ -746,6 +813,15 @@ impl MapCheck<'_>{
}
}
};
let sequential_modes=match &self.modes_sequential{
Ok(())=>passed!("SequentialModes"),
Err(context)=>{
let non_sequential=context.len();
let plural_non_sequential=if non_sequential==1{"mode"}else{"modes"};
let comma_separated=Separated::new(", ",||context);
summary_format!("SequentialModes","{non_sequential} {plural_non_sequential} should use a lower ModeID (no gaps): {comma_separated}")
}
};
let spawn1=match &self.spawn1{
Ok(Exists)=>passed!("Spawn1"),
Err(Absent)=>summary_format!("Spawn1","Model has no Spawn1"),
@@ -824,6 +900,7 @@ impl MapCheck<'_>{
extra_finish,
missing_finish,
dangling_anticheat,
sequential_modes,
spawn1,
dangling_teleport,
duplicate_spawns,


@@ -1,7 +1,7 @@
use crate::check::CheckListAndVersion;
use crate::nats_types::CheckMapfixRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
Check(crate::check::Error),


@@ -1,7 +1,7 @@
use crate::check::CheckListAndVersion;
use crate::nats_types::CheckSubmissionRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
Check(crate::check::Error),


@@ -1,7 +1,7 @@
use crate::download::download_asset_version;
use crate::rbx_util::{get_root_instance,get_mapinfo,read_dom,MapInfo,ReadDomError,GetRootInstanceError,GameID};
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
CreatorTypeMustBeUser,
@@ -17,11 +17,11 @@ impl std::fmt::Display for Error{
}
impl std::error::Error for Error{}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
pub struct CreateRequest{
pub ModelID:u64,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
pub struct CreateResult{
pub AssetOwner:u64,
pub DisplayName:Option<String>,


@@ -1,7 +1,7 @@
use crate::nats_types::CreateMapfixRequest;
use crate::create::CreateRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
Create(crate::create::Error),


@@ -2,7 +2,7 @@ use crate::nats_types::CreateSubmissionRequest;
use crate::create::CreateRequest;
use crate::rbx_util::GameID;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
Create(crate::create::Error),


@@ -1,4 +1,4 @@
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ModelLocationDownload(rbx_asset::cloud::GetError),


@@ -18,6 +18,9 @@ impl Service{
endpoint!(set_status_submitted,SubmittedRequest,NullResponse);
endpoint!(set_status_request_changes,MapfixId,NullResponse);
endpoint!(set_status_validated,MapfixId,NullResponse);
endpoint!(set_status_failed,MapfixId,NullResponse);
endpoint!(set_status_not_validated,MapfixId,NullResponse);
endpoint!(set_status_uploaded,MapfixId,NullResponse);
endpoint!(set_status_not_uploaded,MapfixId,NullResponse);
endpoint!(set_status_released,MapfixReleaseRequest,NullResponse);
endpoint!(set_status_not_released,MapfixId,NullResponse);
}


@@ -11,5 +11,6 @@ impl Service{
)->Self{
Self{client}
}
endpoint!(success,OperationSuccessRequest,NullResponse);
endpoint!(fail,OperationFailRequest,NullResponse);
}


@@ -18,6 +18,8 @@ impl Service{
endpoint!(set_status_submitted,SubmittedRequest,NullResponse);
endpoint!(set_status_request_changes,SubmissionId,NullResponse);
endpoint!(set_status_validated,SubmissionId,NullResponse);
endpoint!(set_status_failed,SubmissionId,NullResponse);
endpoint!(set_status_not_validated,SubmissionId,NullResponse);
endpoint!(set_status_uploaded,StatusUploadedRequest,NullResponse);
endpoint!(set_status_not_uploaded,SubmissionId,NullResponse);
endpoint!(set_status_released,SubmissionReleaseRequest,NullResponse);
}


@@ -13,13 +13,15 @@ mod check_submission;
mod create;
mod create_mapfix;
mod create_submission;
mod release;
mod release_mapfix;
mod release_submissions_batch;
mod upload_mapfix;
mod upload_submission;
mod validator;
mod validate_mapfix;
mod validate_submission;
#[allow(dead_code)]
#[derive(Debug)]
pub enum StartupError{
API(tonic::transport::Error),
@@ -47,24 +49,44 @@ async fn main()->Result<(),StartupError>{
},
Err(e)=>panic!("{e}: ROBLOX_GROUP_ID env required"),
};
let load_asset_version_place_id=std::env::var("LOAD_ASSET_VERSION_PLACE_ID").expect("LOAD_ASSET_VERSION_PLACE_ID env required").parse().expect("LOAD_ASSET_VERSION_PLACE_ID int parse failed");
let load_asset_version_universe_id=std::env::var("LOAD_ASSET_VERSION_UNIVERSE_ID").expect("LOAD_ASSET_VERSION_UNIVERSE_ID env required").parse().expect("LOAD_ASSET_VERSION_UNIVERSE_ID int parse failed");
// create / upload models through STRAFESNET_CI2 account
let cookie=std::env::var("RBXCOOKIE").expect("RBXCOOKIE env required");
let cookie_context=rbx_asset::cookie::Context::new(rbx_asset::cookie::Cookie::new(cookie));
// download models through cloud api
// download models through cloud api (STRAFESNET_CI2 account)
let api_key=std::env::var("RBX_API_KEY").expect("RBX_API_KEY env required");
let cloud_context=rbx_asset::cloud::Context::new(rbx_asset::cloud::ApiKey::new(api_key));
// luau execution cloud api (StrafesNET group)
let api_key=std::env::var("RBX_API_KEY_LUAU_EXECUTION").expect("RBX_API_KEY_LUAU_EXECUTION env required");
let cloud_context_luau_execution=rbx_asset::cloud::Context::new(rbx_asset::cloud::ApiKey::new(api_key));
// maps-service api
let api_host_internal=std::env::var("API_HOST_INTERNAL").expect("API_HOST_INTERNAL env required");
let endpoint=tonic::transport::Endpoint::new(api_host_internal).map_err(StartupError::API)?;
let channel=endpoint.connect_lazy();
let mapfixes=crate::grpc::mapfixes::ValidatorMapfixesServiceClient::new(channel.clone());
let operations=crate::grpc::operations::ValidatorOperationsServiceClient::new(channel.clone());
let scripts=crate::grpc::scripts::ValidatorScriptsServiceClient::new(channel.clone());
let script_policy=crate::grpc::script_policy::ValidatorScriptPolicyServiceClient::new(channel.clone());
let submissions=crate::grpc::submissions::ValidatorSubmissionsServiceClient::new(channel);
let message_handler=message_handler::MessageHandler::new(cloud_context,cookie_context,group_id,mapfixes,operations,scripts,script_policy,submissions);
let mapfixes=crate::grpc::mapfixes::Service::new(crate::grpc::mapfixes::ValidatorMapfixesServiceClient::new(channel.clone()));
let operations=crate::grpc::operations::Service::new(crate::grpc::operations::ValidatorOperationsServiceClient::new(channel.clone()));
let scripts=crate::grpc::scripts::Service::new(crate::grpc::scripts::ValidatorScriptsServiceClient::new(channel.clone()));
let script_policy=crate::grpc::script_policy::Service::new(crate::grpc::script_policy::ValidatorScriptPolicyServiceClient::new(channel.clone()));
let submissions=crate::grpc::submissions::Service::new(crate::grpc::submissions::ValidatorSubmissionsServiceClient::new(channel.clone()));
let load_asset_version_runtime=rbx_asset::cloud::LuauSessionLatestRequest{
place_id:load_asset_version_place_id,
universe_id:load_asset_version_universe_id,
};
let message_handler=message_handler::MessageHandler{
cloud_context,
cookie_context,
cloud_context_luau_execution,
group_id,
load_asset_version_runtime,
mapfixes,
operations,
scripts,
script_policy,
submissions,
};
// nats
let nats_host=std::env::var("NATS_HOST").expect("NATS_HOST env required");


@@ -1,4 +1,4 @@
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum HandleMessageError{
Messages(async_nats::jetstream::consumer::pull::MessagesError),
@@ -9,6 +9,8 @@ pub enum HandleMessageError{
CreateSubmission(tonic::Status),
CheckMapfix(crate::check_mapfix::Error),
CheckSubmission(crate::check_submission::Error),
ReleaseMapfix(crate::release_mapfix::Error),
ReleaseSubmissionsBatch(crate::release_submissions_batch::Error),
UploadMapfix(crate::upload_mapfix::Error),
UploadSubmission(crate::upload_submission::Error),
ValidateMapfix(crate::validate_mapfix::Error),
@@ -30,7 +32,9 @@ fn from_slice<'a,T:serde::de::Deserialize<'a>>(slice:&'a [u8])->Result<T,HandleM
pub struct MessageHandler{
pub(crate) cloud_context:rbx_asset::cloud::Context,
pub(crate) cookie_context:rbx_asset::cookie::Context,
pub(crate) cloud_context_luau_execution:rbx_asset::cloud::Context,
pub(crate) group_id:Option<u64>,
pub(crate) load_asset_version_runtime:rbx_asset::cloud::LuauSessionLatestRequest,
pub(crate) mapfixes:crate::grpc::mapfixes::Service,
pub(crate) operations:crate::grpc::operations::Service,
pub(crate) scripts:crate::grpc::scripts::Service,
@@ -39,27 +43,6 @@ pub struct MessageHandler{
}
impl MessageHandler{
pub fn new(
cloud_context:rbx_asset::cloud::Context,
cookie_context:rbx_asset::cookie::Context,
group_id:Option<u64>,
mapfixes:crate::grpc::mapfixes::ValidatorMapfixesServiceClient,
operations:crate::grpc::operations::ValidatorOperationsServiceClient,
scripts:crate::grpc::scripts::ValidatorScriptsServiceClient,
script_policy:crate::grpc::script_policy::ValidatorScriptPolicyServiceClient,
submissions:crate::grpc::submissions::ValidatorSubmissionsServiceClient,
)->Self{
Self{
cloud_context,
cookie_context,
group_id,
mapfixes:crate::grpc::mapfixes::Service::new(mapfixes),
operations:crate::grpc::operations::Service::new(operations),
scripts:crate::grpc::scripts::Service::new(scripts),
script_policy:crate::grpc::script_policy::Service::new(script_policy),
submissions:crate::grpc::submissions::Service::new(submissions),
}
}
pub async fn handle_message_result(&self,message_result:MessageResult)->Result<(),HandleMessageError>{
let message=message_result.map_err(HandleMessageError::Messages)?;
message.double_ack().await.map_err(HandleMessageError::DoubleAck)?;
@@ -68,6 +51,8 @@ impl MessageHandler{
"maptest.submissions.create"=>self.create_submission(from_slice(&message.payload)?).await.map_err(HandleMessageError::CreateSubmission),
"maptest.mapfixes.check"=>self.check_mapfix(from_slice(&message.payload)?).await.map_err(HandleMessageError::CheckMapfix),
"maptest.submissions.check"=>self.check_submission(from_slice(&message.payload)?).await.map_err(HandleMessageError::CheckSubmission),
"maptest.mapfixes.release"=>self.release_mapfix(from_slice(&message.payload)?).await.map_err(HandleMessageError::ReleaseMapfix),
"maptest.submissions.batchrelease"=>self.release_submissions_batch(from_slice(&message.payload)?).await.map_err(HandleMessageError::ReleaseSubmissionsBatch),
"maptest.mapfixes.upload"=>self.upload_mapfix(from_slice(&message.payload)?).await.map_err(HandleMessageError::UploadMapfix),
"maptest.submissions.upload"=>self.upload_submission(from_slice(&message.payload)?).await.map_err(HandleMessageError::UploadSubmission),
"maptest.mapfixes.validate"=>self.validate_mapfix(from_slice(&message.payload)?).await.map_err(HandleMessageError::ValidateMapfix),


@@ -4,7 +4,7 @@
// Requests are sent from maps-service to validator
// Validation invokes the REST api to update the submissions
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct CreateSubmissionRequest{
// operation_id is passed back in the response message
@@ -18,7 +18,7 @@ pub struct CreateSubmissionRequest{
pub Roles:u32,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct CreateMapfixRequest{
pub OperationID:u32,
@@ -27,7 +27,7 @@ pub struct CreateMapfixRequest{
pub Description:String,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct CheckSubmissionRequest{
pub SubmissionID:u64,
@@ -35,7 +35,7 @@ pub struct CheckSubmissionRequest{
pub SkipChecks:bool,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct CheckMapfixRequest{
pub MapfixID:u64,
@@ -43,7 +43,7 @@ pub struct CheckMapfixRequest{
pub SkipChecks:bool,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct ValidateSubmissionRequest{
// submission_id is passed back in the response message
@@ -53,7 +53,7 @@ pub struct ValidateSubmissionRequest{
pub ValidatedModelID:Option<u64>,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct ValidateMapfixRequest{
// submission_id is passed back in the response message
@@ -64,7 +64,7 @@ pub struct ValidateMapfixRequest{
}
// Create a new map
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct UploadSubmissionRequest{
pub SubmissionID:u64,
@@ -73,7 +73,7 @@ pub struct UploadSubmissionRequest{
pub ModelName:String,
}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct UploadMapfixRequest{
pub MapfixID:u64,
@@ -81,3 +81,34 @@ pub struct UploadMapfixRequest{
pub ModelVersion:u64,
pub TargetAssetID:u64,
}
// Release a new map
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct ReleaseSubmissionRequest{
pub SubmissionID:u64,
pub ReleaseDate:i64,
pub ModelID:u64,
pub ModelVersion:u64,
pub UploadedAssetID:u64,
pub DisplayName:String,
pub Creator:String,
pub GameID:u32,
pub Submitter:u64,
}
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct ReleaseSubmissionsBatchRequest{
pub Submissions:Vec<ReleaseSubmissionRequest>,
pub OperationID:u32,
}
#[expect(nonstandard_style)]
#[derive(serde::Deserialize)]
pub struct ReleaseMapfixRequest{
pub MapfixID:u64,
pub ModelID:u64,
pub ModelVersion:u64,
pub TargetAssetID:u64,
}


@@ -1,5 +1,4 @@
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum ReadDomError{
Binary(rbx_binary::DecodeError),
@@ -112,3 +111,21 @@ pub fn get_mapinfo<'a>(dom:&'a rbx_dom_weak::WeakDom,model_instance:&rbx_dom_wea
game_id:model_instance.name.parse(),
}
}
pub async fn get_luau_result_exp_backoff(
context:&rbx_asset::cloud::Context,
luau_session:&rbx_asset::cloud::LuauSessionResponse
)->Result<Result<rbx_asset::cloud::LuauResults,rbx_asset::cloud::LuauError>,rbx_asset::cloud::LuauSessionError>{
const BACKOFF_MUL:f32=1.395_612_5;//exp(1/3)
let mut backoff=1000f32;
loop{
match luau_session.try_get_result(context).await{
//try again when the operation is not done
Err(rbx_asset::cloud::LuauSessionError::NotDone)=>(),
//return all other results
other_result=>return other_result,
}
tokio::time::sleep(std::time::Duration::from_millis(backoff as u64)).await;
backoff*=BACKOFF_MUL;
}
}
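The polling loop above starts at 1000 ms and multiplies by `BACKOFF_MUL = exp(1/3)` each attempt, so the delay grows by a factor of e (~2.718x) every three polls. A small sketch of the delay schedule (the `backoff_ms` helper is illustrative, not part of the diff):

```rust
// Same constant as the diff: exp(1/3), so three consecutive
// multiplications scale the delay by e.
const BACKOFF_MUL: f32 = 1.395_612_5;

// Delay in milliseconds before the n-th poll, starting from 1000 ms.
fn backoff_ms(n: u32) -> u64 {
    (1000f32 * BACKOFF_MUL.powi(n as i32)) as u64
}
```

The truncation to `u64` mirrors the diff's `backoff as u64` when building the `Duration`.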

validation/src/release.rs Normal file

@@ -0,0 +1,104 @@
use crate::rbx_util::read_dom;
#[expect(unused)]
#[derive(Debug)]
pub enum ModesError{
ApiActionMapfixReleased(tonic::Status),
ModelFileDecode(crate::rbx_util::ReadDomError),
GetRootInstance(crate::rbx_util::GetRootInstanceError),
NonSequentialModes,
TooManyModes(usize),
}
impl std::fmt::Display for ModesError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for ModesError{}
// decode and get modes function
pub fn count_modes(maybe_gzip:rbx_asset::types::MaybeGzippedBytes)->Result<u32,ModesError>{
// decode dom (slow!)
let dom=maybe_gzip.read_with(read_dom,read_dom).map_err(ModesError::ModelFileDecode)?;
// extract the root instance
let model_instance=crate::rbx_util::get_root_instance(&dom).map_err(ModesError::GetRootInstance)?;
// extract information from the model
let model_info=crate::check::get_model_info(&dom,model_instance);
// count modes
let modes=model_info.count_modes().ok_or(ModesError::NonSequentialModes)?;
// hard limit LOL
let modes=if modes<u32::MAX as usize{
modes as u32
}else{
return Err(ModesError::TooManyModes(modes));
};
Ok(modes)
}
#[expect(unused)]
#[derive(Debug)]
pub enum LoadAssetVersionsError{
CreateSession(rbx_asset::cloud::CreateError),
NonPositiveNumber(serde_json::Value),
Script(rbx_asset::cloud::LuauError),
InvalidResult(Vec<serde_json::Value>),
LuauSession(rbx_asset::cloud::LuauSessionError),
}
impl std::fmt::Display for LoadAssetVersionsError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for LoadAssetVersionsError{}
// get asset versions in bulk using Roblox Luau API
pub async fn load_asset_versions<I:IntoIterator<Item=u64>>(
context:&rbx_asset::cloud::Context,
runtime:&rbx_asset::cloud::LuauSessionLatestRequest,
assets:I,
)->Result<Vec<u64>,LoadAssetVersionsError>{
// construct script with inline IDs
// TODO: concurrent execution
let mut script="local InsertService=game:GetService(\"InsertService\")\nreturn\n".to_string();
for asset in assets{
use std::fmt::Write;
write!(script,"InsertService:GetLatestAssetVersionAsync({asset}),\n").unwrap();
}
let session=rbx_asset::cloud::LuauSessionCreate{
script:&script[..script.len()-2],
user:None,
timeout:None,
binaryInput:None,
enableBinaryOutput:None,
binaryOutputUri:None,
};
let session_response=context.create_luau_session(runtime,session).await.map_err(LoadAssetVersionsError::CreateSession)?;
let result=crate::rbx_util::get_luau_result_exp_backoff(&context,&session_response).await;
// * Note that only one mapfix can be active per map
// * so it's theoretically impossible for the map to be updated unexpectedly.
// * This means that the incremental asset version does not
// * need to be checked before and after the load asset version is checked.
match result{
Ok(Ok(rbx_asset::cloud::LuauResults{results}))=>{
results.into_iter().map(|load_asset_version|
match load_asset_version.as_u64(){
Some(version)=>Ok(version),
None=>Err(LoadAssetVersionsError::NonPositiveNumber(load_asset_version))
}
).collect()
},
Ok(Err(e))=>Err(LoadAssetVersionsError::Script(e)),
Err(e)=>Err(LoadAssetVersionsError::LuauSession(e)),
}
// * Don't need to check asset version to make sure it hasn't been updated
}
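`load_asset_versions` builds the Luau script as one `InsertService:GetLatestAssetVersionAsync(...)` line per asset, then slices off the final `",\n"` so the `return` expression list has no trailing comma. A stdlib-only sketch of that string construction (the helper name is illustrative):

```rust
use std::fmt::Write;

// Mirrors the diff's inline-script construction, including the
// `script[..script.len()-2]` trim of the trailing ",\n".
fn build_script<I: IntoIterator<Item = u64>>(assets: I) -> String {
    let mut script =
        "local InsertService=game:GetService(\"InsertService\")\nreturn\n".to_string();
    for asset in assets {
        // write! into a String cannot fail, hence the unwrap.
        write!(script, "InsertService:GetLatestAssetVersionAsync({asset}),\n").unwrap();
    }
    script[..script.len() - 2].to_string()
}
```

As in the diff, an empty asset list would trim two bytes of the header instead of a separator, so callers are assumed to pass at least one asset.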


@@ -0,0 +1,101 @@
use crate::download::download_asset_version;
use crate::nats_types::ReleaseMapfixRequest;
use crate::release::{count_modes,load_asset_versions};
#[expect(unused)]
#[derive(Debug)]
pub enum InnerError{
Download(crate::download::Error),
Modes(crate::release::ModesError),
LoadAssetVersions(crate::release::LoadAssetVersionsError),
LoadAssetVersionsListLength,
}
impl std::fmt::Display for InnerError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for InnerError{}
async fn release_inner(
cloud_context:&rbx_asset::cloud::Context,
cloud_context_luau_execution:&rbx_asset::cloud::Context,
load_asset_version_runtime:&rbx_asset::cloud::LuauSessionLatestRequest,
release_info:ReleaseMapfixRequest,
)->Result<rust_grpc::validator::MapfixReleaseRequest,InnerError>{
// download the map model
let maybe_gzip=download_asset_version(cloud_context,rbx_asset::cloud::GetAssetVersionRequest{
asset_id:release_info.ModelID,
version:release_info.ModelVersion,
}).await.map_err(InnerError::Download)?;
// count modes
let modes=count_modes(maybe_gzip).map_err(InnerError::Modes)?;
// fetch load asset version
let load_asset_versions=load_asset_versions(
cloud_context_luau_execution,
load_asset_version_runtime,
[release_info.TargetAssetID],
).await.map_err(InnerError::LoadAssetVersions)?;
// exactly one value in the results
let &[load_asset_version]=load_asset_versions.as_slice()else{
return Err(InnerError::LoadAssetVersionsListLength);
};
Ok(rust_grpc::validator::MapfixReleaseRequest{
mapfix_id:release_info.MapfixID,
target_asset_id:release_info.TargetAssetID,
asset_version:load_asset_version,
modes,
})
}
#[expect(unused)]
#[derive(Debug)]
pub enum Error{
ApiActionMapfixRelease(tonic::Status),
}
impl std::fmt::Display for Error{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for Error{}
impl crate::message_handler::MessageHandler{
pub async fn release_mapfix(&self,release_info:ReleaseMapfixRequest)->Result<(),Error>{
let mapfix_id=release_info.MapfixID;
let result=release_inner(
&self.cloud_context,
&self.cloud_context_luau_execution,
&self.load_asset_version_runtime,
release_info,
).await;
match result{
Ok(request)=>{
// update map metadata
self.mapfixes.set_status_released(request).await.map_err(Error::ApiActionMapfixRelease)?;
},
Err(e)=>{
// log error
println!("[release_mapfix] Error: {e}");
// post an error message to the audit log
self.mapfixes.create_audit_error(rust_grpc::validator::AuditErrorRequest{
id:mapfix_id,
error_message:e.to_string(),
}).await.map_err(Error::ApiActionMapfixRelease)?;
// update the mapfix model status to uploaded
self.mapfixes.set_status_not_released(rust_grpc::validator::MapfixId{
id:mapfix_id,
}).await.map_err(Error::ApiActionMapfixRelease)?;
},
}
Ok(())
}
}


@@ -0,0 +1,227 @@
use futures::StreamExt;
use crate::download::download_asset_version;
use crate::nats_types::ReleaseSubmissionsBatchRequest;
use crate::release::{count_modes,load_asset_versions};
#[expect(unused)]
#[derive(Debug)]
pub enum DownloadFutError{
Download(crate::download::Error),
Join(tokio::task::JoinError),
Modes(crate::release::ModesError),
}
impl std::fmt::Display for DownloadFutError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for DownloadFutError{}
#[derive(Debug)]
pub struct ErrorContext<E>{
submission_id:u64,
error:E,
}
impl<E:std::fmt::Debug> std::fmt::Display for ErrorContext<E>{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"SubmissionID({})={:?}",self.submission_id,self.error)
}
}
impl<E:std::fmt::Debug> std::error::Error for ErrorContext<E>{}
async fn download_fut(
cloud_context:&rbx_asset::cloud::Context,
asset_version:rbx_asset::cloud::GetAssetVersionRequest,
)->Result<u32,DownloadFutError>{
// download
let maybe_gzip=download_asset_version(cloud_context,asset_version)
.await
.map_err(DownloadFutError::Download)?;
// count modes on the blocking thread pool (decoding is CPU-bound)
let modes=tokio::task::spawn_blocking(||
count_modes(maybe_gzip)
)
.await
.map_err(DownloadFutError::Join)?
.map_err(DownloadFutError::Modes)?;
Ok::<_,DownloadFutError>(modes)
}
#[expect(unused)]
#[derive(Debug)]
pub enum InnerError{
Io(std::io::Error),
LoadAssetVersions(crate::release::LoadAssetVersionsError),
LoadAssetVersionsListLength,
DownloadFutErrors(Vec<ErrorContext<DownloadFutError>>),
ReleaseErrors(Vec<ErrorContext<tonic::Status>>),
}
impl std::fmt::Display for InnerError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for InnerError{}
const MAX_PARALLEL_DECODE:usize=6;
const MAX_CONCURRENT_RELEASE:usize=16;
async fn release_inner(
release_info:ReleaseSubmissionsBatchRequest,
cloud_context:&rbx_asset::cloud::Context,
cloud_context_luau_execution:&rbx_asset::cloud::Context,
load_asset_version_runtime:&rbx_asset::cloud::LuauSessionLatestRequest,
submissions:&crate::grpc::submissions::Service,
)->Result<(),InnerError>{
let available_parallelism=std::thread::available_parallelism().map_err(InnerError::Io)?.get();
// set up futures
// unnecessary allocation :(
let asset_versions:Vec<_> =release_info
.Submissions
.iter()
.map(|submission|rbx_asset::cloud::GetAssetVersionRequest{
asset_id:submission.ModelID,
version:submission.ModelVersion,
})
.enumerate()
.collect();
// fut_download
let fut_download=futures::stream::iter(asset_versions)
.map(|(index,asset_version)|async move{
let modes=download_fut(cloud_context,asset_version).await;
(index,modes)
})
.buffer_unordered(available_parallelism.min(MAX_PARALLEL_DECODE))
.collect::<Vec<(usize,Result<_,DownloadFutError>)>>();
// fut_luau
let fut_load_asset_versions=load_asset_versions(
cloud_context_luau_execution,
load_asset_version_runtime,
release_info.Submissions.iter().map(|submission|submission.UploadedAssetID),
);
// execute futures
let (mut modes_unordered,load_asset_versions_result)=tokio::join!(fut_download,fut_load_asset_versions);
let load_asset_versions=load_asset_versions_result.map_err(InnerError::LoadAssetVersions)?;
// sanity check roblox output
if load_asset_versions.len()!=release_info.Submissions.len(){
return Err(InnerError::LoadAssetVersionsListLength);
};
// rip asymptotic complexity (hash map would be better)
modes_unordered.sort_by_key(|&(index,_)|index);
// check modes calculations for all success
let mut modes=Vec::with_capacity(modes_unordered.len());
let mut errors=Vec::with_capacity(modes_unordered.len());
for (index,result) in modes_unordered{
match result{
Ok(value)=>modes.push(value),
Err(error)=>errors.push(ErrorContext{
submission_id:release_info.Submissions[index].SubmissionID,
error,
}),
}
}
if !errors.is_empty(){
return Err(InnerError::DownloadFutErrors(errors));
}
// concurrently dispatch results
let release_results:Vec<_> =futures::stream::iter(
release_info
.Submissions
.into_iter()
.zip(modes)
.zip(load_asset_versions)
.map(|((submission,modes),asset_version)|async move{
let result=submissions.set_status_released(rust_grpc::validator::SubmissionReleaseRequest{
submission_id:submission.SubmissionID,
map_create:Some(rust_grpc::maps_extended::MapCreate{
id:submission.UploadedAssetID as i64,
display_name:submission.DisplayName,
creator:submission.Creator,
game_id:submission.GameID,
date:submission.ReleaseDate,
submitter:submission.Submitter,
thumbnail:0,
asset_version,
modes,
}),
}).await;
(submission.SubmissionID,result)
})
)
.buffer_unordered(MAX_CONCURRENT_RELEASE)
.collect().await;
// check for errors
let errors:Vec<_> =
release_results
.into_iter()
.filter_map(|(submission_id,result)|
result.err().map(|e|ErrorContext{
submission_id,
error:e,
})
)
.collect();
if !errors.is_empty(){
return Err(InnerError::ReleaseErrors(errors));
}
Ok(())
}
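Because `buffer_unordered` yields results in completion order, `release_inner` tags each future with its index, sorts afterwards, and then splits successes from indexed failures. A stdlib-only sketch of that reorder-and-partition step (types simplified to generics; not the literal diff code):

```rust
// Restores submission order and splits out per-index errors,
// as release_inner does after buffer_unordered completes.
fn reorder_and_partition<T, E>(
    mut unordered: Vec<(usize, Result<T, E>)>,
) -> (Vec<T>, Vec<(usize, E)>) {
    // The diff notes a HashMap would avoid this O(n log n) sort.
    unordered.sort_by_key(|&(index, _)| index);
    let mut values = Vec::with_capacity(unordered.len());
    let mut errors = Vec::new();
    for (index, result) in unordered {
        match result {
            Ok(v) => values.push(v),
            Err(e) => errors.push((index, e)),
        }
    }
    (values, errors)
}
```

Keeping the index alongside each error is what lets the batch path report `SubmissionID(...)=...` per failed submission via `ErrorContext`.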
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
UpdateOperation(tonic::Status),
}
impl std::fmt::Display for Error{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for Error{}
impl crate::message_handler::MessageHandler{
pub async fn release_submissions_batch(&self,release_info:ReleaseSubmissionsBatchRequest)->Result<(),Error>{
let operation_id=release_info.OperationID;
let result=release_inner(
release_info,
&self.cloud_context,
&self.cloud_context_luau_execution,
&self.load_asset_version_runtime,
&self.submissions,
).await;
match result{
Ok(())=>{
// operation success
self.operations.success(rust_grpc::validator::OperationSuccessRequest{
operation_id,
path:String::new(),
}).await.map_err(Error::UpdateOperation)?;
},
Err(e)=>{
// operation error
self.operations.fail(rust_grpc::validator::OperationFailRequest{
operation_id,
status_message:e.to_string(),
}).await.map_err(Error::UpdateOperation)?;
},
}
Ok(())
}
}


@@ -5,8 +5,6 @@ pub struct MapfixID(pub(crate)u64);
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct SubmissionID(pub(crate)u64);
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct OperationID(pub(crate)u64);
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct ResourceID(pub(crate)u64);
#[derive(Clone,Copy,Debug,Hash,Eq,PartialEq,serde::Serialize,serde::Deserialize)]
pub struct ScriptID(pub(crate)u64);


@@ -1,13 +1,51 @@
use crate::download::download_asset_version;
use crate::nats_types::UploadMapfixRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
pub enum InnerError{
Download(crate::download::Error),
IO(std::io::Error),
Json(serde_json::Error),
Upload(rbx_asset::cookie::UploadError),
}
impl std::fmt::Display for InnerError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for InnerError{}
async fn upload_inner(
upload_info:UploadMapfixRequest,
cloud_context:&rbx_asset::cloud::Context,
cookie_context:&rbx_asset::cookie::Context,
group_id:Option<u64>,
)->Result<(),InnerError>{
// download the map model
let maybe_gzip=download_asset_version(cloud_context,rbx_asset::cloud::GetAssetVersionRequest{
asset_id:upload_info.ModelID,
version:upload_info.ModelVersion,
}).await.map_err(InnerError::Download)?;
// transparently handle gzipped models
let model_data=maybe_gzip.to_vec().map_err(InnerError::IO)?;
// upload the map to the strafesnet group
let _upload_response=cookie_context.upload(rbx_asset::cookie::UploadRequest{
assetid:upload_info.TargetAssetID,
groupId:group_id,
name:None,
description:None,
ispublic:None,
allowComments:None,
},model_data).await.map_err(InnerError::Upload)?;
Ok(())
}
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ApiActionMapfixUploaded(tonic::Status),
}
impl std::fmt::Display for Error{
@@ -19,31 +57,39 @@ impl std::error::Error for Error{}
impl crate::message_handler::MessageHandler{
pub async fn upload_mapfix(&self,upload_info:UploadMapfixRequest)->Result<(),Error>{
// download the map model
let maybe_gzip=download_asset_version(&self.cloud_context,rbx_asset::cloud::GetAssetVersionRequest{
asset_id:upload_info.ModelID,
version:upload_info.ModelVersion,
}).await.map_err(Error::Download)?;
let mapfix_id=upload_info.MapfixID;
let result=upload_inner(
upload_info,
&self.cloud_context,
&self.cookie_context,
self.group_id,
).await;
// transparently handle gzipped models
let model_data=maybe_gzip.to_vec().map_err(Error::IO)?;
// update the mapfix depending on the result
match result{
Ok(())=>{
// mark mapfix as uploaded, TargetAssetID is unchanged
self.mapfixes.set_status_uploaded(rust_grpc::validator::MapfixId{
id:mapfix_id,
}).await.map_err(Error::ApiActionMapfixUploaded)?;
},
Err(e)=>{
// log error
println!("[upload_mapfix] Error: {e}");
// upload the map to the strafesnet group
let _upload_response=self.cookie_context.upload(rbx_asset::cookie::UploadRequest{
assetid:upload_info.TargetAssetID,
groupId:self.group_id,
name:None,
description:None,
ispublic:None,
allowComments:None,
},model_data).await.map_err(Error::Upload)?;
self.mapfixes.create_audit_error(
rust_grpc::validator::AuditErrorRequest{
id:mapfix_id,
error_message:e.to_string(),
}
).await.map_err(Error::ApiActionMapfixUploaded)?;
// that's it, the database entry does not need to be changed.
// mark mapfix as uploaded, TargetAssetID is unchanged
self.mapfixes.set_status_uploaded(rust_grpc::validator::MapfixId{
id:upload_info.MapfixID,
}).await.map_err(Error::ApiActionMapfixUploaded)?;
// set the mapfix model status back to not uploaded
self.mapfixes.set_status_not_uploaded(rust_grpc::validator::MapfixId{
id:mapfix_id,
}).await.map_err(Error::ApiActionMapfixUploaded)?;
},
}
Ok(())
}


@@ -1,14 +1,52 @@
use crate::download::download_asset_version;
use crate::nats_types::UploadSubmissionRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
pub enum InnerError{
Download(crate::download::Error),
IO(std::io::Error),
Json(serde_json::Error),
Create(rbx_asset::cookie::CreateError),
SystemTime(std::time::SystemTimeError),
}
impl std::fmt::Display for InnerError{
fn fmt(&self,f:&mut std::fmt::Formatter<'_>)->std::fmt::Result{
write!(f,"{self:?}")
}
}
impl std::error::Error for InnerError{}
async fn upload_inner(
upload_info:UploadSubmissionRequest,
cloud_context:&rbx_asset::cloud::Context,
cookie_context:&rbx_asset::cookie::Context,
group_id:Option<u64>,
)->Result<u64,InnerError>{
// download the map model
let maybe_gzip=download_asset_version(cloud_context,rbx_asset::cloud::GetAssetVersionRequest{
asset_id:upload_info.ModelID,
version:upload_info.ModelVersion,
}).await.map_err(InnerError::Download)?;
// transparently handle gzipped models
let model_data=maybe_gzip.to_vec().map_err(InnerError::IO)?;
// upload the map to the strafesnet group
let upload_response=cookie_context.create(rbx_asset::cookie::CreateRequest{
name:upload_info.ModelName.clone(),
description:"".to_owned(),
ispublic:false,
allowComments:false,
groupId:group_id,
},model_data).await.map_err(InnerError::Create)?;
Ok(upload_response.AssetId)
}
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ApiActionSubmissionUploaded(tonic::Status),
}
impl std::fmt::Display for Error{
@@ -20,29 +58,40 @@ impl std::error::Error for Error{}
impl crate::message_handler::MessageHandler{
pub async fn upload_submission(&self,upload_info:UploadSubmissionRequest)->Result<(),Error>{
// download the map model
let maybe_gzip=download_asset_version(&self.cloud_context,rbx_asset::cloud::GetAssetVersionRequest{
asset_id:upload_info.ModelID,
version:upload_info.ModelVersion,
}).await.map_err(Error::Download)?;
let submission_id=upload_info.SubmissionID;
let result=upload_inner(
upload_info,
&self.cloud_context,
&self.cookie_context,
self.group_id,
).await;
// transparently handle gzipped models
let model_data=maybe_gzip.to_vec().map_err(Error::IO)?;
// update the submission depending on the result
match result{
Ok(uploaded_asset_id)=>{
// note the asset id of the created model for later release, and mark the submission as uploaded
self.submissions.set_status_uploaded(rust_grpc::validator::StatusUploadedRequest{
id:submission_id,
uploaded_asset_id,
}).await.map_err(Error::ApiActionSubmissionUploaded)?;
},
Err(e)=>{
// log error
println!("[upload_submission] Error: {e}");
// upload the map to the strafesnet group
let upload_response=self.cookie_context.create(rbx_asset::cookie::CreateRequest{
name:upload_info.ModelName.clone(),
description:"".to_owned(),
ispublic:false,
allowComments:false,
groupId:self.group_id,
},model_data).await.map_err(Error::Create)?;
self.submissions.create_audit_error(
rust_grpc::validator::AuditErrorRequest{
id:submission_id,
error_message:e.to_string(),
}
).await.map_err(Error::ApiActionSubmissionUploaded)?;
// note the asset id of the created model for later release, and mark the submission as uploaded
self.submissions.set_status_uploaded(rust_grpc::validator::StatusUploadedRequest{
id:upload_info.SubmissionID,
uploaded_asset_id:upload_response.AssetId,
}).await.map_err(Error::ApiActionSubmissionUploaded)?;
// set the submission model status back to not uploaded
self.submissions.set_status_not_uploaded(rust_grpc::validator::SubmissionId{
id:submission_id,
}).await.map_err(Error::ApiActionSubmissionUploaded)?;
},
}
Ok(())
}


@@ -1,6 +1,6 @@
use crate::nats_types::ValidateMapfixRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ApiActionMapfixValidate(tonic::Status),
@@ -37,7 +37,7 @@ impl crate::message_handler::MessageHandler{
).await.map_err(Error::ApiActionMapfixValidate)?;
// update the mapfix model status to not validated
self.mapfixes.set_status_failed(rust_grpc::validator::MapfixId{
self.mapfixes.set_status_not_validated(rust_grpc::validator::MapfixId{
id:mapfix_id,
}).await.map_err(Error::ApiActionMapfixValidate)?;
},


@@ -1,6 +1,6 @@
use crate::nats_types::ValidateSubmissionRequest;
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ApiActionSubmissionValidate(tonic::Status),
@@ -37,7 +37,7 @@ impl crate::message_handler::MessageHandler{
).await.map_err(Error::ApiActionSubmissionValidate)?;
// update the submission model status to not validated
self.submissions.set_status_failed(rust_grpc::validator::SubmissionId{
self.submissions.set_status_not_validated(rust_grpc::validator::SubmissionId{
id:submission_id,
}).await.map_err(Error::ApiActionSubmissionValidate)?;
},


@@ -17,7 +17,7 @@ fn hash_source(source:&str)->u64{
std::hash::Hasher::finish(&hasher)
}
#[allow(dead_code)]
#[expect(dead_code)]
#[derive(Debug)]
pub enum Error{
ModelInfoDownload(rbx_asset::cloud::GetError),
@@ -52,7 +52,7 @@ impl std::fmt::Display for Error{
}
impl std::error::Error for Error{}
#[allow(nonstandard_style)]
#[expect(nonstandard_style)]
pub struct ValidateRequest{
pub ModelID:u64,
pub ModelVersion:u64,
@@ -318,7 +318,7 @@ fn get_partial_path(dom:&rbx_dom_weak::WeakDom,instance:&rbx_dom_weak::Instance)
}
fn get_script_refs(dom:&rbx_dom_weak::WeakDom)->Vec<rbx_dom_weak::types::Ref>{
let db=rbx_reflection_database::get();
let db=rbx_reflection_database::get().unwrap();
let superclass=&db.classes["LuaSourceContainer"];
dom.descendants().filter_map(|inst|{
let class=db.classes.get(inst.class.as_str())?;

Some files were not shown because too many files have changed in this diff.