Windows-based shells add a CRLF when piping the token into the
ssh-keygen command, resulting in a verification error.
This resolves #21527.
---------
Co-authored-by: Heiko Besemann <heiko.besemann@qbeyond.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Refactor the Hash interfaces and centralize the hash function. This will allow
easier introduction of different hash functions later on.
This forms the "no-op" part of the SHA256 enablement patch.
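A minimal sketch of the kind of abstraction such a refactor enables (the interface and method names below are illustrative, not necessarily the ones used in the patch):

```go
package git

import "hash"

// ObjectFormat abstracts the hash function used for Git object IDs, so callers
// stop hard-coding SHA-1 and a SHA-256 format can be slotted in later.
type ObjectFormat interface {
	Name() string          // e.g. "sha1", later also "sha256"
	FullLength() int       // hex length of an object ID: 40 for SHA-1, 64 for SHA-256
	NewHasher() hash.Hash  // the underlying hash function
	EmptyObjectID() string // ID of the empty object for this format
}
```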
Fix #28056
This PR checks whether the repository has zero branches when a branch is
pushed. If so, it means this repository hasn't been synced yet.
This happens when a user upgrades from v1.20 to v1.21 and then pushes
branches without visiting the repository in the web interface first: all
repository routers check whether a branch sync is necessary, but the push
path has no such check.
Every repository is in one of two states: synced or not synced. If a
repository has zero branches, it is assumed to be in the non-synced state;
otherwise it is considered synced. If it is synced, we only need to update an
existing branch or insert a new one; otherwise we do a full sync. As a
result, each push adds almost no extra load, and this is more performant than
the alternative approach.
For the implementation, we first try to update the branch. If the update
succeeds and the number of affected records is > 0, we are done, because that
means the branch was already in the database. If no record is affected, the
branch does not exist in the database, which leaves two possibilities: either
this is a new branch, so we just insert the record, or the branches haven't
been synced yet, so we sync all the branches into the database.
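A rough sketch of that flow (simplified; the function and helper names are illustrative, not the exact code in this PR; `db` and `git_model` refer to Gitea's models packages):

```go
func syncBranchOnPush(ctx context.Context, repoID int64, branchName, commitID string) error {
	// Cheap path: try to update the existing branch row.
	affected, err := db.GetEngine(ctx).
		Where("repo_id = ? AND name = ?", repoID, branchName).
		Cols("commit_id").
		Update(&git_model.Branch{CommitID: commitID})
	if err != nil || affected > 0 {
		return err // the branch row already existed, nothing else to do
	}
	// No row affected: either a brand-new branch, or the repo was never synced.
	total, err := db.GetEngine(ctx).Where("repo_id = ?", repoID).Count(new(git_model.Branch))
	if err != nil {
		return err
	}
	if total > 0 {
		// the repo is already synced, just insert the new branch
		_, err = db.GetEngine(ctx).Insert(&git_model.Branch{RepoID: repoID, Name: branchName, CommitID: commitID})
		return err
	}
	// zero branches in the DB means "never synced": do a full branch sync
	return syncAllBranches(ctx, repoID) // hypothetical helper
}
```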
The function `GetByBean` has an obvious defect: fields with empty values are
ignored when building the query, so callers can get a wrong result, which
could possibly be exploited as a security problem.
To avoid that possibility, this PR removes the function `GetByBean` and all
its references, and introduces some new generic functions to use instead.
The recommended usage is shown below.
```go
// query an object by its ID
obj, err := db.GetByID[Object](ctx, id)
// query with other conditions
obj, err := db.Get[Object](ctx, builder.Eq{"a": a, "b": b})
```
It will fix #28268.
<img width="1313" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/cb1e07d5-7a12-4691-a054-8278ba255bfc">
<img width="1318" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/4fd60820-97f1-4c2c-a233-d3671a5039e9">
## ⚠️ BREAKING ⚠️
But we need to give up some features:
<img width="1312" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/281c0d51-0e7d-473f-bbed-216e2f645610">
However, such abandonment may fix #28055.
## Background
When the user switches the dashboard context to an org, it means they
want to search issues in the repos that belong to the org. However, when
they switch to themselves, it means all repos they can access because
they may have created an issue in a public repo that they don't own.
<img width="286" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/182dcd5b-1c20-4725-93af-96e8dfae5b97">
It's a confusing design. Think about it: what does "In your repositories"
mean when the user switches to an org? Repos that belong to the user, or to
the org?
Regardless, it has been broken by #26012 and its follow-up PRs. Since that
PR, issues are searched only in repos that the dashboard context user owns or
has been explicitly granted access to, which causes #28268.
## How to fix it
It's not really difficult to fix. Just extend the repo scope when searching
issues while the dashboard context user is the doer. Since the user may
create issues or be mentioned in any public repo, we can simply set
`AllPublic` to true, which is already supported by the indexers. The DB
condition will also support it in this PR.
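The DB-side condition could look roughly like this (a hedged sketch using xorm's `builder`, as elsewhere in Gitea; `accessibleRepoIDs` and `opts` are placeholders, not the exact names in the PR):

```go
cond := builder.In("issue.repo_id", accessibleRepoIDs) // repos the doer owns or was granted access to
if opts.AllPublic {
	// widen the scope to every public repository
	cond = builder.Or(cond, builder.In("issue.repo_id",
		builder.Select("id").From("repository").Where(builder.Eq{"is_private": false})))
}
```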
But the real difficulty is how to count the search results grouped by
repos. It's something like "search issues with this keyword and those
filters, and return the total number and the top results. **Then, group
all of them by repo and return the counts of each group.**"
<img width="314" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/5206eb20-f8f5-49b9-b45a-1be2fcf679f4">
Before #26012, it was done in the DB, but that caused the results to be
incomplete (see the description of #26012).
To keep this feature, #26012 implemented it in an inefficient way, counting
the issues repo by repo, so it cannot work when `AllPublic` is true because
it's almost impossible to do that for all public repos.
1bfcdeef4c/modules/indexer/issues/indexer.go (L318-L338)
## Give up unnecessary features
We might be able to resolve `TODO: use "group by" of the indexer engines to
implement it`; I'm sure it can be done with Elasticsearch, but IIRC, Bleve
and Meilisearch don't support "group by".
The real question is: is it worth it? Why do we need to know the counts
grouped by repo?
Let me show you my search dashboard on gitea.com.
<img width="1304" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/2bca2d46-6c71-4de1-94cb-0c9af27c62ff">
I don't think the long repo list helps at all.
And if we agree to abandon it, things become much easier. That is what this
PR does.
## TODO
I know it's important to filter by repos when searching issues. However, it
shouldn't work the way it does now. It could be implemented like this:
<img width="1316" alt="image"
src="https://github.com/go-gitea/gitea/assets/9418365/99ee5f21-cbb5-4dfe-914d-cb796cb79fbe">
The indexers support it well now, but it requires some frontend work,
which I'm not good at. So, I think someone could help do that in another
PR and merge this one to fix the bug first.
Or please block this PR and help to complete it.
Finally, "Switch dashboard context" is also a design that needs
improvement. In my opinion, it can be accomplished by adding filtering
conditions instead of "switching".
When we pick up a job, all waiting jobs should first be ordered by update
time; otherwise, when there is a running job and I rerun an older job, the
older job will run first because its ID is smaller.
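A small sketch of the intended pickup order (illustrative; the real query lives in the Actions job queue code, and the model/field names here are assumptions):

```go
var jobs []*actions_model.ActionRunJob
err := db.GetEngine(ctx).
	Where("status = ?", actions_model.StatusWaiting).
	OrderBy("updated"). // pick the least recently updated waiting job, not the smallest ID
	Limit(1).
	Find(&jobs)
```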
This resolves a problem I encountered while updating Gitea from 1.20.4 to
1.21. For some reason (correct or otherwise), there are some values in
`repository.size` that are NULL in my Gitea database, which cause this
migration to fail due to the NOT NULL constraints.
Log snippet (excuse the escape characters)
```
ESC[36mgitea |ESC[0m 2023-12-04T03:52:28.573122395Z 2023/12/04 03:52:28 ...ations/migrations.go:641:Migrate() [I] Migration[263]: Add git_size and lfs_size columns to repository table
ESC[36mgitea |ESC[0m 2023-12-04T03:52:28.608705544Z 2023/12/04 03:52:28 routers/common/db.go:36:InitDBEngine() [E] ORM engine initialization attempt #3/10 failed. Error: migrate: migration[263]: Add git_size and lfs_size columns to repository table failed: NOT NULL constraint failed: repository.git_size
```
I assume this should be reasonably safe since `repository.git_size` has a
default value of 0, but I don't know whether that value being 0 in the odd
situation where `repository.size == NULL` has any problematic consequences.
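A hedged sketch of the kind of guard that avoids the failure (illustrative; the function name is made up and this is not necessarily the exact change that was merged):

```go
func addGitSizeAndLFSSizeToRepositoryTable(x *xorm.Engine) error {
	// normalize NULL sizes first, since the new columns are NOT NULL
	if _, err := x.Exec("UPDATE repository SET size = 0 WHERE size IS NULL"); err != nil {
		return err
	}
	// ... then add the git_size / lfs_size columns and copy size into git_size as before
	return nil
}
```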
- Currently the repository description uses the same sanitizer as a
normal markdown document. This means that elements such as headings and
images are allowed and can be abused.
- Create a minimal restricted sanitizer for the repository description,
which only allows what the postprocessor currently allows: links and
emojis (a rough sketch follows this list).
- Added unit testing.
- Resolves https://codeberg.org/forgejo/forgejo/issues/1202
- Resolves https://codeberg.org/Codeberg/Community/issues/1122
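A rough sketch of such a restricted policy using bluemonday, which the existing sanitizer is built on (the function name and exact allow-list are illustrative; the merged change may differ):

```go
func repoDescriptionPolicy() *bluemonday.Policy {
	policy := bluemonday.NewPolicy()
	// links only, with safe schemes
	policy.AllowAttrs("href", "target", "rel").OnElements("a")
	policy.AllowURLSchemes("http", "https", "mailto")
	// emoji spans as produced by the post-processor
	policy.AllowAttrs("class").Matching(regexp.MustCompile(`^emoji$`)).OnElements("span")
	policy.AllowAttrs("aria-label").OnElements("span")
	return policy
}
```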
(cherry picked from commit 631c87cc23)
Co-authored-by: Gusted <postmaster@gusted.xyz>
Changed behavior to calculate package quota limit using package `creator
ID` instead of `owner ID`.
Currently, users are allowed to create an unlimited number of
organizations, each of which has its own package limit quota, resulting
in the ability for users to have unlimited package space in different
organization scopes. This fix calculates the package quota based on the
`package version creator ID` instead of the `package version owner ID`
(which might be an organization), so that users cannot take up more space
than the configured package settings allow.
There is also an edge case in which a user can publish packages to a specific
package version that was initially published by a different user, consuming
that user's package size quota. The approach in this fix should still be
better, because the total amount of space is limited to the quota even for
users sharing the same organization scope.
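In pseudocode, the changed check looks roughly like this (the helper and error names are hypothetical, not the actual Gitea functions):

```go
func checkPackageQuota(ctx context.Context, doer *user_model.User, limit int64) error {
	// before: the count was keyed on the owning user/org (owner.ID)
	used, err := countPackageVersionsByCreator(ctx, doer.ID) // hypothetical helper
	if err != nil {
		return err
	}
	if limit >= 0 && used >= limit {
		return ErrQuotaExceeded // hypothetical quota error
	}
	return nil
}
```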
System users (Ghost, ActionsUser, etc) have a negative id and may be the
author of a comment, either because it was created by a now deleted user
or via an action using a transient token.
The GetPossibleUserByID function has special cases related to system
users and will not fail if given a negative id.
Refs: https://codeberg.org/forgejo/forgejo/issues/1425
(cherry picked from commit 6a2d2fa243)
Fixes https://codeberg.org/forgejo/forgejo/issues/1458
Some mails such as issue creation mails are missing the reply-to-comment
address. This PR fixes that and specifies which comment types should get
a reply possibility.
## Bug in Gitea
I ran into this bug when I accidentally used the wrong redirect URL for the
OAuth2 provider while using MSSQL, but the OAuth2 provider still got added.
Most of the time, we use `Delete(&some{id: some.id})` or
`In(condition).Delete(&some{})`, which specify the conditions. But this
function uses `Delete(source)`, where `source.Cfg` is a `TEXT` field and not
empty. This causes the xorm `Delete` function to not work on MSSQL.
61ff91f960/models/auth/source.go (L234-L240)
## Reason
Because a `TEXT` field cannot be used as a comparison condition in MSSQL,
xorm doesn't support it, according to [this
PR](https://gitea.com/xorm/xorm/pulls/2062).
[Related
code](b23798dc98/internal/statements/statement.go (L552-L558))
in xorm:
```go
if statement.dialect.URI().DBType == schemas.MSSQL && (col.SQLType.Name == schemas.Text ||
	col.SQLType.IsBlob() || col.SQLType.Name == schemas.TimeStampz) {
	if utils.IsValueZero(fieldValue) {
		continue
	}
	return nil, fmt.Errorf("column %s is a TEXT type with data %#v which cannot be as compare condition", col.Name, fieldValue.Interface())
}
```
When using the `Delete` function in xorm, the non-empty fields are
automatically used as conditions (perhaps some special fields are not?). If a
`TEXT` field is not empty, xorm will return an error. I only found this one
usage after searching, but maybe there is something I'm missing.
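A hedged sketch of the safer pattern (illustrative, not necessarily the exact code in this PR): delete by primary key so xorm never tries to use the `TEXT` `cfg` column as a comparison condition.

```go
func deleteSource(ctx context.Context, source *auth.Source) error {
	count, err := db.GetEngine(ctx).ID(source.ID).Delete(new(auth.Source))
	if err != nil {
		return err
	}
	if count == 0 {
		return auth.ErrSourceNotExist{ID: source.ID}
	}
	return nil
}
```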
---------
Co-authored-by: delvh <dev.lh@web.de>
- On user deletion, delete action runners that the user has created.
- Add a database consistency check to remove action runners that belong to a
nonexistent owner.
- Resolves https://codeberg.org/forgejo/forgejo/issues/1720
(cherry picked from commit 009ca7223d)
Co-authored-by: Gusted <postmaster@gusted.xyz>
The steps to reproduce it:
First, create a new OAuth2 source.
Then, a user logs in with this OAuth2 source.
Disable the OAuth2 source.
Visit users -> settings -> security; a 500 error will be displayed.
This happens because the page only loads active OAuth2 sources, not all
OAuth2 sources.
See https://github.com/go-gitea/gitea/pull/27718#issuecomment-1773743014.
A test is added to ensure its behavior.
Why does this test use `ProjectBoardID=0`? Because in `SearchOptions`,
`ProjectBoardID=0` means exactly that. But in `IssueOptions`,
`ProjectBoardID=0` means there is no condition, and
`ProjectBoardID=db.NoConditionID` means the board ID = 0.
It's really confusing. It's probably better to separate the DB search engine
from the other issue search code; they are really two different systems. As
far as I can see, `IssueOptions` is not necessary for most of the code, which
has very simple issue search conditions.
1. Remove the unused function `MoveIssueAcrossProjectBoards`.
2. Extract the project board condition into a function (a rough sketch
follows this list).
3. Use db.NoCondition instead of -1. (BTW, the usage of db.NoCondition
is too confusing. Is there any way to avoid it?)
4. Remove the unnecessary comment since the ctx refactor is completed.
5. Change `b.ID != 0` to `b.ID > 0`. It's more intuitive, but I think
they're equivalent since a board ID can't be negative.
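As a rough sketch of that extracted helper, following the `IssueOptions` semantics described above (the function and column names are illustrative, not the exact code in this PR):

```go
func projectBoardCondition(boardID int64) builder.Cond {
	switch {
	case boardID == db.NoConditionID:
		// explicitly ask for issues that are in no board (board ID = 0)
		return builder.Eq{"project_issue.project_board_id": 0}
	case boardID > 0:
		return builder.Eq{"project_issue.project_board_id": boardID}
	default:
		// 0 means "no condition at all"
		return builder.NewCond()
	}
}
```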
Closes #27455
> The mechanism responsible for long-term authentication (the 'remember
me' cookie) uses a weak construction technique. It will hash the user's
hashed password and the rands value; it will then call the secure cookie
code, which will encrypt the user's name with the computed hash. If one
were able to dump the database, they could extract those two values to
rebuild that cookie and impersonate a user. That vulnerability exists
from the date the dump was obtained until a user changed their password.
>
> To fix this security issue, the cookie could be created and verified
using a different technique such as the one explained at
https://paragonie.com/blog/2015/04/secure-authentication-php-with-long-term-persistence#secure-remember-me-cookies.
The PR removes the now obsolete setting `COOKIE_USERNAME`.
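For context, a minimal sketch of the selector/validator technique the linked article describes (illustrative only, not the exact code this PR introduces; uses crypto/rand, crypto/sha256, crypto/subtle, and encoding/hex):

```go
// issue a token: the cookie carries selector+validator; the DB stores the selector
// and only a hash of the validator, so a DB dump alone cannot rebuild the cookie
func newRememberMeToken() (cookieValue, selector string, validatorHash []byte, err error) {
	raw := make([]byte, 32)
	if _, err = rand.Read(raw); err != nil {
		return "", "", nil, err
	}
	selector = hex.EncodeToString(raw[:8])
	validator := hex.EncodeToString(raw[8:])
	sum := sha256.Sum256([]byte(validator))
	return selector + ":" + validator, selector, sum[:], nil
}

// verify: look the record up by selector, then compare hashes in constant time
func verifyRememberMe(storedHash []byte, presentedValidator string) bool {
	sum := sha256.Sum256([]byte(presentedValidator))
	return subtle.ConstantTimeCompare(storedHash, sum[:]) == 1
}
```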
assert.Fail() continues executing the code, while assert.FailNow() does not.
I think those uses of assert.Fail() should exit immediately.
PS: perhaps it's a good idea to use
[require](https://pkg.go.dev/github.com/stretchr/testify/require)
in some places, because the assert package's default behavior does not stop
the test when an assertion fails, which makes it difficult to find the root
cause of an error.
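A small illustration of the difference (a hypothetical test, not code from this PR; `loadObject` is a made-up helper):

```go
func TestExample(t *testing.T) {
	obj, err := loadObject()
	assert.NoError(t, err)  // records the failure but the test keeps running
	require.NoError(t, err) // like FailNow: aborts the test immediately on failure
	require.NotNil(t, obj)  // safe to dereference obj below
	assert.Equal(t, "expected", obj.Name)
}
```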
Part of https://github.com/go-gitea/gitea/issues/27097:
- `gitea` theme is renamed to `gitea-light`
- `arc-green` theme is renamed to `gitea-dark`
- `auto` theme is renamed to `gitea-auto`
I put both themes in separate CSS files, removing all colors from the
base CSS. Existing users will be migrated to the new theme names. The
dark theme recolor will follow in a separate PR.
## ⚠️ BREAKING ⚠️
1. If there are existing custom themes with the names `gitea-light` or
`gitea-dark`, rename them before this upgrade and update the `theme`
column in the `user` table for each affected user.
2. The theme in `<html>` has moved from `class="theme-name"` to
`data-theme="name"`; existing customizations that depend on it should be
updated.
---------
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
This PR reduces the complexity of the system setting system.
Introducing a new option now only needs one line, and the option can be used
anywhere out of the box.
It is still highly performant (in fact more performant than before) because
the config values are cached in the config system.
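A self-contained, hypothetical sketch of the idea (illustrative names only, not Gitea's actual config API): an option is declared in a single line, and reads are answered from an in-memory cache that is refreshed from the settings store when it expires.

```go
package config

import (
	"sync"
	"time"
)

// Option describes a single system setting; declaring a new one takes one line.
type Option[T any] struct {
	Key     string
	Default T
}

// Declaring a new option (illustrative key and name):
var DisableGravatar = Option[bool]{Key: "picture.disable_gravatar", Default: false}

var (
	mu      sync.Mutex
	cache   map[string]any // values loaded from the settings store
	expires time.Time
	loadAll = func() map[string]any { // placeholder loader; the real one would query the DB
		return map[string]any{}
	}
)

// Get serves reads from the in-memory cache and refreshes it when it expires,
// so frequent option reads never hit the database.
func Get[T any](opt Option[T]) T {
	mu.Lock()
	defer mu.Unlock()
	if time.Now().After(expires) {
		cache = loadAll()
		expires = time.Now().Add(time.Minute)
	}
	if v, ok := cache[opt.Key].(T); ok {
		return v
	}
	return opt.Default
}
```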