init commit
4 vendor/github.com/dgrijalva/jwt-go/.gitignore generated vendored Normal file
@@ -0,0 +1,4 @@
.DS_Store
bin
13 vendor/github.com/dgrijalva/jwt-go/.travis.yml generated vendored Normal file
@@ -0,0 +1,13 @@
language: go

script:
  - go vet ./...
  - go test -v ./...

go:
  - 1.3
  - 1.4
  - 1.5
  - 1.6
  - 1.7
  - tip
8 vendor/github.com/dgrijalva/jwt-go/LICENSE generated vendored Normal file
@@ -0,0 +1,8 @@
Copyright (c) 2012 Dave Grijalva

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
97 vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md generated vendored Normal file
@@ -0,0 +1,97 @@
## Migration Guide from v2 -> v3

Version 3 adds several new, frequently requested features. To do so, it introduces a few breaking changes. We've worked to keep these as minimal as possible. This guide explains the breaking changes and how you can quickly update your code.

### `Token.Claims` is now an interface type

The most requested feature from the 2.0 version of this library was the ability to provide a custom type to the JSON parser for claims. This was implemented by introducing a new interface, `Claims`, to replace `map[string]interface{}`. We also included two concrete implementations of `Claims`: `MapClaims` and `StandardClaims`.

`MapClaims` is an alias for `map[string]interface{}` with built-in validation behavior. It is the default claims type when using `Parse`. The usage is unchanged except that you must type-assert the claims property.

The old example for parsing a token looked like this:

```go
if token, err := jwt.Parse(tokenString, keyLookupFunc); err == nil {
    fmt.Printf("Token for user %v expires %v", token.Claims["user"], token.Claims["exp"])
}
```

and is now directly mapped to:

```go
if token, err := jwt.Parse(tokenString, keyLookupFunc); err == nil {
    claims := token.Claims.(jwt.MapClaims)
    fmt.Printf("Token for user %v expires %v", claims["user"], claims["exp"])
}
```

`StandardClaims` is designed to be embedded in your custom type. You can supply a custom claims type with the new `ParseWithClaims` function. Here's an example of using a custom claims type:

```go
type MyCustomClaims struct {
    User string
    *StandardClaims
}

if token, err := jwt.ParseWithClaims(tokenString, &MyCustomClaims{}, keyLookupFunc); err == nil {
    claims := token.Claims.(*MyCustomClaims)
    fmt.Printf("Token for user %v expires %v", claims.User, claims.StandardClaims.ExpiresAt)
}
```

### `ParseFromRequest` has been moved

To keep this library focused on the tokens without becoming overburdened with complex request processing logic, `ParseFromRequest` and its new companion `ParseFromRequestWithClaims` have been moved to a subpackage, `request`. The method signatures have also been augmented to receive a new argument: `Extractor`.

`Extractors` do the work of picking the token string out of a request. The interface is simple and composable.

This simple parsing example:

```go
if token, err := jwt.ParseFromRequest(tokenString, req, keyLookupFunc); err == nil {
    fmt.Printf("Token for user %v expires %v", token.Claims["user"], token.Claims["exp"])
}
```

is directly mapped to:

```go
if token, err := request.ParseFromRequest(req, request.OAuth2Extractor, keyLookupFunc); err == nil {
    claims := token.Claims.(jwt.MapClaims)
    fmt.Printf("Token for user %v expires %v", claims["user"], claims["exp"])
}
```

There are several concrete `Extractor` types provided for your convenience (see the sketch after this list):

* `HeaderExtractor` will search a list of headers until one contains content.
* `ArgumentExtractor` will search a list of keys in request query and form arguments until one contains content.
* `MultiExtractor` will try a list of `Extractors` in order until one returns content.
* `AuthorizationHeaderExtractor` will look in the `Authorization` header for a `Bearer` token.
* `OAuth2Extractor` searches the places an OAuth2 token would be specified (per the spec): the `Authorization` header and the `access_token` argument.
* `PostExtractionFilter` wraps an `Extractor`, allowing you to process the content before it's parsed. A simple example is stripping the `Bearer ` text from a header.
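
The sketch below is not part of the original guide; it shows one way the extractors above can be composed in an HTTP handler, assuming HMAC-signed tokens and a placeholder shared secret.

```go
package examplehandler

import (
    "fmt"
    "net/http"

    jwt "github.com/dgrijalva/jwt-go"
    "github.com/dgrijalva/jwt-go/request"
)

// keyLookupFunc is an assumption for this sketch: HMAC tokens signed with a shared secret.
var keyLookupFunc = func(t *jwt.Token) (interface{}, error) {
    return []byte("my-shared-secret"), nil // placeholder secret
}

func handler(w http.ResponseWriter, r *http.Request) {
    // Look in the Authorization header first, then fall back to an access_token argument.
    extractor := request.MultiExtractor{
        request.AuthorizationHeaderExtractor, // strips the "Bearer " prefix
        request.ArgumentExtractor{"access_token"},
    }

    token, err := request.ParseFromRequest(r, extractor, keyLookupFunc)
    if err != nil || !token.Valid {
        http.Error(w, "unauthorized", http.StatusUnauthorized)
        return
    }

    claims := token.Claims.(jwt.MapClaims)
    fmt.Fprintf(w, "hello %v", claims["user"])
}
```
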
### RSA signing methods no longer accept `[]byte` keys

Due to a [critical vulnerability](https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/), we've decided the convenience of accepting `[]byte` instead of `rsa.PublicKey` or `rsa.PrivateKey` isn't worth the risk of misuse.

To replace this behavior, we've added two helper methods: `ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error)` and `ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error)`. These are just simple helpers for unpacking PEM encoded PKCS1 and PKCS8 keys. If your keys are encoded any other way, all you need to do is convert them to the `crypto/rsa` package's types.

```go
func keyLookupFunc(token *jwt.Token) (interface{}, error) {
    // Don't forget to validate the alg is what you expect:
    if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {
        return nil, fmt.Errorf("Unexpected signing method: %v", token.Header["alg"])
    }

    // Look up key
    key, err := lookupPublicKey(token.Header["kid"])
    if err != nil {
        return nil, err
    }

    // Unpack key from PEM encoded PKCS8
    return jwt.ParseRSAPublicKeyFromPEM(key)
}
```
100 vendor/github.com/dgrijalva/jwt-go/README.md generated vendored Normal file
@@ -0,0 +1,100 @@
# jwt-go

[Build Status](https://travis-ci.org/dgrijalva/jwt-go)
[GoDoc](https://godoc.org/github.com/dgrijalva/jwt-go)

A [go](http://www.golang.org) (or 'golang' for search engine friendliness) implementation of [JSON Web Tokens](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html).

**NEW VERSION COMING:** There have been a lot of improvements suggested since version 3.0.0, released in 2016. I'm now working on cutting two different releases: 3.2.0 will contain any non-breaking changes or enhancements; 4.0.0 will follow shortly and will include breaking changes. See the 4.0.0 milestone to get an idea of what's coming. If you have other ideas, or would like to participate in 4.0.0, now's the time. If you depend on this library and don't want to be interrupted, I recommend you use your dependency management tool to pin to version 3.

**SECURITY NOTICE:** Some older versions of Go have a security issue in the crypto/elliptic package. The recommendation is to upgrade to at least 1.8.3. See issue #216 for more detail.

**SECURITY NOTICE:** It's important that you [validate the `alg` presented is what you expect](https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/). This library attempts to make it easy to do the right thing by requiring key types to match the expected alg, but you should take the extra step to verify it in your usage. See the examples provided.

## What the heck is a JWT?

JWT.io has [a great introduction](https://jwt.io/introduction) to JSON Web Tokens.

In short, it's a signed JSON object that does something useful (for example, authentication). It's commonly used for `Bearer` tokens in OAuth 2. A token is made of three parts, separated by `.`'s. The first two parts are JSON objects that have been [base64url](http://tools.ietf.org/html/rfc4648) encoded. The last part is the signature, encoded the same way.

The first part is called the header. It contains the necessary information for verifying the last part, the signature. For example, it records which signing method and which key were used.

The part in the middle is the interesting bit. It's called the Claims and contains the actual stuff you care about. Refer to [the RFC](http://self-issued.info/docs/draft-jones-json-web-token.html) for information about reserved keys and the proper way to add your own.
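
As a quick illustration (not in the original README), the first two segments can be inspected with nothing but the standard library; the sample token below is a placeholder with no real signature:

```go
package main

import (
    "encoding/base64"
    "fmt"
    "strings"
)

func main() {
    // Placeholder token: header.claims.signature, each part base64url encoded.
    token := "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiYm9iIn0.signature"

    parts := strings.Split(token, ".")
    header, _ := base64.RawURLEncoding.DecodeString(parts[0]) // JSON: {"alg":"HS256","typ":"JWT"}
    claims, _ := base64.RawURLEncoding.DecodeString(parts[1]) // JSON: {"user":"bob"}

    fmt.Println(string(header))
    fmt.Println(string(claims))
}
```
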
## What's in the box?

This library supports the parsing and verification as well as the generation and signing of JWTs. Currently supported signing algorithms are HMAC SHA, RSA, RSA-PSS, and ECDSA, though hooks are present for adding your own.

## Examples

See [the project documentation](https://godoc.org/github.com/dgrijalva/jwt-go) for examples of usage:

* [Simple example of parsing and validating a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-Parse--Hmac)
* [Simple example of building and signing a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-New--Hmac)
* [Directory of Examples](https://godoc.org/github.com/dgrijalva/jwt-go#pkg-examples)

## Extensions

This library publishes all the necessary components for adding your own signing methods. Simply implement the `SigningMethod` interface and register a factory method using `RegisterSigningMethod`; a minimal sketch follows below.

Here's an example of an extension that integrates with the Google App Engine signing tools: https://github.com/someone1/gcp-jwt-go
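
The following is an illustrative sketch, not from the original README and not a real extension: it implements the `SigningMethod` interface (`Alg`, `Sign`, `Verify`, as declared in this commit's source files) with a plain HMAC-SHA256 body and registers it under a made-up `DEMO256` alg. A real extension would put hardware-key or KMS logic in `Sign`/`Verify`.

```go
package myjwt

import (
    "crypto/hmac"
    "crypto/sha256"

    jwt "github.com/dgrijalva/jwt-go"
)

// SigningMethodDemo is an illustrative HMAC-SHA256 variant registered as "DEMO256".
type SigningMethodDemo struct{}

func (m *SigningMethodDemo) Alg() string { return "DEMO256" }

func (m *SigningMethodDemo) Sign(signingString string, key interface{}) (string, error) {
    secret, ok := key.([]byte)
    if !ok {
        return "", jwt.ErrInvalidKeyType
    }
    h := hmac.New(sha256.New, secret)
    h.Write([]byte(signingString))
    return jwt.EncodeSegment(h.Sum(nil)), nil
}

func (m *SigningMethodDemo) Verify(signingString, signature string, key interface{}) error {
    // Recompute the encoded signature and compare in constant time.
    expected, err := m.Sign(signingString, key)
    if err != nil {
        return err
    }
    if !hmac.Equal([]byte(expected), []byte(signature)) {
        return jwt.ErrSignatureInvalid
    }
    return nil
}

func init() {
    // Register a factory so parsers can resolve "alg": "DEMO256".
    jwt.RegisterSigningMethod("DEMO256", func() jwt.SigningMethod {
        return &SigningMethodDemo{}
    })
}
```
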
## Compliance

This library was last reviewed to comply with [RFC 7519](http://www.rfc-editor.org/info/rfc7519), dated May 2015, with a few notable differences:

* In order to protect against accidental use of [Unsecured JWTs](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#UnsecuredJWT), tokens using `alg=none` will only be accepted if the constant `jwt.UnsafeAllowNoneSignatureType` is provided as the key.

## Project Status & Versioning

This library is considered production ready. Feedback and feature requests are appreciated. The API should be considered stable. There should be very few backwards-incompatible changes outside of major version updates (and only with good reason).

This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull requests will land on `master`. Periodically, versions will be tagged from `master`. You can find all the releases on [the project releases page](https://github.com/dgrijalva/jwt-go/releases).

While we try to make it obvious when we make breaking changes, there isn't a great mechanism for pushing announcements out to users. You may want to import this alternative package instead: `gopkg.in/dgrijalva/jwt-go.v3`. It will do the right thing with respect to semantic versioning.

**BREAKING CHANGES:**

* Version 3.0.0 includes _a lot_ of changes from the 2.x line, including a few that break the API. We've tried to break as few things as possible, so there should just be a few type signature changes. A full list of breaking changes is available in `VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating your code.

## Usage Tips

### Signing vs Encryption

A token is simply a JSON object that is signed by its author. This tells you exactly two things about the data:

* The author of the token was in possession of the signing secret
* The data has not been modified since it was signed

It's important to know that JWT does not provide encryption, which means anyone who has access to the token can read its contents. If you need to protect (encrypt) the data, there is a companion spec, `JWE`, that provides this functionality. JWE is currently outside the scope of this library.

### Choosing a Signing Method

There are several signing methods available, and you should probably take the time to learn about the various options before choosing one. The principal design decision is most likely going to be symmetric vs asymmetric.

Symmetric signing methods, such as HMAC SHA, use only a single secret. This is probably the simplest signing method to use since any `[]byte` can be used as a valid secret. They are also slightly faster computationally, though this rarely matters. Symmetric signing methods work best when both producers and consumers of tokens are trusted, or even the same system. Since the same secret is used to both sign and validate tokens, you can't easily distribute the key for validation.

Asymmetric signing methods, such as RSA, use different keys for signing and verifying tokens. This makes it possible to produce tokens with a private key and allow any consumer to access the public key for verification.
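
To make the asymmetric flow concrete, here's a small sketch (not from the original README) of a verification-only consumer that holds just the public key; the PEM file path is a placeholder:

```go
package example

import (
    "io/ioutil"

    jwt "github.com/dgrijalva/jwt-go"
)

// verifyWithPublicKey parses a token using only the RSA public key, so the
// signing (private) key never has to leave the token producer.
func verifyWithPublicKey(tokenString string) (*jwt.Token, error) {
    pemBytes, err := ioutil.ReadFile("keys/sample_key.pub") // placeholder path
    if err != nil {
        return nil, err
    }
    publicKey, err := jwt.ParseRSAPublicKeyFromPEM(pemBytes)
    if err != nil {
        return nil, err
    }
    return jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
        // Validate the alg before handing back the key.
        if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
            return nil, jwt.ErrInvalidKeyType
        }
        return publicKey, nil
    })
}
```
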
### Signing Methods and Key Types

Each signing method expects a different object type for its signing keys. See the package documentation for details. Here are the most common ones (a short signing/parsing sketch follows this list):

* The [HMAC signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodHMAC) (`HS256`, `HS384`, `HS512`) expects `[]byte` values for signing and validation
* The [RSA signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodRSA) (`RS256`, `RS384`, `RS512`) expects `*rsa.PrivateKey` for signing and `*rsa.PublicKey` for validation
* The [ECDSA signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodECDSA) (`ES256`, `ES384`, `ES512`) expects `*ecdsa.PrivateKey` for signing and `*ecdsa.PublicKey` for validation
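
A minimal end-to-end sketch (not from the original README), assuming HMAC SHA signing with a hypothetical shared secret; it uses `NewWithClaims` and `SignedString` from the main package:

```go
package main

import (
    "fmt"
    "time"

    jwt "github.com/dgrijalva/jwt-go"
)

func main() {
    secret := []byte("a-hypothetical-shared-secret") // placeholder; keep real secrets out of source

    // Sign: the HMAC methods take a []byte key.
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
        "user": "bob",
        "exp":  time.Now().Add(time.Hour).Unix(),
    })
    signed, err := token.SignedString(secret)
    if err != nil {
        panic(err)
    }

    // Parse: verify the alg before handing back the key.
    parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
        if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
            return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
        }
        return secret, nil
    })
    if err == nil && parsed.Valid {
        fmt.Println("user:", parsed.Claims.(jwt.MapClaims)["user"])
    }
}
```
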
### JWT and OAuth

It's worth mentioning that OAuth and JWT are not the same thing. A JWT is simply a signed JSON object. It can be used anywhere such a thing is useful. There is some confusion, though, because JWT is the most common type of bearer token used in OAuth2 authentication.

Without going too far down the rabbit hole, here's a description of the interaction of these technologies:

* OAuth is a protocol for allowing an identity provider to be separate from the service a user is logging in to. For example, whenever you use Facebook to log into a different service (Yelp, Spotify, etc.), you are using OAuth.
* OAuth defines several options for passing around authentication data. One popular method is called a "bearer token". A bearer token is simply a string that _should_ only be held by an authenticated user. Thus, simply presenting this token proves your identity. You can probably derive from here why a JWT might make a good bearer token.
* Because bearer tokens are used for authentication, it's important they're kept secret. This is why transactions that use bearer tokens typically happen over SSL.

## More

Documentation can be found [on godoc.org](http://godoc.org/github.com/dgrijalva/jwt-go).

The command line utility included in this project (cmd/jwt) provides a straightforward example of token creation and parsing, as well as a useful tool for debugging your own integration. You'll also find several implementation examples in the documentation.
118 vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md generated vendored Normal file
@@ -0,0 +1,118 @@
## `jwt-go` Version History

#### 3.2.0

* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation
* HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate
* Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. The initial set includes `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before.
* Deprecated `ParseFromRequestWithClaims` to simplify the API in the future.
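
A sketch of splitting parsing from validation with `ParseUnverified` (signature as declared in `parser.go` in this commit); this is not from the original changelog, and it should only be used when the signature has already been checked elsewhere in the stack:

```go
package example

import jwt "github.com/dgrijalva/jwt-go"

// peekAlg reads the token header without verifying the signature.
func peekAlg(tokenString string) (string, error) {
    token, _, err := new(jwt.Parser).ParseUnverified(tokenString, jwt.MapClaims{})
    if err != nil {
        return "", err
    }
    alg, _ := token.Header["alg"].(string)
    return alg, nil
}
```
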
#### 3.1.0

* Improvements to the `jwt` command line tool
* Added `SkipClaimsValidation` option to `Parser`
* Documentation updates

#### 3.0.0

* **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code
  * Dropped support for `[]byte` keys when using RSA signing methods. This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods.
  * `ParseFromRequest` has been moved to the `request` subpackage and its usage has changed
  * The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias for `map[string]interface{}`. This makes it possible to use a custom type when decoding claims.
* Other Additions and Changes
  * Added the `Claims` interface type to allow users to decode the claims into a custom type
  * Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into.
  * Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage
  * Added `ParseFromRequestWithClaims`, which is the `FromRequest` equivalent of `ParseWithClaims`
  * Added a new interface type, `Extractor`, which is used for extracting JWT strings from http requests. Used with `ParseFromRequest` and `ParseFromRequestWithClaims`.
  * Added several new, more specific validation errors to the error type bitmask
  * Moved examples from the README to executable example files
  * The signing method registry is now thread-safe
  * Added a new property to `ValidationError`, which contains the raw error returned by calls made during parse/verify (such as those returned by the keyfunc or the JSON parser)

#### 2.7.0

This will likely be the last backwards compatible release before 3.0.0, excluding essential bug fixes.

* Added a new option, `-show`, to the `jwt` command that will just output the decoded token without verifying
* Error text for expired tokens now includes how long the token has been expired
* Fixed an incorrect error returned from `ParseRSAPublicKeyFromPEM`
* Documentation updates

#### 2.6.0

* Exposed the inner error within `ValidationError`
* Fixed validation errors when using the `UseJSONNumber` flag
* Added several unit tests

#### 2.5.0

* Added support for the signing method "none". You shouldn't use this. The API tries to make this clear.
* Updated/fixed some documentation
* Added a more helpful error message when trying to parse tokens that begin with `BEARER `

#### 2.4.0

* Added a new type, `Parser`, to allow for configuration of various parsing parameters (see the sketch after this list)
  * You can now specify a list of valid signing methods. Anything outside this set will be rejected.
  * You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON
* Added support for [Travis CI](https://travis-ci.org/dgrijalva/jwt-go)
* Fixed some bugs with ECDSA parsing
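
A sketch of configuring the `Parser` type added in 2.4.0 (fields as declared in `parser.go` in this commit); not from the original changelog, and `tokenString`/`keyFunc` are assumed to be supplied by the caller:

```go
package example

import jwt "github.com/dgrijalva/jwt-go"

// parseStrict restricts tokens to HS256 and decodes numeric claims as json.Number.
func parseStrict(tokenString string, keyFunc jwt.Keyfunc) (*jwt.Token, error) {
    parser := &jwt.Parser{
        ValidMethods:  []string{"HS256"}, // reject any other alg
        UseJSONNumber: true,              // avoid float64 precision loss on numeric claims
    }
    return parser.ParseWithClaims(tokenString, jwt.MapClaims{}, keyFunc)
}
```
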
#### 2.3.0

* Added support for ECDSA signing methods
* Added support for RSA-PSS signing methods (requires Go 1.4)

#### 2.2.0

* Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. The result will now be the parsed token and an error, instead of a panic.

#### 2.1.0

Backwards compatible API change that was missed in 2.0.0.

* The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte`

#### 2.0.0

There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change.

The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibility has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `*rsa.PublicKey` and `*rsa.PrivateKey`.

It is likely the only integration change required here will be to change `func(t *jwt.Token) ([]byte, error)` to `func(t *jwt.Token) (interface{}, error)` when calling `Parse`.

* **Compatibility Breaking Changes**
  * `SigningMethodHS256` is now `*SigningMethodHMAC` instead of `type struct`
  * `SigningMethodRS256` is now `*SigningMethodRSA` instead of `type struct`
  * `KeyFunc` now returns `interface{}` instead of `[]byte`
  * `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key
  * `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key
* Renamed type `SigningMethodHS256` to `SigningMethodHMAC`. Specific sizes are now just instances of this type.
  * Added public package global `SigningMethodHS256`
  * Added public package global `SigningMethodHS384`
  * Added public package global `SigningMethodHS512`
* Renamed type `SigningMethodRS256` to `SigningMethodRSA`. Specific sizes are now just instances of this type.
  * Added public package global `SigningMethodRS256`
  * Added public package global `SigningMethodRS384`
  * Added public package global `SigningMethodRS512`
* Moved the sample private key for HMAC tests from an inline value to a file on disk. The value is unchanged.
* Refactored the RSA implementation to be easier to read
* Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM`

#### 1.0.2

* Fixed a bug in parsing public keys from certificates
* Added more tests around the parsing of keys for RS256
* Code refactoring in the RS256 implementation. No functional changes.

#### 1.0.1

* Fixed a panic if the RS256 signing method was passed an invalid key

#### 1.0.0

* First versioned release
* API stabilized
* Supports creating, signing, parsing, and validating JWT tokens
* Supports RS256 and HS256 signing methods
134 vendor/github.com/dgrijalva/jwt-go/claims.go generated vendored Normal file
@@ -0,0 +1,134 @@
package jwt

import (
    "crypto/subtle"
    "fmt"
    "time"
)

// For a type to be a Claims object, it must just have a Valid method that determines
// if the token is invalid for any supported reason
type Claims interface {
    Valid() error
}

// Structured version of Claims Section, as referenced at
// https://tools.ietf.org/html/rfc7519#section-4.1
// See examples for how to use this with your own claim types
type StandardClaims struct {
    Audience  string `json:"aud,omitempty"`
    ExpiresAt int64  `json:"exp,omitempty"`
    Id        string `json:"jti,omitempty"`
    IssuedAt  int64  `json:"iat,omitempty"`
    Issuer    string `json:"iss,omitempty"`
    NotBefore int64  `json:"nbf,omitempty"`
    Subject   string `json:"sub,omitempty"`
}

// Validates time based claims "exp, iat, nbf".
// There is no accounting for clock skew.
// As well, if any of the above claims are not in the token, it will still
// be considered a valid claim.
func (c StandardClaims) Valid() error {
    vErr := new(ValidationError)
    now := TimeFunc().Unix()

    // The claims below are optional, by default, so if they are set to the
    // default value in Go, let's not fail the verification for them.
    if c.VerifyExpiresAt(now, false) == false {
        delta := time.Unix(now, 0).Sub(time.Unix(c.ExpiresAt, 0))
        vErr.Inner = fmt.Errorf("token is expired by %v", delta)
        vErr.Errors |= ValidationErrorExpired
    }

    if c.VerifyIssuedAt(now, false) == false {
        vErr.Inner = fmt.Errorf("Token used before issued")
        vErr.Errors |= ValidationErrorIssuedAt
    }

    if c.VerifyNotBefore(now, false) == false {
        vErr.Inner = fmt.Errorf("token is not valid yet")
        vErr.Errors |= ValidationErrorNotValidYet
    }

    if vErr.valid() {
        return nil
    }

    return vErr
}

// Compares the aud claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (c *StandardClaims) VerifyAudience(cmp string, req bool) bool {
    return verifyAud(c.Audience, cmp, req)
}

// Compares the exp claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (c *StandardClaims) VerifyExpiresAt(cmp int64, req bool) bool {
    return verifyExp(c.ExpiresAt, cmp, req)
}

// Compares the iat claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (c *StandardClaims) VerifyIssuedAt(cmp int64, req bool) bool {
    return verifyIat(c.IssuedAt, cmp, req)
}

// Compares the iss claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (c *StandardClaims) VerifyIssuer(cmp string, req bool) bool {
    return verifyIss(c.Issuer, cmp, req)
}

// Compares the nbf claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (c *StandardClaims) VerifyNotBefore(cmp int64, req bool) bool {
    return verifyNbf(c.NotBefore, cmp, req)
}

// ----- helpers

func verifyAud(aud string, cmp string, required bool) bool {
    if aud == "" {
        return !required
    }
    if subtle.ConstantTimeCompare([]byte(aud), []byte(cmp)) != 0 {
        return true
    } else {
        return false
    }
}

func verifyExp(exp int64, now int64, required bool) bool {
    if exp == 0 {
        return !required
    }
    return now <= exp
}

func verifyIat(iat int64, now int64, required bool) bool {
    if iat == 0 {
        return !required
    }
    return now >= iat
}

func verifyIss(iss string, cmp string, required bool) bool {
    if iss == "" {
        return !required
    }
    if subtle.ConstantTimeCompare([]byte(iss), []byte(cmp)) != 0 {
        return true
    } else {
        return false
    }
}

func verifyNbf(nbf int64, now int64, required bool) bool {
    if nbf == 0 {
        return !required
    }
    return now >= nbf
}
4 vendor/github.com/dgrijalva/jwt-go/doc.go generated vendored Normal file
@@ -0,0 +1,4 @@
// Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html
//
// See README.md for more info.
package jwt
148 vendor/github.com/dgrijalva/jwt-go/ecdsa.go generated vendored Normal file
@@ -0,0 +1,148 @@
package jwt

import (
    "crypto"
    "crypto/ecdsa"
    "crypto/rand"
    "errors"
    "math/big"
)

var (
    // Sadly this is missing from crypto/ecdsa compared to crypto/rsa
    ErrECDSAVerification = errors.New("crypto/ecdsa: verification error")
)

// Implements the ECDSA family of signing methods.
// Expects *ecdsa.PrivateKey for signing and *ecdsa.PublicKey for verification
type SigningMethodECDSA struct {
    Name      string
    Hash      crypto.Hash
    KeySize   int
    CurveBits int
}

// Specific instances for EC256 and company
var (
    SigningMethodES256 *SigningMethodECDSA
    SigningMethodES384 *SigningMethodECDSA
    SigningMethodES512 *SigningMethodECDSA
)

func init() {
    // ES256
    SigningMethodES256 = &SigningMethodECDSA{"ES256", crypto.SHA256, 32, 256}
    RegisterSigningMethod(SigningMethodES256.Alg(), func() SigningMethod {
        return SigningMethodES256
    })

    // ES384
    SigningMethodES384 = &SigningMethodECDSA{"ES384", crypto.SHA384, 48, 384}
    RegisterSigningMethod(SigningMethodES384.Alg(), func() SigningMethod {
        return SigningMethodES384
    })

    // ES512
    SigningMethodES512 = &SigningMethodECDSA{"ES512", crypto.SHA512, 66, 521}
    RegisterSigningMethod(SigningMethodES512.Alg(), func() SigningMethod {
        return SigningMethodES512
    })
}

func (m *SigningMethodECDSA) Alg() string {
    return m.Name
}

// Implements the Verify method from SigningMethod
// For this verify method, key must be an *ecdsa.PublicKey struct
func (m *SigningMethodECDSA) Verify(signingString, signature string, key interface{}) error {
    var err error

    // Decode the signature
    var sig []byte
    if sig, err = DecodeSegment(signature); err != nil {
        return err
    }

    // Get the key
    var ecdsaKey *ecdsa.PublicKey
    switch k := key.(type) {
    case *ecdsa.PublicKey:
        ecdsaKey = k
    default:
        return ErrInvalidKeyType
    }

    if len(sig) != 2*m.KeySize {
        return ErrECDSAVerification
    }

    r := big.NewInt(0).SetBytes(sig[:m.KeySize])
    s := big.NewInt(0).SetBytes(sig[m.KeySize:])

    // Create hasher
    if !m.Hash.Available() {
        return ErrHashUnavailable
    }
    hasher := m.Hash.New()
    hasher.Write([]byte(signingString))

    // Verify the signature
    if verifystatus := ecdsa.Verify(ecdsaKey, hasher.Sum(nil), r, s); verifystatus == true {
        return nil
    } else {
        return ErrECDSAVerification
    }
}

// Implements the Sign method from SigningMethod
// For this signing method, key must be an *ecdsa.PrivateKey struct
func (m *SigningMethodECDSA) Sign(signingString string, key interface{}) (string, error) {
    // Get the key
    var ecdsaKey *ecdsa.PrivateKey
    switch k := key.(type) {
    case *ecdsa.PrivateKey:
        ecdsaKey = k
    default:
        return "", ErrInvalidKeyType
    }

    // Create the hasher
    if !m.Hash.Available() {
        return "", ErrHashUnavailable
    }

    hasher := m.Hash.New()
    hasher.Write([]byte(signingString))

    // Sign the string and return r, s
    if r, s, err := ecdsa.Sign(rand.Reader, ecdsaKey, hasher.Sum(nil)); err == nil {
        curveBits := ecdsaKey.Curve.Params().BitSize

        if m.CurveBits != curveBits {
            return "", ErrInvalidKey
        }

        keyBytes := curveBits / 8
        if curveBits%8 > 0 {
            keyBytes += 1
        }

        // We serialize the outputs (r and s) into big-endian byte arrays and pad
        // them with zeros on the left to make sure the sizes work out. Both arrays
        // must be keyBytes long, and the output must be 2*keyBytes long.
        rBytes := r.Bytes()
        rBytesPadded := make([]byte, keyBytes)
        copy(rBytesPadded[keyBytes-len(rBytes):], rBytes)

        sBytes := s.Bytes()
        sBytesPadded := make([]byte, keyBytes)
        copy(sBytesPadded[keyBytes-len(sBytes):], sBytes)

        out := append(rBytesPadded, sBytesPadded...)

        return EncodeSegment(out), nil
    } else {
        return "", err
    }
}
67 vendor/github.com/dgrijalva/jwt-go/ecdsa_utils.go generated vendored Normal file
@@ -0,0 +1,67 @@
package jwt

import (
    "crypto/ecdsa"
    "crypto/x509"
    "encoding/pem"
    "errors"
)

var (
    ErrNotECPublicKey  = errors.New("Key is not a valid ECDSA public key")
    ErrNotECPrivateKey = errors.New("Key is not a valid ECDSA private key")
)

// Parse PEM encoded Elliptic Curve Private Key Structure
func ParseECPrivateKeyFromPEM(key []byte) (*ecdsa.PrivateKey, error) {
    var err error

    // Parse PEM block
    var block *pem.Block
    if block, _ = pem.Decode(key); block == nil {
        return nil, ErrKeyMustBePEMEncoded
    }

    // Parse the key
    var parsedKey interface{}
    if parsedKey, err = x509.ParseECPrivateKey(block.Bytes); err != nil {
        return nil, err
    }

    var pkey *ecdsa.PrivateKey
    var ok bool
    if pkey, ok = parsedKey.(*ecdsa.PrivateKey); !ok {
        return nil, ErrNotECPrivateKey
    }

    return pkey, nil
}

// Parse PEM encoded PKCS1 or PKCS8 public key
func ParseECPublicKeyFromPEM(key []byte) (*ecdsa.PublicKey, error) {
    var err error

    // Parse PEM block
    var block *pem.Block
    if block, _ = pem.Decode(key); block == nil {
        return nil, ErrKeyMustBePEMEncoded
    }

    // Parse the key
    var parsedKey interface{}
    if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil {
        if cert, err := x509.ParseCertificate(block.Bytes); err == nil {
            parsedKey = cert.PublicKey
        } else {
            return nil, err
        }
    }

    var pkey *ecdsa.PublicKey
    var ok bool
    if pkey, ok = parsedKey.(*ecdsa.PublicKey); !ok {
        return nil, ErrNotECPublicKey
    }

    return pkey, nil
}
59 vendor/github.com/dgrijalva/jwt-go/errors.go generated vendored Normal file
@@ -0,0 +1,59 @@
package jwt

import (
    "errors"
)

// Error constants
var (
    ErrInvalidKey      = errors.New("key is invalid")
    ErrInvalidKeyType  = errors.New("key is of invalid type")
    ErrHashUnavailable = errors.New("the requested hash function is unavailable")
)

// The errors that might occur when parsing and validating a token
const (
    ValidationErrorMalformed        uint32 = 1 << iota // Token is malformed
    ValidationErrorUnverifiable                        // Token could not be verified because of signing problems
    ValidationErrorSignatureInvalid                    // Signature validation failed

    // Standard Claim validation errors
    ValidationErrorAudience      // AUD validation failed
    ValidationErrorExpired       // EXP validation failed
    ValidationErrorIssuedAt      // IAT validation failed
    ValidationErrorIssuer        // ISS validation failed
    ValidationErrorNotValidYet   // NBF validation failed
    ValidationErrorId            // JTI validation failed
    ValidationErrorClaimsInvalid // Generic claims validation error
)

// Helper for constructing a ValidationError with a string error message
func NewValidationError(errorText string, errorFlags uint32) *ValidationError {
    return &ValidationError{
        text:   errorText,
        Errors: errorFlags,
    }
}

// The error from Parse if token is not valid
type ValidationError struct {
    Inner  error  // stores the error returned by external dependencies, i.e.: KeyFunc
    Errors uint32 // bitfield. see ValidationError... constants
    text   string // errors that do not have a valid error just have text
}

// Validation error is an error type
func (e ValidationError) Error() string {
    if e.Inner != nil {
        return e.Inner.Error()
    } else if e.text != "" {
        return e.text
    } else {
        return "token is invalid"
    }
}

// No errors
func (e *ValidationError) valid() bool {
    return e.Errors == 0
}
95 vendor/github.com/dgrijalva/jwt-go/hmac.go generated vendored Normal file
@@ -0,0 +1,95 @@
package jwt

import (
    "crypto"
    "crypto/hmac"
    "errors"
)

// Implements the HMAC-SHA family of signing methods.
// Expects key type of []byte for both signing and validation
type SigningMethodHMAC struct {
    Name string
    Hash crypto.Hash
}

// Specific instances for HS256 and company
var (
    SigningMethodHS256  *SigningMethodHMAC
    SigningMethodHS384  *SigningMethodHMAC
    SigningMethodHS512  *SigningMethodHMAC
    ErrSignatureInvalid = errors.New("signature is invalid")
)

func init() {
    // HS256
    SigningMethodHS256 = &SigningMethodHMAC{"HS256", crypto.SHA256}
    RegisterSigningMethod(SigningMethodHS256.Alg(), func() SigningMethod {
        return SigningMethodHS256
    })

    // HS384
    SigningMethodHS384 = &SigningMethodHMAC{"HS384", crypto.SHA384}
    RegisterSigningMethod(SigningMethodHS384.Alg(), func() SigningMethod {
        return SigningMethodHS384
    })

    // HS512
    SigningMethodHS512 = &SigningMethodHMAC{"HS512", crypto.SHA512}
    RegisterSigningMethod(SigningMethodHS512.Alg(), func() SigningMethod {
        return SigningMethodHS512
    })
}

func (m *SigningMethodHMAC) Alg() string {
    return m.Name
}

// Verify the signature of HSXXX tokens. Returns nil if the signature is valid.
func (m *SigningMethodHMAC) Verify(signingString, signature string, key interface{}) error {
    // Verify the key is the right type
    keyBytes, ok := key.([]byte)
    if !ok {
        return ErrInvalidKeyType
    }

    // Decode signature, for comparison
    sig, err := DecodeSegment(signature)
    if err != nil {
        return err
    }

    // Can we use the specified hashing method?
    if !m.Hash.Available() {
        return ErrHashUnavailable
    }

    // This signing method is symmetric, so we validate the signature
    // by reproducing the signature from the signing string and key, then
    // comparing that against the provided signature.
    hasher := hmac.New(m.Hash.New, keyBytes)
    hasher.Write([]byte(signingString))
    if !hmac.Equal(sig, hasher.Sum(nil)) {
        return ErrSignatureInvalid
    }

    // No validation errors. Signature is good.
    return nil
}

// Implements the Sign method from SigningMethod for this signing method.
// Key must be []byte
func (m *SigningMethodHMAC) Sign(signingString string, key interface{}) (string, error) {
    if keyBytes, ok := key.([]byte); ok {
        if !m.Hash.Available() {
            return "", ErrHashUnavailable
        }

        hasher := hmac.New(m.Hash.New, keyBytes)
        hasher.Write([]byte(signingString))

        return EncodeSegment(hasher.Sum(nil)), nil
    }

    return "", ErrInvalidKeyType
}
94 vendor/github.com/dgrijalva/jwt-go/map_claims.go generated vendored Normal file
@@ -0,0 +1,94 @@
package jwt

import (
    "encoding/json"
    "errors"
    // "fmt"
)

// Claims type that uses the map[string]interface{} for JSON decoding
// This is the default claims type if you don't supply one
type MapClaims map[string]interface{}

// Compares the aud claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (m MapClaims) VerifyAudience(cmp string, req bool) bool {
    aud, _ := m["aud"].(string)
    return verifyAud(aud, cmp, req)
}

// Compares the exp claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (m MapClaims) VerifyExpiresAt(cmp int64, req bool) bool {
    switch exp := m["exp"].(type) {
    case float64:
        return verifyExp(int64(exp), cmp, req)
    case json.Number:
        v, _ := exp.Int64()
        return verifyExp(v, cmp, req)
    }
    return req == false
}

// Compares the iat claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (m MapClaims) VerifyIssuedAt(cmp int64, req bool) bool {
    switch iat := m["iat"].(type) {
    case float64:
        return verifyIat(int64(iat), cmp, req)
    case json.Number:
        v, _ := iat.Int64()
        return verifyIat(v, cmp, req)
    }
    return req == false
}

// Compares the iss claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (m MapClaims) VerifyIssuer(cmp string, req bool) bool {
    iss, _ := m["iss"].(string)
    return verifyIss(iss, cmp, req)
}

// Compares the nbf claim against cmp.
// If required is false, this method will return true if the value matches or is unset
func (m MapClaims) VerifyNotBefore(cmp int64, req bool) bool {
    switch nbf := m["nbf"].(type) {
    case float64:
        return verifyNbf(int64(nbf), cmp, req)
    case json.Number:
        v, _ := nbf.Int64()
        return verifyNbf(v, cmp, req)
    }
    return req == false
}

// Validates time based claims "exp, iat, nbf".
// There is no accounting for clock skew.
// As well, if any of the above claims are not in the token, it will still
// be considered a valid claim.
func (m MapClaims) Valid() error {
    vErr := new(ValidationError)
    now := TimeFunc().Unix()

    if m.VerifyExpiresAt(now, false) == false {
        vErr.Inner = errors.New("Token is expired")
        vErr.Errors |= ValidationErrorExpired
    }

    if m.VerifyIssuedAt(now, false) == false {
        vErr.Inner = errors.New("Token used before issued")
        vErr.Errors |= ValidationErrorIssuedAt
    }

    if m.VerifyNotBefore(now, false) == false {
        vErr.Inner = errors.New("Token is not valid yet")
        vErr.Errors |= ValidationErrorNotValidYet
    }

    if vErr.valid() {
        return nil
    }

    return vErr
}
52 vendor/github.com/dgrijalva/jwt-go/none.go generated vendored Normal file
@@ -0,0 +1,52 @@
package jwt

// Implements the none signing method. This is required by the spec
// but you probably should never use it.
var SigningMethodNone *signingMethodNone

const UnsafeAllowNoneSignatureType unsafeNoneMagicConstant = "none signing method allowed"

var NoneSignatureTypeDisallowedError error

type signingMethodNone struct{}
type unsafeNoneMagicConstant string

func init() {
    SigningMethodNone = &signingMethodNone{}
    NoneSignatureTypeDisallowedError = NewValidationError("'none' signature type is not allowed", ValidationErrorSignatureInvalid)

    RegisterSigningMethod(SigningMethodNone.Alg(), func() SigningMethod {
        return SigningMethodNone
    })
}

func (m *signingMethodNone) Alg() string {
    return "none"
}

// Only allow 'none' alg type if UnsafeAllowNoneSignatureType is specified as the key
func (m *signingMethodNone) Verify(signingString, signature string, key interface{}) (err error) {
    // Key must be UnsafeAllowNoneSignatureType to prevent accidentally
    // accepting 'none' signing method
    if _, ok := key.(unsafeNoneMagicConstant); !ok {
        return NoneSignatureTypeDisallowedError
    }
    // If signing method is none, signature must be an empty string
    if signature != "" {
        return NewValidationError(
            "'none' signing method with non-empty signature",
            ValidationErrorSignatureInvalid,
        )
    }

    // Accept 'none' signing method.
    return nil
}

// Only allow 'none' signing if UnsafeAllowNoneSignatureType is specified as the key
func (m *signingMethodNone) Sign(signingString string, key interface{}) (string, error) {
    if _, ok := key.(unsafeNoneMagicConstant); ok {
        return "", nil
    }
    return "", NoneSignatureTypeDisallowedError
}
148 vendor/github.com/dgrijalva/jwt-go/parser.go generated vendored Normal file
@@ -0,0 +1,148 @@
package jwt

import (
    "bytes"
    "encoding/json"
    "fmt"
    "strings"
)

type Parser struct {
    ValidMethods         []string // If populated, only these methods will be considered valid
    UseJSONNumber        bool     // Use JSON Number format in JSON decoder
    SkipClaimsValidation bool     // Skip claims validation during token parsing
}

// Parse, validate, and return a token.
// keyFunc will receive the parsed token and should return the key for validating.
// If everything is kosher, err will be nil
func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) {
    return p.ParseWithClaims(tokenString, MapClaims{}, keyFunc)
}

func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
    token, parts, err := p.ParseUnverified(tokenString, claims)
    if err != nil {
        return token, err
    }

    // Verify signing method is in the required set
    if p.ValidMethods != nil {
        var signingMethodValid = false
        var alg = token.Method.Alg()
        for _, m := range p.ValidMethods {
            if m == alg {
                signingMethodValid = true
                break
            }
        }
        if !signingMethodValid {
            // signing method is not in the listed set
            return token, NewValidationError(fmt.Sprintf("signing method %v is invalid", alg), ValidationErrorSignatureInvalid)
        }
    }

    // Lookup key
    var key interface{}
    if keyFunc == nil {
        // keyFunc was not provided. short circuiting validation
        return token, NewValidationError("no Keyfunc was provided.", ValidationErrorUnverifiable)
    }
    if key, err = keyFunc(token); err != nil {
        // keyFunc returned an error
        if ve, ok := err.(*ValidationError); ok {
            return token, ve
        }
        return token, &ValidationError{Inner: err, Errors: ValidationErrorUnverifiable}
    }

    vErr := &ValidationError{}

    // Validate Claims
    if !p.SkipClaimsValidation {
        if err := token.Claims.Valid(); err != nil {

            // If the Claims Valid returned an error, check if it is a validation error,
            // If it was another error type, create a ValidationError with a generic ClaimsInvalid flag set
            if e, ok := err.(*ValidationError); !ok {
                vErr = &ValidationError{Inner: err, Errors: ValidationErrorClaimsInvalid}
            } else {
                vErr = e
            }
        }
    }

    // Perform validation
    token.Signature = parts[2]
    if err = token.Method.Verify(strings.Join(parts[0:2], "."), token.Signature, key); err != nil {
        vErr.Inner = err
        vErr.Errors |= ValidationErrorSignatureInvalid
    }

    if vErr.valid() {
        token.Valid = true
        return token, nil
    }

    return token, vErr
}

// WARNING: Don't use this method unless you know what you're doing
//
// This method parses the token but doesn't validate the signature. It's only
// ever useful in cases where you know the signature is valid (because it has
// been checked previously in the stack) and you want to extract values from
// it.
func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) {
    parts = strings.Split(tokenString, ".")
    if len(parts) != 3 {
        return nil, parts, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed)
    }

    token = &Token{Raw: tokenString}

    // parse Header
    var headerBytes []byte
    if headerBytes, err = DecodeSegment(parts[0]); err != nil {
        if strings.HasPrefix(strings.ToLower(tokenString), "bearer ") {
            return token, parts, NewValidationError("tokenstring should not contain 'bearer '", ValidationErrorMalformed)
        }
        return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
    }
    if err = json.Unmarshal(headerBytes, &token.Header); err != nil {
        return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
    }

    // parse Claims
    var claimBytes []byte
    token.Claims = claims

    if claimBytes, err = DecodeSegment(parts[1]); err != nil {
        return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
    }
    dec := json.NewDecoder(bytes.NewBuffer(claimBytes))
    if p.UseJSONNumber {
        dec.UseNumber()
    }
    // JSON Decode. Special case for map type to avoid weird pointer behavior
    if c, ok := token.Claims.(MapClaims); ok {
        err = dec.Decode(&c)
    } else {
        err = dec.Decode(&claims)
    }
    // Handle decode error
    if err != nil {
        return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed}
    }

    // Lookup signature method
    if method, ok := token.Header["alg"].(string); ok {
        if token.Method = GetSigningMethod(method); token.Method == nil {
            return token, parts, NewValidationError("signing method (alg) is unavailable.", ValidationErrorUnverifiable)
        }
    } else {
        return token, parts, NewValidationError("signing method (alg) is unspecified.", ValidationErrorUnverifiable)
    }

    return token, parts, nil
}
101 vendor/github.com/dgrijalva/jwt-go/rsa.go generated vendored Normal file
@@ -0,0 +1,101 @@
package jwt

import (
    "crypto"
    "crypto/rand"
    "crypto/rsa"
)

// Implements the RSA family of signing methods.
// Expects *rsa.PrivateKey for signing and *rsa.PublicKey for validation
type SigningMethodRSA struct {
    Name string
    Hash crypto.Hash
}

// Specific instances for RS256 and company
var (
    SigningMethodRS256 *SigningMethodRSA
    SigningMethodRS384 *SigningMethodRSA
    SigningMethodRS512 *SigningMethodRSA
)

func init() {
    // RS256
    SigningMethodRS256 = &SigningMethodRSA{"RS256", crypto.SHA256}
    RegisterSigningMethod(SigningMethodRS256.Alg(), func() SigningMethod {
        return SigningMethodRS256
    })

    // RS384
    SigningMethodRS384 = &SigningMethodRSA{"RS384", crypto.SHA384}
    RegisterSigningMethod(SigningMethodRS384.Alg(), func() SigningMethod {
        return SigningMethodRS384
    })

    // RS512
    SigningMethodRS512 = &SigningMethodRSA{"RS512", crypto.SHA512}
    RegisterSigningMethod(SigningMethodRS512.Alg(), func() SigningMethod {
        return SigningMethodRS512
    })
}

func (m *SigningMethodRSA) Alg() string {
    return m.Name
}

// Implements the Verify method from SigningMethod
// For this verify method, key must be an *rsa.PublicKey structure.
func (m *SigningMethodRSA) Verify(signingString, signature string, key interface{}) error {
    var err error

    // Decode the signature
    var sig []byte
    if sig, err = DecodeSegment(signature); err != nil {
        return err
    }

    var rsaKey *rsa.PublicKey
    var ok bool

    if rsaKey, ok = key.(*rsa.PublicKey); !ok {
        return ErrInvalidKeyType
    }

    // Create hasher
    if !m.Hash.Available() {
        return ErrHashUnavailable
    }
    hasher := m.Hash.New()
    hasher.Write([]byte(signingString))

    // Verify the signature
    return rsa.VerifyPKCS1v15(rsaKey, m.Hash, hasher.Sum(nil), sig)
}

// Implements the Sign method from SigningMethod
// For this signing method, key must be an *rsa.PrivateKey structure.
func (m *SigningMethodRSA) Sign(signingString string, key interface{}) (string, error) {
    var rsaKey *rsa.PrivateKey
    var ok bool

    // Validate type of key
    if rsaKey, ok = key.(*rsa.PrivateKey); !ok {
        return "", ErrInvalidKey
    }

    // Create the hasher
    if !m.Hash.Available() {
        return "", ErrHashUnavailable
    }

    hasher := m.Hash.New()
    hasher.Write([]byte(signingString))

    // Sign the string and return the encoded bytes
    if sigBytes, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil)); err == nil {
        return EncodeSegment(sigBytes), nil
    } else {
        return "", err
    }
}
126 vendor/github.com/dgrijalva/jwt-go/rsa_pss.go generated vendored Normal file
@@ -0,0 +1,126 @@
// +build go1.4

package jwt

import (
    "crypto"
    "crypto/rand"
    "crypto/rsa"
)

// Implements the RSA-PSS family of signing methods.
type SigningMethodRSAPSS struct {
    *SigningMethodRSA
    Options *rsa.PSSOptions
}

// Specific instances for RS/PS and company
var (
    SigningMethodPS256 *SigningMethodRSAPSS
    SigningMethodPS384 *SigningMethodRSAPSS
    SigningMethodPS512 *SigningMethodRSAPSS
)

func init() {
    // PS256
    SigningMethodPS256 = &SigningMethodRSAPSS{
        &SigningMethodRSA{
            Name: "PS256",
            Hash: crypto.SHA256,
        },
        &rsa.PSSOptions{
            SaltLength: rsa.PSSSaltLengthAuto,
            Hash:       crypto.SHA256,
        },
    }
    RegisterSigningMethod(SigningMethodPS256.Alg(), func() SigningMethod {
        return SigningMethodPS256
    })

    // PS384
    SigningMethodPS384 = &SigningMethodRSAPSS{
        &SigningMethodRSA{
            Name: "PS384",
            Hash: crypto.SHA384,
        },
        &rsa.PSSOptions{
            SaltLength: rsa.PSSSaltLengthAuto,
            Hash:       crypto.SHA384,
        },
    }
    RegisterSigningMethod(SigningMethodPS384.Alg(), func() SigningMethod {
        return SigningMethodPS384
    })

    // PS512
    SigningMethodPS512 = &SigningMethodRSAPSS{
        &SigningMethodRSA{
            Name: "PS512",
            Hash: crypto.SHA512,
        },
        &rsa.PSSOptions{
            SaltLength: rsa.PSSSaltLengthAuto,
            Hash:       crypto.SHA512,
        },
    }
    RegisterSigningMethod(SigningMethodPS512.Alg(), func() SigningMethod {
        return SigningMethodPS512
    })
}

// Implements the Verify method from SigningMethod
// For this verify method, key must be an *rsa.PublicKey struct
func (m *SigningMethodRSAPSS) Verify(signingString, signature string, key interface{}) error {
    var err error

    // Decode the signature
    var sig []byte
    if sig, err = DecodeSegment(signature); err != nil {
        return err
    }

    var rsaKey *rsa.PublicKey
    switch k := key.(type) {
    case *rsa.PublicKey:
        rsaKey = k
    default:
        return ErrInvalidKey
    }

    // Create hasher
    if !m.Hash.Available() {
        return ErrHashUnavailable
    }
    hasher := m.Hash.New()
    hasher.Write([]byte(signingString))

    return rsa.VerifyPSS(rsaKey, m.Hash, hasher.Sum(nil), sig, m.Options)
}

// Implements the Sign method from SigningMethod
// For this signing method, key must be an *rsa.PrivateKey struct
func (m *SigningMethodRSAPSS) Sign(signingString string, key interface{}) (string, error) {
    var rsaKey *rsa.PrivateKey

    switch k := key.(type) {
    case *rsa.PrivateKey:
        rsaKey = k
    default:
        return "", ErrInvalidKeyType
    }

    // Create the hasher
    if !m.Hash.Available() {
        return "", ErrHashUnavailable
    }

    hasher := m.Hash.New()
    hasher.Write([]byte(signingString))

    // Sign the string and return the encoded bytes
    if sigBytes, err := rsa.SignPSS(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil), m.Options); err == nil {
        return EncodeSegment(sigBytes), nil
    } else {
        return "", err
    }
}
101
vendor/github.com/dgrijalva/jwt-go/rsa_utils.go
generated
vendored
Normal file
@@ -0,0 +1,101 @@
package jwt

import (
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

var (
	ErrKeyMustBePEMEncoded = errors.New("Invalid Key: Key must be PEM encoded PKCS1 or PKCS8 private key")
	ErrNotRSAPrivateKey    = errors.New("Key is not a valid RSA private key")
	ErrNotRSAPublicKey     = errors.New("Key is not a valid RSA public key")
)

// Parse PEM encoded PKCS1 or PKCS8 private key
func ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKCS1PrivateKey(block.Bytes); err != nil {
		if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil {
			return nil, err
		}
	}

	var pkey *rsa.PrivateKey
	var ok bool
	if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
		return nil, ErrNotRSAPrivateKey
	}

	return pkey, nil
}

// Parse PEM encoded PKCS1 or PKCS8 private key protected with password
func ParseRSAPrivateKeyFromPEMWithPassword(key []byte, password string) (*rsa.PrivateKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	var parsedKey interface{}

	var blockDecrypted []byte
	if blockDecrypted, err = x509.DecryptPEMBlock(block, []byte(password)); err != nil {
		return nil, err
	}

	if parsedKey, err = x509.ParsePKCS1PrivateKey(blockDecrypted); err != nil {
		if parsedKey, err = x509.ParsePKCS8PrivateKey(blockDecrypted); err != nil {
			return nil, err
		}
	}

	var pkey *rsa.PrivateKey
	var ok bool
	if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
		return nil, ErrNotRSAPrivateKey
	}

	return pkey, nil
}

// Parse PEM encoded PKCS1 or PKCS8 public key
func ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error) {
	var err error

	// Parse PEM block
	var block *pem.Block
	if block, _ = pem.Decode(key); block == nil {
		return nil, ErrKeyMustBePEMEncoded
	}

	// Parse the key
	var parsedKey interface{}
	if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil {
		if cert, err := x509.ParseCertificate(block.Bytes); err == nil {
			parsedKey = cert.PublicKey
		} else {
			return nil, err
		}
	}

	var pkey *rsa.PublicKey
	var ok bool
	if pkey, ok = parsedKey.(*rsa.PublicKey); !ok {
		return nil, ErrNotRSAPublicKey
	}

	return pkey, nil
}
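A minimal sketch of how these PEM helpers are typically combined with the Token type defined later in this diff. `private.pem` is a placeholder path and the claims are illustrative; `NewWithClaims`, `SignedString`, and `MapClaims` come from the vendored package itself.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// "private.pem" is a placeholder; any PKCS1/PKCS8 PEM-encoded key works.
	pemBytes, err := ioutil.ReadFile("private.pem")
	if err != nil {
		log.Fatal(err)
	}
	key, err := jwt.ParseRSAPrivateKeyFromPEM(pemBytes)
	if err != nil {
		log.Fatal(err)
	}

	// Build and sign a token with the parsed key (see token.go below).
	token := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{"sub": "demo"})
	signed, err := token.SignedString(key)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(signed)
}
```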
35
vendor/github.com/dgrijalva/jwt-go/signing_method.go
generated
vendored
Normal file
@@ -0,0 +1,35 @@
package jwt

import (
	"sync"
)

var signingMethods = map[string]func() SigningMethod{}
var signingMethodLock = new(sync.RWMutex)

// Implement SigningMethod to add new methods for signing or verifying tokens.
type SigningMethod interface {
	Verify(signingString, signature string, key interface{}) error // Returns nil if signature is valid
	Sign(signingString string, key interface{}) (string, error)    // Returns encoded signature or error
	Alg() string                                                    // returns the alg identifier for this method (example: 'HS256')
}

// Register the "alg" name and a factory function for signing method.
// This is typically done during init() in the method's implementation
func RegisterSigningMethod(alg string, f func() SigningMethod) {
	signingMethodLock.Lock()
	defer signingMethodLock.Unlock()

	signingMethods[alg] = f
}

// Get a signing method from an "alg" string
func GetSigningMethod(alg string) (method SigningMethod) {
	signingMethodLock.RLock()
	defer signingMethodLock.RUnlock()

	if methodF, ok := signingMethods[alg]; ok {
		method = methodF()
	}
	return
}
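The registry above is how the RS/PS methods in the previous files hook themselves in. A hedged sketch of registering a custom method from an importing package; `noopMethod` is a hypothetical type used only to show the mechanism, not a real signing scheme.

```go
package main

import (
	"fmt"

	jwt "github.com/dgrijalva/jwt-go"
)

// noopMethod is a hypothetical SigningMethod that produces an empty
// signature; it exists only to demonstrate registration and lookup.
type noopMethod struct{}

func (noopMethod) Alg() string { return "NOOP" }

func (noopMethod) Sign(signingString string, key interface{}) (string, error) {
	return "", nil
}

func (noopMethod) Verify(signingString, signature string, key interface{}) error {
	if signature != "" {
		return fmt.Errorf("expected empty signature")
	}
	return nil
}

func main() {
	jwt.RegisterSigningMethod("NOOP", func() jwt.SigningMethod { return noopMethod{} })
	m := jwt.GetSigningMethod("NOOP")
	fmt.Println(m.Alg()) // prints "NOOP"
}
```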
108
vendor/github.com/dgrijalva/jwt-go/token.go
generated
vendored
Normal file
@@ -0,0 +1,108 @@
package jwt

import (
	"encoding/base64"
	"encoding/json"
	"strings"
	"time"
)

// TimeFunc provides the current time when parsing a token to validate "exp" claim (expiration time).
// You can override it to use another time value. This is useful for testing or if your
// server uses a different time zone than your tokens.
var TimeFunc = time.Now

// Parse methods use this callback function to supply
// the key for verification. The function receives the parsed,
// but unverified Token. This allows you to use properties in the
// Header of the token (such as `kid`) to identify which key to use.
type Keyfunc func(*Token) (interface{}, error)

// A JWT Token. Different fields will be used depending on whether you're
// creating or parsing/verifying a token.
type Token struct {
	Raw       string                 // The raw token. Populated when you Parse a token
	Method    SigningMethod          // The signing method used or to be used
	Header    map[string]interface{} // The first segment of the token
	Claims    Claims                 // The second segment of the token
	Signature string                 // The third segment of the token. Populated when you Parse a token
	Valid     bool                   // Is the token valid? Populated when you Parse/Verify a token
}

// Create a new Token. Takes a signing method
func New(method SigningMethod) *Token {
	return NewWithClaims(method, MapClaims{})
}

func NewWithClaims(method SigningMethod, claims Claims) *Token {
	return &Token{
		Header: map[string]interface{}{
			"typ": "JWT",
			"alg": method.Alg(),
		},
		Claims: claims,
		Method: method,
	}
}

// Get the complete, signed token
func (t *Token) SignedString(key interface{}) (string, error) {
	var sig, sstr string
	var err error
	if sstr, err = t.SigningString(); err != nil {
		return "", err
	}
	if sig, err = t.Method.Sign(sstr, key); err != nil {
		return "", err
	}
	return strings.Join([]string{sstr, sig}, "."), nil
}

// Generate the signing string. This is the
// most expensive part of the whole deal. Unless you
// need this for something special, just go straight for
// the SignedString.
func (t *Token) SigningString() (string, error) {
	var err error
	parts := make([]string, 2)
	for i := range parts {
		var jsonValue []byte
		if i == 0 {
			if jsonValue, err = json.Marshal(t.Header); err != nil {
				return "", err
			}
		} else {
			if jsonValue, err = json.Marshal(t.Claims); err != nil {
				return "", err
			}
		}

		parts[i] = EncodeSegment(jsonValue)
	}
	return strings.Join(parts, "."), nil
}

// Parse, validate, and return a token.
// keyFunc will receive the parsed token and should return the key for validating.
// If everything is kosher, err will be nil
func Parse(tokenString string, keyFunc Keyfunc) (*Token, error) {
	return new(Parser).Parse(tokenString, keyFunc)
}

func ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) {
	return new(Parser).ParseWithClaims(tokenString, claims, keyFunc)
}

// Encode JWT specific base64url encoding with padding stripped
func EncodeSegment(seg []byte) string {
	return strings.TrimRight(base64.URLEncoding.EncodeToString(seg), "=")
}

// Decode JWT specific base64url encoding with padding stripped
func DecodeSegment(seg string) ([]byte, error) {
	if l := len(seg) % 4; l > 0 {
		seg += strings.Repeat("=", 4-l)
	}

	return base64.URLEncoding.DecodeString(seg)
}
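Since `DecodeSegment` is exported, a caller can peek at a token's header without verifying it, for example to pick a key by `alg` or `kid` inside a `Keyfunc`. A minimal sketch; the token string is a made-up example.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// A token string obtained elsewhere; only its header segment is inspected.
	tokenString := "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJkZW1vIn0.sig"

	parts := strings.Split(tokenString, ".")
	headerJSON, err := jwt.DecodeSegment(parts[0])
	if err != nil {
		log.Fatal(err)
	}

	var header map[string]interface{}
	if err := json.Unmarshal(headerJSON, &header); err != nil {
		log.Fatal(err)
	}
	fmt.Println(header["alg"]) // prints "RS256" for this example header
}
```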
19
vendor/github.com/dhowden/tag/.editorconfig
generated
vendored
Normal file
@@ -0,0 +1,19 @@
# EditorConfig helps developers define and maintain consistent
# coding styles between different editors and IDEs
# editorconfig.org

root = true

[*.go]
indent_style = tab
indent_size = 3

[*.md]
trim_trailing_whitespace = false

[*]
# We recommend you to keep these unchanged
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
5
vendor/github.com/dhowden/tag/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,5 @@
language: go

go:
- 1.7
- tip
23
vendor/github.com/dhowden/tag/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,23 @@
Copyright 2015, David Howden
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
72
vendor/github.com/dhowden/tag/README.md
generated
vendored
Normal file
@@ -0,0 +1,72 @@
# MP3/MP4/OGG/FLAC metadata parsing library
[](https://travis-ci.org/dhowden/tag)
[](https://godoc.org/github.com/dhowden/tag)

This package provides MP3 (ID3v1,2.{2,3,4}) and MP4 (AAC, M4A, ALAC), OGG and FLAC metadata detection, parsing and artwork extraction.

Detect and parse tag metadata from an `io.ReadSeeker` (e.g. an `*os.File`):

```go
m, err := tag.ReadFrom(f)
if err != nil {
	log.Fatal(err)
}
log.Print(m.Format()) // The detected format.
log.Print(m.Title())  // The title of the track (see Metadata interface for more details).
```

Parsed metadata is exported via a single interface (giving a consistent API for all supported metadata formats).

```go
// Metadata is an interface which is used to describe metadata retrieved by this package.
type Metadata interface {
	Format() Format
	FileType() FileType

	Title() string
	Album() string
	Artist() string
	AlbumArtist() string
	Composer() string
	Genre() string
	Year() int

	Track() (int, int) // Number, Total
	Disc() (int, int)  // Number, Total

	Picture() *Picture // Artwork
	Lyrics() string
	Comment() string

	Raw() map[string]interface{} // NB: raw tag names are not consistent across formats.
}
```

## Audio Data Checksum (SHA1)

This package also provides a metadata-invariant checksum for audio files: only the audio data is used to
construct the checksum.

[http://godoc.org/github.com/dhowden/tag#Sum](http://godoc.org/github.com/dhowden/tag#Sum)

## Tools

There are simple command-line tools which demonstrate basic tag extraction and summing:

```console
$ go get github.com/dhowden/tag/...
$ cd $GOPATH/bin
$ ./tag 11\ High\ Hopes.m4a
Metadata Format: MP4
Title: High Hopes
Album: The Division Bell
Artist: Pink Floyd
Composer: Abbey Road Recording Studios/David Gilmour/Polly Samson
Year: 1994
Track: 11 of 11
Disc: 1 of 1
Picture: Picture{Ext: jpeg, MIMEType: image/jpeg, Type: , Description: , Data.Size: 606109}

$ ./sum 11\ High\ Hopes.m4a
2ae208c5f00a1f21f5fac9b7f6e0b8e52c06da29
```
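To complement the README examples above, here is a short sketch of the checksum helper linked in the "Audio Data Checksum" section. `track.m4a` is a placeholder path, and the `tag.Sum(io.ReadSeeker) (string, error)` signature is assumed from the godoc link above rather than from code shown in this diff.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/dhowden/tag"
)

func main() {
	f, err := os.Open("track.m4a") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Sum computes the metadata-invariant SHA1 checksum described above.
	sum, err := tag.Sum(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(sum)
}
```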
110
vendor/github.com/dhowden/tag/dsf.go
generated
vendored
Normal file
@@ -0,0 +1,110 @@
// Copyright 2015, David Howden
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package tag

import (
	"errors"
	"io"
)

// ReadDSFTags reads DSF metadata from the io.ReadSeeker, returning the resulting
// metadata in a Metadata implementation, or non-nil error if there was a problem.
// samples: http://www.2l.no/hires/index.html
func ReadDSFTags(r io.ReadSeeker) (Metadata, error) {
	dsd, err := readString(r, 4)
	if err != nil {
		return nil, err
	}
	if dsd != "DSD " {
		return nil, errors.New("expected 'DSD '")
	}

	_, err = r.Seek(int64(16), io.SeekCurrent)
	if err != nil {
		return nil, err
	}

	n4, err := readBytes(r, 8)
	if err != nil {
		return nil, err
	}
	id3Pointer := getIntLittleEndian(n4)

	_, err = r.Seek(int64(id3Pointer), io.SeekStart)
	if err != nil {
		return nil, err
	}

	id3, err := ReadID3v2Tags(r)
	if err != nil {
		return nil, err
	}

	return metadataDSF{id3}, nil
}

type metadataDSF struct {
	id3 Metadata
}

func (m metadataDSF) Format() Format {
	return m.id3.Format()
}

func (m metadataDSF) FileType() FileType {
	return DSF
}

func (m metadataDSF) Title() string {
	return m.id3.Title()
}

func (m metadataDSF) Album() string {
	return m.id3.Album()
}

func (m metadataDSF) Artist() string {
	return m.id3.Artist()
}

func (m metadataDSF) AlbumArtist() string {
	return m.id3.AlbumArtist()
}

func (m metadataDSF) Composer() string {
	return m.id3.Composer()
}

func (m metadataDSF) Year() int {
	return m.id3.Year()
}

func (m metadataDSF) Genre() string {
	return m.id3.Genre()
}

func (m metadataDSF) Track() (int, int) {
	return m.id3.Track()
}

func (m metadataDSF) Disc() (int, int) {
	return m.id3.Disc()
}

func (m metadataDSF) Picture() *Picture {
	return m.id3.Picture()
}

func (m metadataDSF) Lyrics() string {
	return m.id3.Lyrics()
}

func (m metadataDSF) Comment() string {
	return m.id3.Comment()
}

func (m metadataDSF) Raw() map[string]interface{} {
	return m.id3.Raw()
}
89
vendor/github.com/dhowden/tag/flac.go
generated
vendored
Normal file
@@ -0,0 +1,89 @@
// Copyright 2015, David Howden
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package tag

import (
	"errors"
	"io"
)

// blockType is a type which represents an enumeration of valid FLAC blocks
type blockType byte

// FLAC block types.
const (
	// Stream Info Block  0
	// Padding Block      1
	// Application Block  2
	// Seektable Block    3
	// Cue Sheet Block    5
	vorbisCommentBlock blockType = 4
	pictureBlock       blockType = 6
)

// ReadFLACTags reads FLAC metadata from the io.ReadSeeker, returning the resulting
// metadata in a Metadata implementation, or non-nil error if there was a problem.
func ReadFLACTags(r io.ReadSeeker) (Metadata, error) {
	flac, err := readString(r, 4)
	if err != nil {
		return nil, err
	}
	if flac != "fLaC" {
		return nil, errors.New("expected 'fLaC'")
	}

	m := &metadataFLAC{
		newMetadataVorbis(),
	}

	for {
		last, err := m.readFLACMetadataBlock(r)
		if err != nil {
			return nil, err
		}

		if last {
			break
		}
	}
	return m, nil
}

type metadataFLAC struct {
	*metadataVorbis
}

func (m *metadataFLAC) readFLACMetadataBlock(r io.ReadSeeker) (last bool, err error) {
	blockHeader, err := readBytes(r, 1)
	if err != nil {
		return
	}

	if getBit(blockHeader[0], 7) {
		blockHeader[0] ^= (1 << 7)
		last = true
	}

	blockLen, err := readInt(r, 3)
	if err != nil {
		return
	}

	switch blockType(blockHeader[0]) {
	case vorbisCommentBlock:
		err = m.readVorbisComment(r)

	case pictureBlock:
		err = m.readPictureBlock(r)

	default:
		_, err = r.Seek(int64(blockLen), io.SeekCurrent)
	}
	return
}

func (m *metadataFLAC) FileType() FileType {
	return FLAC
}
81
vendor/github.com/dhowden/tag/id.go
generated
vendored
Normal file
@@ -0,0 +1,81 @@
package tag

import (
	"fmt"
	"io"
)

// Identify identifies the format and file type of the data in the ReadSeeker.
func Identify(r io.ReadSeeker) (format Format, fileType FileType, err error) {
	b, err := readBytes(r, 11)
	if err != nil {
		return
	}

	_, err = r.Seek(-11, io.SeekCurrent)
	if err != nil {
		err = fmt.Errorf("could not seek back to original position: %v", err)
		return
	}

	switch {
	case string(b[0:4]) == "fLaC":
		return VORBIS, FLAC, nil

	case string(b[0:4]) == "OggS":
		return VORBIS, OGG, nil

	case string(b[4:8]) == "ftyp":
		b = b[8:11]
		fileType = UnknownFileType
		switch string(b) {
		case "M4A":
			fileType = M4A

		case "M4B":
			fileType = M4B

		case "M4P":
			fileType = M4P
		}
		return MP4, fileType, nil

	case string(b[0:3]) == "ID3":
		b := b[3:]
		switch uint(b[0]) {
		case 2:
			format = ID3v2_2
		case 3:
			format = ID3v2_3
		case 4:
			format = ID3v2_4
		case 0, 1:
			fallthrough
		default:
			err = fmt.Errorf("ID3 version: %v, expected: 2, 3 or 4", uint(b[0]))
			return
		}
		return format, MP3, nil
	}

	n, err := r.Seek(-128, io.SeekEnd)
	if err != nil {
		return
	}

	tag, err := readString(r, 3)
	if err != nil {
		return
	}

	_, err = r.Seek(-n, io.SeekCurrent)
	if err != nil {
		return
	}

	if tag != "TAG" {
		err = ErrNoTagsFound
		return
	}
	return ID3v1, MP3, nil
}
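A brief usage sketch for `Identify` as defined above; `track.mp3` is a placeholder path, and the caller only sees the detected format and file type.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/dhowden/tag"
)

func main() {
	f, err := os.Open("track.mp3") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Identify sniffs the leading/trailing bytes and reports format and type.
	format, fileType, err := tag.Identify(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(format, fileType)
}
```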
144
vendor/github.com/dhowden/tag/id3v1.go
generated
vendored
Normal file
@@ -0,0 +1,144 @@
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// id3v1Genres is a list of genres as given in the ID3v1 specification.
|
||||
var id3v1Genres = [...]string{
|
||||
"Blues", "Classic Rock", "Country", "Dance", "Disco", "Funk", "Grunge",
|
||||
"Hip-Hop", "Jazz", "Metal", "New Age", "Oldies", "Other", "Pop", "R&B",
|
||||
"Rap", "Reggae", "Rock", "Techno", "Industrial", "Alternative", "Ska",
|
||||
"Death Metal", "Pranks", "Soundtrack", "Euro-Techno", "Ambient",
|
||||
"Trip-Hop", "Vocal", "Jazz+Funk", "Fusion", "Trance", "Classical",
|
||||
"Instrumental", "Acid", "House", "Game", "Sound Clip", "Gospel",
|
||||
"Noise", "AlternRock", "Bass", "Soul", "Punk", "Space", "Meditative",
|
||||
"Instrumental Pop", "Instrumental Rock", "Ethnic", "Gothic",
|
||||
"Darkwave", "Techno-Industrial", "Electronic", "Pop-Folk",
|
||||
"Eurodance", "Dream", "Southern Rock", "Comedy", "Cult", "Gangsta",
|
||||
"Top 40", "Christian Rap", "Pop/Funk", "Jungle", "Native American",
|
||||
"Cabaret", "New Wave", "Psychadelic", "Rave", "Showtunes", "Trailer",
|
||||
"Lo-Fi", "Tribal", "Acid Punk", "Acid Jazz", "Polka", "Retro",
|
||||
"Musical", "Rock & Roll", "Hard Rock", "Folk", "Folk-Rock",
|
||||
"National Folk", "Swing", "Fast Fusion", "Bebob", "Latin", "Revival",
|
||||
"Celtic", "Bluegrass", "Avantgarde", "Gothic Rock", "Progressive Rock",
|
||||
"Psychedelic Rock", "Symphonic Rock", "Slow Rock", "Big Band",
|
||||
"Chorus", "Easy Listening", "Acoustic", "Humour", "Speech", "Chanson",
|
||||
"Opera", "Chamber Music", "Sonata", "Symphony", "Booty Bass", "Primus",
|
||||
"Porn Groove", "Satire", "Slow Jam", "Club", "Tango", "Samba",
|
||||
"Folklore", "Ballad", "Power Ballad", "Rhythmic Soul", "Freestyle",
|
||||
"Duet", "Punk Rock", "Drum Solo", "Acapella", "Euro-House", "Dance Hall",
|
||||
}
|
||||
|
||||
// ErrNotID3v1 is an error which is returned when no ID3v1 header is found.
|
||||
var ErrNotID3v1 = errors.New("invalid ID3v1 header")
|
||||
|
||||
// ReadID3v1Tags reads ID3v1 tags from the io.ReadSeeker. Returns ErrNotID3v1
|
||||
// if there are no ID3v1 tags, otherwise non-nil error if there was a problem.
|
||||
func ReadID3v1Tags(r io.ReadSeeker) (Metadata, error) {
|
||||
_, err := r.Seek(-128, io.SeekEnd)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if tag, err := readString(r, 3); err != nil {
|
||||
return nil, err
|
||||
} else if tag != "TAG" {
|
||||
return nil, ErrNotID3v1
|
||||
}
|
||||
|
||||
title, err := readString(r, 30)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
artist, err := readString(r, 30)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
album, err := readString(r, 30)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
year, err := readString(r, 4)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
commentBytes, err := readBytes(r, 30)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var comment string
|
||||
var track int
|
||||
if commentBytes[28] == 0 {
|
||||
comment = trimString(string(commentBytes[:28]))
|
||||
track = int(commentBytes[29])
|
||||
} else {
|
||||
comment = trimString(string(commentBytes))
|
||||
}
|
||||
|
||||
var genre string
|
||||
genreID, err := readBytes(r, 1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if int(genreID[0]) < len(id3v1Genres) {
|
||||
genre = id3v1Genres[int(genreID[0])]
|
||||
}
|
||||
|
||||
m := make(map[string]interface{})
|
||||
m["title"] = trimString(title)
|
||||
m["artist"] = trimString(artist)
|
||||
m["album"] = trimString(album)
|
||||
m["year"] = trimString(year)
|
||||
m["comment"] = trimString(comment)
|
||||
m["track"] = track
|
||||
m["genre"] = genre
|
||||
|
||||
return metadataID3v1(m), nil
|
||||
}
|
||||
|
||||
func trimString(x string) string {
|
||||
return strings.TrimSpace(strings.Trim(x, "\x00"))
|
||||
}
|
||||
|
||||
// metadataID3v1 is the implementation of Metadata used for ID3v1 tags.
|
||||
type metadataID3v1 map[string]interface{}
|
||||
|
||||
func (metadataID3v1) Format() Format { return ID3v1 }
|
||||
func (metadataID3v1) FileType() FileType { return MP3 }
|
||||
func (m metadataID3v1) Raw() map[string]interface{} { return m }
|
||||
|
||||
func (m metadataID3v1) Title() string { return m["title"].(string) }
|
||||
func (m metadataID3v1) Album() string { return m["album"].(string) }
|
||||
func (m metadataID3v1) Artist() string { return m["artist"].(string) }
|
||||
func (m metadataID3v1) Genre() string { return m["genre"].(string) }
|
||||
|
||||
func (m metadataID3v1) Year() int {
|
||||
y := m["year"].(string)
|
||||
n, err := strconv.Atoi(y)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m metadataID3v1) Track() (int, int) { return m["track"].(int), 0 }
|
||||
|
||||
func (m metadataID3v1) AlbumArtist() string { return "" }
|
||||
func (m metadataID3v1) Composer() string { return "" }
|
||||
func (metadataID3v1) Disc() (int, int) { return 0, 0 }
|
||||
func (m metadataID3v1) Picture() *Picture { return nil }
|
||||
func (m metadataID3v1) Lyrics() string { return "" }
|
||||
func (m metadataID3v1) Comment() string { return m["comment"].(string) }
|
||||
434
vendor/github.com/dhowden/tag/id3v2.go
generated
vendored
Normal file
@@ -0,0 +1,434 @@
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var id3v2Genres = [...]string{
|
||||
"Blues", "Classic Rock", "Country", "Dance", "Disco", "Funk", "Grunge",
|
||||
"Hip-Hop", "Jazz", "Metal", "New Age", "Oldies", "Other", "Pop", "R&B",
|
||||
"Rap", "Reggae", "Rock", "Techno", "Industrial", "Alternative", "Ska",
|
||||
"Death Metal", "Pranks", "Soundtrack", "Euro-Techno", "Ambient",
|
||||
"Trip-Hop", "Vocal", "Jazz+Funk", "Fusion", "Trance", "Classical",
|
||||
"Instrumental", "Acid", "House", "Game", "Sound Clip", "Gospel",
|
||||
"Noise", "AlternRock", "Bass", "Soul", "Punk", "Space", "Meditative",
|
||||
"Instrumental Pop", "Instrumental Rock", "Ethnic", "Gothic",
|
||||
"Darkwave", "Techno-Industrial", "Electronic", "Pop-Folk",
|
||||
"Eurodance", "Dream", "Southern Rock", "Comedy", "Cult", "Gangsta",
|
||||
"Top 40", "Christian Rap", "Pop/Funk", "Jungle", "Native American",
|
||||
"Cabaret", "New Wave", "Psychedelic", "Rave", "Showtunes", "Trailer",
|
||||
"Lo-Fi", "Tribal", "Acid Punk", "Acid Jazz", "Polka", "Retro",
|
||||
"Musical", "Rock & Roll", "Hard Rock", "Folk", "Folk-Rock",
|
||||
"National Folk", "Swing", "Fast Fusion", "Bebob", "Latin", "Revival",
|
||||
"Celtic", "Bluegrass", "Avantgarde", "Gothic Rock", "Progressive Rock",
|
||||
"Psychedelic Rock", "Symphonic Rock", "Slow Rock", "Big Band",
|
||||
"Chorus", "Easy Listening", "Acoustic", "Humour", "Speech", "Chanson",
|
||||
"Opera", "Chamber Music", "Sonata", "Symphony", "Booty Bass", "Primus",
|
||||
"Porn Groove", "Satire", "Slow Jam", "Club", "Tango", "Samba",
|
||||
"Folklore", "Ballad", "Power Ballad", "Rhythmic Soul", "Freestyle",
|
||||
"Duet", "Punk Rock", "Drum Solo", "A capella", "Euro-House", "Dance Hall",
|
||||
"Goa", "Drum & Bass", "Club-House", "Hardcore", "Terror", "Indie",
|
||||
"Britpop", "Negerpunk", "Polsk Punk", "Beat", "Christian Gangsta Rap",
|
||||
"Heavy Metal", "Black Metal", "Crossover", "Contemporary Christian",
|
||||
"Christian Rock ", "Merengue", "Salsa", "Thrash Metal", "Anime", "JPop",
|
||||
"Synthpop",
|
||||
}
|
||||
|
||||
// id3v2Header is a type which represents an ID3v2 tag header.
|
||||
type id3v2Header struct {
|
||||
Version Format
|
||||
Unsynchronisation bool
|
||||
ExtendedHeader bool
|
||||
Experimental bool
|
||||
Size int
|
||||
}
|
||||
|
||||
// readID3v2Header reads the ID3v2 header from the given io.Reader.
|
||||
// offset it number of bytes of header that was read
|
||||
func readID3v2Header(r io.Reader) (h *id3v2Header, offset int, err error) {
|
||||
offset = 10
|
||||
b, err := readBytes(r, offset)
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("expected to read 10 bytes (ID3v2Header): %v", err)
|
||||
}
|
||||
|
||||
if string(b[0:3]) != "ID3" {
|
||||
return nil, 0, fmt.Errorf("expected to read \"ID3\"")
|
||||
}
|
||||
|
||||
b = b[3:]
|
||||
var vers Format
|
||||
switch uint(b[0]) {
|
||||
case 2:
|
||||
vers = ID3v2_2
|
||||
case 3:
|
||||
vers = ID3v2_3
|
||||
case 4:
|
||||
vers = ID3v2_4
|
||||
case 0, 1:
|
||||
fallthrough
|
||||
default:
|
||||
return nil, 0, fmt.Errorf("ID3 version: %v, expected: 2, 3 or 4", uint(b[0]))
|
||||
}
|
||||
|
||||
// NB: We ignore b[1] (the revision) as we don't currently rely on it.
|
||||
h = &id3v2Header{
|
||||
Version: vers,
|
||||
Unsynchronisation: getBit(b[2], 7),
|
||||
ExtendedHeader: getBit(b[2], 6),
|
||||
Experimental: getBit(b[2], 5),
|
||||
Size: get7BitChunkedInt(b[3:7]),
|
||||
}
|
||||
|
||||
if h.ExtendedHeader {
|
||||
switch vers {
|
||||
case ID3v2_3:
|
||||
b, err := readBytes(r, 4)
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("expected to read 4 bytes (ID3v23 extended header len): %v", err)
|
||||
}
|
||||
// skip header, size is excluding len bytes
|
||||
extendedHeaderSize := getInt(b)
|
||||
_, err = readBytes(r, extendedHeaderSize)
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("expected to read %d bytes (ID3v23 skip extended header): %v", extendedHeaderSize, err)
|
||||
}
|
||||
offset += extendedHeaderSize
|
||||
case ID3v2_4:
|
||||
b, err := readBytes(r, 4)
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("expected to read 4 bytes (ID3v24 extended header len): %v", err)
|
||||
}
|
||||
// skip header, size is synchsafe int including len bytes
|
||||
extendedHeaderSize := get7BitChunkedInt(b) - 4
|
||||
_, err = readBytes(r, extendedHeaderSize)
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("expected to read %d bytes (ID3v24 skip extended header): %v", extendedHeaderSize, err)
|
||||
}
|
||||
offset += extendedHeaderSize
|
||||
default:
|
||||
// nop, only 2.3 and 2.4 should have extended header
|
||||
}
|
||||
}
|
||||
|
||||
return h, offset, nil
|
||||
}
|
||||
|
||||
// id3v2FrameFlags is a type which represents the flags which can be set on an ID3v2 frame.
|
||||
type id3v2FrameFlags struct {
|
||||
// Message (ID3 2.3.0 and 2.4.0)
|
||||
TagAlterPreservation bool
|
||||
FileAlterPreservation bool
|
||||
ReadOnly bool
|
||||
|
||||
// Format (ID3 2.3.0 and 2.4.0)
|
||||
Compression bool
|
||||
Encryption bool
|
||||
GroupIdentity bool
|
||||
// ID3 2.4.0 only (see http://id3.org/id3v2.4.0-structure sec 4.1)
|
||||
Unsynchronisation bool
|
||||
DataLengthIndicator bool
|
||||
}
|
||||
|
||||
func readID3v23FrameFlags(r io.Reader) (*id3v2FrameFlags, error) {
|
||||
b, err := readBytes(r, 2)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
msg := b[0]
|
||||
fmt := b[1]
|
||||
|
||||
return &id3v2FrameFlags{
|
||||
TagAlterPreservation: getBit(msg, 7),
|
||||
FileAlterPreservation: getBit(msg, 6),
|
||||
ReadOnly: getBit(msg, 5),
|
||||
Compression: getBit(fmt, 7),
|
||||
Encryption: getBit(fmt, 6),
|
||||
GroupIdentity: getBit(fmt, 5),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func readID3v24FrameFlags(r io.Reader) (*id3v2FrameFlags, error) {
|
||||
b, err := readBytes(r, 2)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
msg := b[0]
|
||||
fmt := b[1]
|
||||
|
||||
return &id3v2FrameFlags{
|
||||
TagAlterPreservation: getBit(msg, 6),
|
||||
FileAlterPreservation: getBit(msg, 5),
|
||||
ReadOnly: getBit(msg, 4),
|
||||
GroupIdentity: getBit(fmt, 6),
|
||||
Compression: getBit(fmt, 3),
|
||||
Encryption: getBit(fmt, 2),
|
||||
Unsynchronisation: getBit(fmt, 1),
|
||||
DataLengthIndicator: getBit(fmt, 0),
|
||||
}, nil
|
||||
|
||||
}
|
||||
|
||||
func readID3v2_2FrameHeader(r io.Reader) (name string, size int, headerSize int, err error) {
|
||||
name, err = readString(r, 3)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
size, err = readInt(r, 3)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
headerSize = 6
|
||||
return
|
||||
}
|
||||
|
||||
func readID3v2_3FrameHeader(r io.Reader) (name string, size int, headerSize int, err error) {
|
||||
name, err = readString(r, 4)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
size, err = readInt(r, 4)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
headerSize = 8
|
||||
return
|
||||
}
|
||||
|
||||
func readID3v2_4FrameHeader(r io.Reader) (name string, size int, headerSize int, err error) {
|
||||
name, err = readString(r, 4)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
size, err = read7BitChunkedInt(r, 4)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
headerSize = 8
|
||||
return
|
||||
}
|
||||
|
||||
// readID3v2Frames reads ID3v2 frames from the given reader using the ID3v2Header.
|
||||
func readID3v2Frames(r io.Reader, offset int, h *id3v2Header) (map[string]interface{}, error) {
|
||||
result := make(map[string]interface{})
|
||||
|
||||
for offset < h.Size {
|
||||
var err error
|
||||
var name string
|
||||
var size, headerSize int
|
||||
var flags *id3v2FrameFlags
|
||||
|
||||
switch h.Version {
|
||||
case ID3v2_2:
|
||||
name, size, headerSize, err = readID3v2_2FrameHeader(r)
|
||||
|
||||
case ID3v2_3:
|
||||
name, size, headerSize, err = readID3v2_3FrameHeader(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
flags, err = readID3v23FrameFlags(r)
|
||||
headerSize += 2
|
||||
|
||||
case ID3v2_4:
|
||||
name, size, headerSize, err = readID3v2_4FrameHeader(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
flags, err = readID3v24FrameFlags(r)
|
||||
headerSize += 2
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// FIXME: Do we still need this?
|
||||
// if size=0, we certainly are in a padding zone. ignore the rest of
|
||||
// the tags
|
||||
if size == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
offset += headerSize + size
|
||||
|
||||
// Avoid corrupted padding (see http://id3.org/Compliance%20Issues).
|
||||
if !validID3Frame(h.Version, name) && offset > h.Size {
|
||||
break
|
||||
}
|
||||
|
||||
if flags != nil {
|
||||
if flags.Compression {
|
||||
_, err = read7BitChunkedInt(r, 4) // read 4
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
size -= 4
|
||||
}
|
||||
|
||||
if flags.Encryption {
|
||||
_, err = readBytes(r, 1) // read 1 byte of encryption method
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
size -= 1
|
||||
}
|
||||
}
|
||||
|
||||
b, err := readBytes(r, size)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// There can be multiple tag with the same name. Append a number to the
|
||||
// name if there is more than one.
|
||||
rawName := name
|
||||
if _, ok := result[rawName]; ok {
|
||||
for i := 0; ok; i++ {
|
||||
rawName = name + "_" + strconv.Itoa(i)
|
||||
_, ok = result[rawName]
|
||||
}
|
||||
}
|
||||
|
||||
switch {
|
||||
case name == "TXXX" || name == "TXX":
|
||||
t, err := readTextWithDescrFrame(b, false, true) // no lang, but enc
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = t
|
||||
|
||||
case name[0] == 'T':
|
||||
txt, err := readTFrame(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = txt
|
||||
|
||||
case name == "UFID" || name == "UFI":
|
||||
t, err := readUFID(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = t
|
||||
|
||||
case name == "WXXX" || name == "WXX":
|
||||
t, err := readTextWithDescrFrame(b, false, false) // no lang, no enc
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = t
|
||||
|
||||
case name[0] == 'W':
|
||||
txt, err := readWFrame(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = txt
|
||||
|
||||
case name == "COMM" || name == "COM" || name == "USLT" || name == "ULT":
|
||||
t, err := readTextWithDescrFrame(b, true, true) // both lang and enc
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = t
|
||||
|
||||
case name == "APIC":
|
||||
p, err := readAPICFrame(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = p
|
||||
|
||||
case name == "PIC":
|
||||
p, err := readPICFrame(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result[rawName] = p
|
||||
|
||||
default:
|
||||
result[rawName] = b
|
||||
}
|
||||
}
|
||||
return result, nil
|
||||
}
|
||||
|
||||
type unsynchroniser struct {
|
||||
io.Reader
|
||||
ff bool
|
||||
}
|
||||
|
||||
// filter io.Reader which skip the Unsynchronisation bytes
|
||||
func (r *unsynchroniser) Read(p []byte) (int, error) {
|
||||
b := make([]byte, 1)
|
||||
i := 0
|
||||
for i < len(p) {
|
||||
if n, err := r.Reader.Read(b); err != nil || n == 0 {
|
||||
return i, err
|
||||
}
|
||||
if r.ff && b[0] == 0x00 {
|
||||
r.ff = false
|
||||
continue
|
||||
}
|
||||
p[i] = b[0]
|
||||
i++
|
||||
r.ff = (b[0] == 0xFF)
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
// ReadID3v2Tags parses ID3v2.{2,3,4} tags from the io.ReadSeeker into a Metadata, returning
|
||||
// non-nil error on failure.
|
||||
func ReadID3v2Tags(r io.ReadSeeker) (Metadata, error) {
|
||||
h, offset, err := readID3v2Header(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var ur io.Reader = r
|
||||
if h.Unsynchronisation {
|
||||
ur = &unsynchroniser{Reader: r}
|
||||
}
|
||||
|
||||
f, err := readID3v2Frames(ur, offset, h)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return metadataID3v2{header: h, frames: f}, nil
|
||||
}
|
||||
|
||||
var id3v2genreRe = regexp.MustCompile(`(.*[^(]|.* |^)\(([0-9]+)\) *(.*)$`)
|
||||
|
||||
// id3v2genre parse a id3v2 genre tag and expand the numeric genres
|
||||
func id3v2genre(genre string) string {
|
||||
c := true
|
||||
for c {
|
||||
orig := genre
|
||||
if match := id3v2genreRe.FindStringSubmatch(genre); len(match) > 0 {
|
||||
if genreID, err := strconv.Atoi(match[2]); err == nil {
|
||||
if genreID < len(id3v2Genres) {
|
||||
genre = id3v2Genres[genreID]
|
||||
if match[1] != "" {
|
||||
genre = strings.TrimSpace(match[1]) + " " + genre
|
||||
}
|
||||
if match[3] != "" {
|
||||
genre = genre + " " + match[3]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
c = (orig != genre)
|
||||
}
|
||||
return strings.Replace(genre, "((", "(", -1)
|
||||
}
|
||||
638
vendor/github.com/dhowden/tag/id3v2frames.go
generated
vendored
Normal file
@@ -0,0 +1,638 @@
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
"unicode/utf16"
|
||||
)
|
||||
|
||||
// DefaultUTF16WithBOMByteOrder is the byte order used when the "UTF16 with BOM" encoding
|
||||
// is specified without a corresponding BOM in the data.
|
||||
var DefaultUTF16WithBOMByteOrder binary.ByteOrder = binary.LittleEndian
|
||||
|
||||
// ID3v2.2.0 frames (see http://id3.org/id3v2-00, sec 4).
|
||||
var id3v22Frames = map[string]string{
|
||||
"BUF": "Recommended buffer size",
|
||||
|
||||
"CNT": "Play counter",
|
||||
"COM": "Comments",
|
||||
"CRA": "Audio encryption",
|
||||
"CRM": "Encrypted meta frame",
|
||||
|
||||
"ETC": "Event timing codes",
|
||||
"EQU": "Equalization",
|
||||
|
||||
"GEO": "General encapsulated object",
|
||||
|
||||
"IPL": "Involved people list",
|
||||
|
||||
"LNK": "Linked information",
|
||||
|
||||
"MCI": "Music CD Identifier",
|
||||
"MLL": "MPEG location lookup table",
|
||||
|
||||
"PIC": "Attached picture",
|
||||
"POP": "Popularimeter",
|
||||
|
||||
"REV": "Reverb",
|
||||
"RVA": "Relative volume adjustment",
|
||||
|
||||
"SLT": "Synchronized lyric/text",
|
||||
"STC": "Synced tempo codes",
|
||||
|
||||
"TAL": "Album/Movie/Show title",
|
||||
"TBP": "BPM (Beats Per Minute)",
|
||||
"TCM": "Composer",
|
||||
"TCO": "Content type",
|
||||
"TCR": "Copyright message",
|
||||
"TDA": "Date",
|
||||
"TDY": "Playlist delay",
|
||||
"TEN": "Encoded by",
|
||||
"TFT": "File type",
|
||||
"TIM": "Time",
|
||||
"TKE": "Initial key",
|
||||
"TLA": "Language(s)",
|
||||
"TLE": "Length",
|
||||
"TMT": "Media type",
|
||||
"TOA": "Original artist(s)/performer(s)",
|
||||
"TOF": "Original filename",
|
||||
"TOL": "Original Lyricist(s)/text writer(s)",
|
||||
"TOR": "Original release year",
|
||||
"TOT": "Original album/Movie/Show title",
|
||||
"TP1": "Lead artist(s)/Lead performer(s)/Soloist(s)/Performing group",
|
||||
"TP2": "Band/Orchestra/Accompaniment",
|
||||
"TP3": "Conductor/Performer refinement",
|
||||
"TP4": "Interpreted, remixed, or otherwise modified by",
|
||||
"TPA": "Part of a set",
|
||||
"TPB": "Publisher",
|
||||
"TRC": "ISRC (International Standard Recording Code)",
|
||||
"TRD": "Recording dates",
|
||||
"TRK": "Track number/Position in set",
|
||||
"TSI": "Size",
|
||||
"TSS": "Software/hardware and settings used for encoding",
|
||||
"TT1": "Content group description",
|
||||
"TT2": "Title/Songname/Content description",
|
||||
"TT3": "Subtitle/Description refinement",
|
||||
"TXT": "Lyricist/text writer",
|
||||
"TXX": "User defined text information frame",
|
||||
"TYE": "Year",
|
||||
|
||||
"UFI": "Unique file identifier",
|
||||
"ULT": "Unsychronized lyric/text transcription",
|
||||
|
||||
"WAF": "Official audio file webpage",
|
||||
"WAR": "Official artist/performer webpage",
|
||||
"WAS": "Official audio source webpage",
|
||||
"WCM": "Commercial information",
|
||||
"WCP": "Copyright/Legal information",
|
||||
"WPB": "Publishers official webpage",
|
||||
"WXX": "User defined URL link frame",
|
||||
}
|
||||
|
||||
// ID3v2.3.0 frames (see http://id3.org/id3v2.3.0#Declared_ID3v2_frames).
|
||||
var id3v23Frames = map[string]string{
|
||||
"AENC": "Audio encryption]",
|
||||
"APIC": "Attached picture",
|
||||
"COMM": "Comments",
|
||||
"COMR": "Commercial frame",
|
||||
"ENCR": "Encryption method registration",
|
||||
"EQUA": "Equalization",
|
||||
"ETCO": "Event timing codes",
|
||||
"GEOB": "General encapsulated object",
|
||||
"GRID": "Group identification registration",
|
||||
"IPLS": "Involved people list",
|
||||
"LINK": "Linked information",
|
||||
"MCDI": "Music CD identifier",
|
||||
"MLLT": "MPEG location lookup table",
|
||||
"OWNE": "Ownership frame",
|
||||
"PRIV": "Private frame",
|
||||
"PCNT": "Play counter",
|
||||
"POPM": "Popularimeter",
|
||||
"POSS": "Position synchronisation frame",
|
||||
"RBUF": "Recommended buffer size",
|
||||
"RVAD": "Relative volume adjustment",
|
||||
"RVRB": "Reverb",
|
||||
"SYLT": "Synchronized lyric/text",
|
||||
"SYTC": "Synchronized tempo codes",
|
||||
"TALB": "Album/Movie/Show title",
|
||||
"TBPM": "BPM (beats per minute)",
|
||||
"TCMP": "iTunes Compilation Flag",
|
||||
"TCOM": "Composer",
|
||||
"TCON": "Content type",
|
||||
"TCOP": "Copyright message",
|
||||
"TDAT": "Date",
|
||||
"TDLY": "Playlist delay",
|
||||
"TENC": "Encoded by",
|
||||
"TEXT": "Lyricist/Text writer",
|
||||
"TFLT": "File type",
|
||||
"TIME": "Time",
|
||||
"TIT1": "Content group description",
|
||||
"TIT2": "Title/songname/content description",
|
||||
"TIT3": "Subtitle/Description refinement",
|
||||
"TKEY": "Initial key",
|
||||
"TLAN": "Language(s)",
|
||||
"TLEN": "Length",
|
||||
"TMED": "Media type",
|
||||
"TOAL": "Original album/movie/show title",
|
||||
"TOFN": "Original filename",
|
||||
"TOLY": "Original lyricist(s)/text writer(s)",
|
||||
"TOPE": "Original artist(s)/performer(s)",
|
||||
"TORY": "Original release year",
|
||||
"TOWN": "File owner/licensee",
|
||||
"TPE1": "Lead performer(s)/Soloist(s)",
|
||||
"TPE2": "Band/orchestra/accompaniment",
|
||||
"TPE3": "Conductor/performer refinement",
|
||||
"TPE4": "Interpreted, remixed, or otherwise modified by",
|
||||
"TPOS": "Part of a set",
|
||||
"TPUB": "Publisher",
|
||||
"TRCK": "Track number/Position in set",
|
||||
"TRDA": "Recording dates",
|
||||
"TRSN": "Internet radio station name",
|
||||
"TRSO": "Internet radio station owner",
|
||||
"TSIZ": "Size",
|
||||
"TSO2": "iTunes uses this for Album Artist sort order",
|
||||
"TSOC": "iTunes uses this for Composer sort order",
|
||||
"TSRC": "ISRC (international standard recording code)",
|
||||
"TSSE": "Software/Hardware and settings used for encoding",
|
||||
"TYER": "Year",
|
||||
"TXXX": "User defined text information frame",
|
||||
"UFID": "Unique file identifier",
|
||||
"USER": "Terms of use",
|
||||
"USLT": "Unsychronized lyric/text transcription",
|
||||
"WCOM": "Commercial information",
|
||||
"WCOP": "Copyright/Legal information",
|
||||
"WOAF": "Official audio file webpage",
|
||||
"WOAR": "Official artist/performer webpage",
|
||||
"WOAS": "Official audio source webpage",
|
||||
"WORS": "Official internet radio station homepage",
|
||||
"WPAY": "Payment",
|
||||
"WPUB": "Publishers official webpage",
|
||||
"WXXX": "User defined URL link frame",
|
||||
}
|
||||
|
||||
// ID3v2.4.0 frames (see http://id3.org/id3v2.4.0-frames, sec 4).
|
||||
var id3v24Frames = map[string]string{
|
||||
"AENC": "Audio encryption",
|
||||
"APIC": "Attached picture",
|
||||
"ASPI": "Audio seek point index",
|
||||
|
||||
"COMM": "Comments",
|
||||
"COMR": "Commercial frame",
|
||||
|
||||
"ENCR": "Encryption method registration",
|
||||
"EQU2": "Equalisation (2)",
|
||||
"ETCO": "Event timing codes",
|
||||
|
||||
"GEOB": "General encapsulated object",
|
||||
"GRID": "Group identification registration",
|
||||
|
||||
"LINK": "Linked information",
|
||||
|
||||
"MCDI": "Music CD identifier",
|
||||
"MLLT": "MPEG location lookup table",
|
||||
|
||||
"OWNE": "Ownership frame",
|
||||
|
||||
"PRIV": "Private frame",
|
||||
"PCNT": "Play counter",
|
||||
"POPM": "Popularimeter",
|
||||
"POSS": "Position synchronisation frame",
|
||||
|
||||
"RBUF": "Recommended buffer size",
|
||||
"RVA2": "Relative volume adjustment (2)",
|
||||
"RVRB": "Reverb",
|
||||
|
||||
"SEEK": "Seek frame",
|
||||
"SIGN": "Signature frame",
|
||||
"SYLT": "Synchronised lyric/text",
|
||||
"SYTC": "Synchronised tempo codes",
|
||||
|
||||
"TALB": "Album/Movie/Show title",
|
||||
"TBPM": "BPM (beats per minute)",
|
||||
"TCMP": "iTunes Compilation Flag",
|
||||
"TCOM": "Composer",
|
||||
"TCON": "Content type",
|
||||
"TCOP": "Copyright message",
|
||||
"TDEN": "Encoding time",
|
||||
"TDLY": "Playlist delay",
|
||||
"TDOR": "Original release time",
|
||||
"TDRC": "Recording time",
|
||||
"TDRL": "Release time",
|
||||
"TDTG": "Tagging time",
|
||||
"TENC": "Encoded by",
|
||||
"TEXT": "Lyricist/Text writer",
|
||||
"TFLT": "File type",
|
||||
"TIPL": "Involved people list",
|
||||
"TIT1": "Content group description",
|
||||
"TIT2": "Title/songname/content description",
|
||||
"TIT3": "Subtitle/Description refinement",
|
||||
"TKEY": "Initial key",
|
||||
"TLAN": "Language(s)",
|
||||
"TLEN": "Length",
|
||||
"TMCL": "Musician credits list",
|
||||
"TMED": "Media type",
|
||||
"TMOO": "Mood",
|
||||
"TOAL": "Original album/movie/show title",
|
||||
"TOFN": "Original filename",
|
||||
"TOLY": "Original lyricist(s)/text writer(s)",
|
||||
"TOPE": "Original artist(s)/performer(s)",
|
||||
"TOWN": "File owner/licensee",
|
||||
"TPE1": "Lead performer(s)/Soloist(s)",
|
||||
"TPE2": "Band/orchestra/accompaniment",
|
||||
"TPE3": "Conductor/performer refinement",
|
||||
"TPE4": "Interpreted, remixed, or otherwise modified by",
|
||||
"TPOS": "Part of a set",
|
||||
"TPRO": "Produced notice",
|
||||
"TPUB": "Publisher",
|
||||
"TRCK": "Track number/Position in set",
|
||||
"TRSN": "Internet radio station name",
|
||||
"TRSO": "Internet radio station owner",
|
||||
"TSO2": "iTunes uses this for Album Artist sort order",
|
||||
"TSOA": "Album sort order",
|
||||
"TSOC": "iTunes uses this for Composer sort order",
|
||||
"TSOP": "Performer sort order",
|
||||
"TSOT": "Title sort order",
|
||||
"TSRC": "ISRC (international standard recording code)",
|
||||
"TSSE": "Software/Hardware and settings used for encoding",
|
||||
"TSST": "Set subtitle",
|
||||
"TXXX": "User defined text information frame",
|
||||
|
||||
"UFID": "Unique file identifier",
|
||||
"USER": "Terms of use",
|
||||
"USLT": "Unsynchronised lyric/text transcription",
|
||||
|
||||
"WCOM": "Commercial information",
|
||||
"WCOP": "Copyright/Legal information",
|
||||
"WOAF": "Official audio file webpage",
|
||||
"WOAR": "Official artist/performer webpage",
|
||||
"WOAS": "Official audio source webpage",
|
||||
"WORS": "Official Internet radio station homepage",
|
||||
"WPAY": "Payment",
|
||||
"WPUB": "Publishers official webpage",
|
||||
"WXXX": "User defined URL link frame",
|
||||
}
|
||||
|
||||
// ID3 frames that are defined in the specs.
|
||||
var id3Frames = map[Format]map[string]string{
|
||||
ID3v2_2: id3v22Frames,
|
||||
ID3v2_3: id3v23Frames,
|
||||
ID3v2_4: id3v24Frames,
|
||||
}
|
||||
|
||||
func validID3Frame(version Format, name string) bool {
|
||||
names, ok := id3Frames[version]
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
_, ok = names[name]
|
||||
return ok
|
||||
}
|
||||
|
||||
func readWFrame(b []byte) (string, error) {
|
||||
// Frame text is always encoded in ISO-8859-1
|
||||
b = append([]byte{0}, b...)
|
||||
return readTFrame(b)
|
||||
}
|
||||
|
||||
func readTFrame(b []byte) (string, error) {
|
||||
if len(b) == 0 {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
txt, err := decodeText(b[0], b[1:])
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return strings.Join(strings.Split(txt, string(singleZero)), ""), nil
|
||||
}
|
||||
|
||||
const (
|
||||
encodingISO8859 byte = 0
|
||||
encodingUTF16WithBOM byte = 1
|
||||
encodingUTF16 byte = 2
|
||||
encodingUTF8 byte = 3
|
||||
)
|
||||
|
||||
func decodeText(enc byte, b []byte) (string, error) {
|
||||
if len(b) == 0 {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
switch enc {
|
||||
case encodingISO8859: // ISO-8859-1
|
||||
return decodeISO8859(b), nil
|
||||
|
||||
case encodingUTF16WithBOM: // UTF-16 with byte order marker
|
||||
if len(b) == 1 {
|
||||
return "", nil
|
||||
}
|
||||
return decodeUTF16WithBOM(b)
|
||||
|
||||
case encodingUTF16: // UTF-16 without byte order (assuming BigEndian)
|
||||
if len(b) == 1 {
|
||||
return "", nil
|
||||
}
|
||||
return decodeUTF16(b, binary.BigEndian)
|
||||
|
||||
case encodingUTF8: // UTF-8
|
||||
return string(b), nil
|
||||
|
||||
default: // Fallback to ISO-8859-1
|
||||
return decodeISO8859(b), nil
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
singleZero = []byte{0}
|
||||
doubleZero = []byte{0, 0}
|
||||
)
|
||||
|
||||
func dataSplit(b []byte, enc byte) [][]byte {
|
||||
delim := singleZero
|
||||
if enc == encodingUTF16 || enc == encodingUTF16WithBOM {
|
||||
delim = doubleZero
|
||||
}
|
||||
|
||||
result := bytes.SplitN(b, delim, 2)
|
||||
if len(result) != 2 {
|
||||
return result
|
||||
}
|
||||
|
||||
if len(result[1]) == 0 {
|
||||
return result
|
||||
}
|
||||
|
||||
if result[1][0] == 0 {
|
||||
// there was a double (or triple) 0 and we cut too early
|
||||
result[0] = append(result[0], result[1][0])
|
||||
result[1] = result[1][1:]
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func decodeISO8859(b []byte) string {
|
||||
r := make([]rune, len(b))
|
||||
for i, x := range b {
|
||||
r[i] = rune(x)
|
||||
}
|
||||
return string(r)
|
||||
}
|
||||
|
||||
func decodeUTF16WithBOM(b []byte) (string, error) {
|
||||
if len(b) < 2 {
|
||||
return "", errors.New("invalid encoding: expected at least 2 bytes for UTF-16 byte order mark")
|
||||
}
|
||||
|
||||
var bo binary.ByteOrder
|
||||
switch {
|
||||
case b[0] == 0xFE && b[1] == 0xFF:
|
||||
bo = binary.BigEndian
|
||||
b = b[2:]
|
||||
|
||||
case b[0] == 0xFF && b[1] == 0xFE:
|
||||
bo = binary.LittleEndian
|
||||
b = b[2:]
|
||||
|
||||
default:
|
||||
bo = DefaultUTF16WithBOMByteOrder
|
||||
}
|
||||
return decodeUTF16(b, bo)
|
||||
}
|
||||
|
||||
func decodeUTF16(b []byte, bo binary.ByteOrder) (string, error) {
|
||||
if len(b)%2 != 0 {
|
||||
return "", errors.New("invalid encoding: expected even number of bytes for UTF-16 encoded text")
|
||||
}
|
||||
s := make([]uint16, 0, len(b)/2)
|
||||
for i := 0; i < len(b); i += 2 {
|
||||
s = append(s, bo.Uint16(b[i:i+2]))
|
||||
}
|
||||
return string(utf16.Decode(s)), nil
|
||||
}
|
||||
|
||||
// Comm is a type used in the COMM, UFID, TXXX, WXXX and USLT tags.
|
||||
// It is a text value with a description and a specified language.
|
||||
// For WXXX, TXXX and UFID, Language is left empty.
|
||||
type Comm struct {
|
||||
Language string
|
||||
Description string
|
||||
Text string
|
||||
}
|
||||
|
||||
// String returns a string representation of the underlying Comm instance.
|
||||
func (t Comm) String() string {
|
||||
if t.Language != "" {
|
||||
return fmt.Sprintf("Text{Lang: '%v', Description: '%v', %v lines}",
|
||||
t.Language, t.Description, strings.Count(t.Text, "\n"))
|
||||
}
|
||||
return fmt.Sprintf("Text{Description: '%v', %v}", t.Description, t.Text)
|
||||
}
|
||||
|
||||
// ID3v2.{3,4}
|
||||
// -- Header
|
||||
// <Header for 'Unsynchronised lyrics/text transcription', ID: "USLT">
|
||||
// <Header for 'Comment', ID: "COMM">
|
||||
// -- readTextWithDescrFrame(data, true, true)
|
||||
// Text encoding $xx
|
||||
// Language $xx xx xx
|
||||
// Content descriptor <text string according to encoding> $00 (00)
|
||||
// Lyrics/text <full text string according to encoding>
|
||||
// -- Header
|
||||
// <Header for 'User defined text information frame', ID: "TXXX">
|
||||
// <Header for 'User defined URL link frame', ID: "WXXX">
|
||||
// -- readTextWithDescrFrame(data, false, <isDataEncoded>)
|
||||
// Text encoding $xx
|
||||
// Description <text string according to encoding> $00 (00)
|
||||
// Value <text string according to encoding>
|
||||
func readTextWithDescrFrame(b []byte, hasLang bool, encoded bool) (*Comm, error) {
|
||||
enc := b[0]
|
||||
b = b[1:]
|
||||
|
||||
c := &Comm{}
|
||||
if hasLang {
|
||||
c.Language = string(b[:3])
|
||||
b = b[3:]
|
||||
}
|
||||
|
||||
descTextSplit := dataSplit(b, enc)
|
||||
if len(descTextSplit) < 1 {
|
||||
return nil, fmt.Errorf("error decoding tag description text: invalid encoding")
|
||||
}
|
||||
|
||||
desc, err := decodeText(enc, descTextSplit[0])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error decoding tag description text: %v", err)
|
||||
}
|
||||
c.Description = desc
|
||||
|
||||
if len(descTextSplit) == 1 {
|
||||
return c, nil
|
||||
}
|
||||
|
||||
if !encoded {
|
||||
enc = byte(0)
|
||||
}
|
||||
text, err := decodeText(enc, descTextSplit[1])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error decoding tag text: %v", err)
|
||||
}
|
||||
c.Text = text
|
||||
|
||||
return c, nil
|
||||
}
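// Illustrative sketch, not part of the original library: a hand-built COMM
// payload (ISO-8859-1 text, language "eng") run through readTextWithDescrFrame.
func exampleReadCOMM() {
	payload := []byte{0x00}                          // text encoding: ISO-8859-1
	payload = append(payload, "eng"...)              // language
	payload = append(payload, "description\x00"...)  // content descriptor, zero-terminated
	payload = append(payload, "the comment text"...) // the text itself

	c, err := readTextWithDescrFrame(payload, true, true)
	if err == nil {
		fmt.Println(c.Language, c.Description, c.Text) // eng description the comment text
	}
}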
|
||||
|
||||
// UFID is composed of a provider (frequently a URL) and a binary identifier.
|
||||
// The identifier can be text (MusicBrainz uses text identifiers, but this is not required).
|
||||
type UFID struct {
|
||||
Provider string
|
||||
Identifier []byte
|
||||
}
|
||||
|
||||
func (u UFID) String() string {
|
||||
return fmt.Sprintf("%v (%v)", u.Provider, string(u.Identifier))
|
||||
}
|
||||
|
||||
func readUFID(b []byte) (*UFID, error) {
|
||||
result := bytes.SplitN(b, singleZero, 2)
|
||||
if len(result) != 2 {
|
||||
return nil, errors.New("expected to split UFID data into 2 pieces")
|
||||
}
|
||||
|
||||
return &UFID{
|
||||
Provider: string(result[0]),
|
||||
Identifier: result[1],
|
||||
}, nil
|
||||
}
|
||||
|
||||
var pictureTypes = map[byte]string{
|
||||
0x00: "Other",
|
||||
0x01: "32x32 pixels 'file icon' (PNG only)",
|
||||
0x02: "Other file icon",
|
||||
0x03: "Cover (front)",
|
||||
0x04: "Cover (back)",
|
||||
0x05: "Leaflet page",
|
||||
0x06: "Media (e.g. label side of CD)",
|
||||
0x07: "Lead artist/lead performer/soloist",
|
||||
0x08: "Artist/performer",
|
||||
0x09: "Conductor",
|
||||
0x0A: "Band/Orchestra",
|
||||
0x0B: "Composer",
|
||||
0x0C: "Lyricist/text writer",
|
||||
0x0D: "Recording Location",
|
||||
0x0E: "During recording",
|
||||
0x0F: "During performance",
|
||||
0x10: "Movie/video screen capture",
|
||||
0x11: "A bright coloured fish",
|
||||
0x12: "Illustration",
|
||||
0x13: "Band/artist logotype",
|
||||
0x14: "Publisher/Studio logotype",
|
||||
}
|
||||
|
||||
// Picture is a type which represents an attached picture extracted from metadata.
|
||||
type Picture struct {
|
||||
Ext string // Extension of the picture file.
|
||||
MIMEType string // MIMEType of the picture.
|
||||
Type string // Type of the picture (see pictureTypes).
|
||||
Description string // Description.
|
||||
Data []byte // Raw picture data.
|
||||
}
|
||||
|
||||
// String returns a string representation of the underlying Picture instance.
|
||||
func (p Picture) String() string {
|
||||
return fmt.Sprintf("Picture{Ext: %v, MIMEType: %v, Type: %v, Description: %v, Data.Size: %v}",
|
||||
p.Ext, p.MIMEType, p.Type, p.Description, len(p.Data))
|
||||
}
|
||||
|
||||
// ID3v2.2
|
||||
// -- Header
|
||||
// Attached picture "PIC"
|
||||
// Frame size $xx xx xx
|
||||
// -- readPICFrame
|
||||
// Text encoding $xx
|
||||
// Image format $xx xx xx
|
||||
// Picture type $xx
|
||||
// Description <textstring> $00 (00)
|
||||
// Picture data <binary data>
|
||||
func readPICFrame(b []byte) (*Picture, error) {
|
||||
enc := b[0]
|
||||
ext := string(b[1:4])
|
||||
picType := b[4]
|
||||
|
||||
descDataSplit := dataSplit(b[5:], enc)
|
||||
if len(descDataSplit) != 2 {
|
||||
return nil, errors.New("error decoding PIC description text: invalid encoding")
|
||||
}
|
||||
desc, err := decodeText(enc, descDataSplit[0])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error decoding PIC description text: %v", err)
|
||||
}
|
||||
|
||||
var mimeType string
|
||||
switch ext {
|
||||
case "jpeg", "jpg":
|
||||
mimeType = "image/jpeg"
|
||||
case "png":
|
||||
mimeType = "image/png"
|
||||
}
|
||||
|
||||
return &Picture{
|
||||
Ext: ext,
|
||||
MIMEType: mimeType,
|
||||
Type: pictureTypes[picType],
|
||||
Description: desc,
|
||||
Data: descDataSplit[1],
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ID3v2.{3,4}
|
||||
// -- Header
|
||||
// <Header for 'Attached picture', ID: "APIC">
|
||||
// -- readAPICFrame
|
||||
// Text encoding $xx
|
||||
// MIME type <text string> $00
|
||||
// Picture type $xx
|
||||
// Description <text string according to encoding> $00 (00)
|
||||
// Picture data <binary data>
|
||||
func readAPICFrame(b []byte) (*Picture, error) {
|
||||
enc := b[0]
|
||||
mimeDataSplit := bytes.SplitN(b[1:], singleZero, 2)
|
||||
mimeType := string(mimeDataSplit[0])
|
||||
|
||||
if len(mimeDataSplit) != 2 {
	// No zero terminator after the MIME type: bail out instead of panicking below.
	return nil, errors.New("error decoding APIC: no MIME type terminator found")
}
b = mimeDataSplit[1]
|
||||
if len(b) < 1 {
|
||||
return nil, fmt.Errorf("error decoding APIC mimetype")
|
||||
}
|
||||
picType := b[0]
|
||||
|
||||
descDataSplit := dataSplit(b[1:], enc)
|
||||
if len(descDataSplit) != 2 {
|
||||
return nil, errors.New("error decoding APIC description text: invalid encoding")
|
||||
}
|
||||
desc, err := decodeText(enc, descDataSplit[0])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error decoding APIC description text: %v", err)
|
||||
}
|
||||
|
||||
var ext string
|
||||
switch mimeType {
|
||||
case "image/jpeg":
|
||||
ext = "jpg"
|
||||
case "image/png":
|
||||
ext = "png"
|
||||
}
|
||||
|
||||
return &Picture{
|
||||
Ext: ext,
|
||||
MIMEType: mimeType,
|
||||
Type: pictureTypes[picType],
|
||||
Description: desc,
|
||||
Data: descDataSplit[1],
|
||||
}, nil
|
||||
}
|
||||
141
vendor/github.com/dhowden/tag/id3v2metadata.go
generated
vendored
Normal file
@@ -0,0 +1,141 @@
|
||||
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type frameNames map[string][2]string
|
||||
|
||||
func (f frameNames) Name(s string, fm Format) string {
|
||||
l, ok := f[s]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
|
||||
switch fm {
|
||||
case ID3v2_2:
|
||||
return l[0]
|
||||
case ID3v2_3:
|
||||
return l[1]
|
||||
case ID3v2_4:
|
||||
if s == "year" {
|
||||
return "TDRC"
|
||||
}
|
||||
return l[1]
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
var frames = frameNames(map[string][2]string{
|
||||
"title": [2]string{"TT2", "TIT2"},
|
||||
"artist": [2]string{"TP1", "TPE1"},
|
||||
"album": [2]string{"TAL", "TALB"},
|
||||
"album_artist": [2]string{"TP2", "TPE2"},
|
||||
"composer": [2]string{"TCM", "TCOM"},
|
||||
"year": [2]string{"TYE", "TYER"},
|
||||
"track": [2]string{"TRK", "TRCK"},
|
||||
"disc": [2]string{"TPA", "TPOS"},
|
||||
"genre": [2]string{"TCO", "TCON"},
|
||||
"picture": [2]string{"PIC", "APIC"},
|
||||
"lyrics": [2]string{"", "USLT"},
|
||||
"comment": [2]string{"COM", "COMM"},
|
||||
})
|
||||
|
||||
// metadataID3v2 is the implementation of Metadata used for ID3v2 tags.
|
||||
type metadataID3v2 struct {
|
||||
header *id3v2Header
|
||||
frames map[string]interface{}
|
||||
}
|
||||
|
||||
func (m metadataID3v2) getString(k string) string {
|
||||
v, ok := m.frames[k]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return v.(string)
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Format() Format { return m.header.Version }
|
||||
func (m metadataID3v2) FileType() FileType { return MP3 }
|
||||
func (m metadataID3v2) Raw() map[string]interface{} { return m.frames }
|
||||
|
||||
func (m metadataID3v2) Title() string {
|
||||
return m.getString(frames.Name("title", m.Format()))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Artist() string {
|
||||
return m.getString(frames.Name("artist", m.Format()))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Album() string {
|
||||
return m.getString(frames.Name("album", m.Format()))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) AlbumArtist() string {
|
||||
return m.getString(frames.Name("album_artist", m.Format()))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Composer() string {
|
||||
return m.getString(frames.Name("composer", m.Format()))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Genre() string {
|
||||
return id3v2genre(m.getString(frames.Name("genre", m.Format())))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Year() int {
|
||||
year, _ := strconv.Atoi(m.getString(frames.Name("year", m.Format())))
|
||||
return year
|
||||
}
|
||||
|
||||
func parseXofN(s string) (x, n int) {
|
||||
xn := strings.Split(s, "/")
|
||||
if len(xn) != 2 {
|
||||
x, _ = strconv.Atoi(s)
|
||||
return x, 0
|
||||
}
|
||||
x, _ = strconv.Atoi(strings.TrimSpace(xn[0]))
|
||||
n, _ = strconv.Atoi(strings.TrimSpace(xn[1]))
|
||||
return x, n
|
||||
}
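// Illustrative sketch, not part of the original library: parseXofN accepts
// both the "x/n" and the plain "x" forms used by TRCK and TPOS frames.
func exampleParseXofN() {
	x, n := parseXofN("3/12") // x == 3, n == 12
	y, m := parseXofN("7")    // y == 7, m == 0
	_, _, _, _ = x, n, y, m
}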
|
||||
|
||||
func (m metadataID3v2) Track() (int, int) {
|
||||
return parseXofN(m.getString(frames.Name("track", m.Format())))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Disc() (int, int) {
|
||||
return parseXofN(m.getString(frames.Name("disc", m.Format())))
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Lyrics() string {
|
||||
t, ok := m.frames[frames.Name("lyrics", m.Format())]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return t.(*Comm).Text
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Comment() string {
|
||||
t, ok := m.frames[frames.Name("comment", m.Format())]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
// id3v23 has Text, id3v24 has Description
|
||||
if t.(*Comm).Description == "" {
|
||||
return trimString(t.(*Comm).Text)
|
||||
}
|
||||
return trimString(t.(*Comm).Description)
|
||||
}
|
||||
|
||||
func (m metadataID3v2) Picture() *Picture {
|
||||
v, ok := m.frames[frames.Name("picture", m.Format())]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return v.(*Picture)
|
||||
}
|
||||
362
vendor/github.com/dhowden/tag/mp4.go
generated
vendored
Normal file
@@ -0,0 +1,362 @@
|
||||
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
var atomTypes = map[int]string{
|
||||
0: "implicit", // automatic based on atom name
|
||||
1: "text",
|
||||
13: "jpeg",
|
||||
14: "png",
|
||||
21: "uint8",
|
||||
}
|
||||
|
||||
// NB: atoms does not include "----", this is handled separately
|
||||
var atoms = atomNames(map[string]string{
|
||||
"\xa9alb": "album",
|
||||
"\xa9art": "artist",
|
||||
"\xa9ART": "artist",
|
||||
"aART": "album_artist",
|
||||
"\xa9day": "year",
|
||||
"\xa9nam": "title",
|
||||
"\xa9gen": "genre",
|
||||
"trkn": "track",
|
||||
"\xa9wrt": "composer",
|
||||
"\xa9too": "encoder",
|
||||
"cprt": "copyright",
|
||||
"covr": "picture",
|
||||
"\xa9grp": "grouping",
|
||||
"keyw": "keyword",
|
||||
"\xa9lyr": "lyrics",
|
||||
"\xa9cmt": "comment",
|
||||
"tmpo": "tempo",
|
||||
"cpil": "compilation",
|
||||
"disk": "disc",
|
||||
})
|
||||
|
||||
// Detect PNG image if "implicit" class is used
|
||||
var pngHeader = []byte{137, 80, 78, 71, 13, 10, 26, 10}
|
||||
|
||||
type atomNames map[string]string
|
||||
|
||||
func (f atomNames) Name(n string) []string {
|
||||
res := make([]string, 0, 1) // length 0: avoid a spurious empty-string entry
|
||||
for k, v := range f {
|
||||
if v == n {
|
||||
res = append(res, k)
|
||||
}
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
// metadataMP4 is the implementation of Metadata for MP4 tag (atom) data.
|
||||
type metadataMP4 struct {
|
||||
fileType FileType
|
||||
data map[string]interface{}
|
||||
}
|
||||
|
||||
// ReadAtoms reads MP4 metadata atoms from the io.ReadSeeker into a Metadata, returning
|
||||
// non-nil error if there was a problem.
|
||||
func ReadAtoms(r io.ReadSeeker) (Metadata, error) {
|
||||
m := metadataMP4{
|
||||
data: make(map[string]interface{}),
|
||||
fileType: UnknownFileType,
|
||||
}
|
||||
err := m.readAtoms(r)
|
||||
return m, err
|
||||
}
|
||||
|
||||
func (m metadataMP4) readAtoms(r io.ReadSeeker) error {
|
||||
for {
|
||||
name, size, err := readAtomHeader(r)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
switch name {
|
||||
case "meta":
|
||||
// next_item_id (int32)
|
||||
_, err := readBytes(r, 4)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fallthrough
|
||||
|
||||
case "moov", "udta", "ilst":
|
||||
return m.readAtoms(r)
|
||||
}
|
||||
|
||||
_, ok := atoms[name]
|
||||
if name == "----" {
|
||||
name, size, err = readCustomAtom(r, size)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if name != "----" {
|
||||
ok = true
|
||||
}
|
||||
}
|
||||
|
||||
if !ok {
|
||||
_, err := r.Seek(int64(size-8), io.SeekCurrent)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
err = m.readAtomData(r, name, size-8)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m metadataMP4) readAtomData(r io.ReadSeeker, name string, size uint32) error {
|
||||
b, err := readBytes(r, int(size))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if len(b) < 8 {
|
||||
return fmt.Errorf("invalid encoding: expected at least %d bytes, got %d", 8, len(b))
|
||||
}
|
||||
|
||||
// "data" + size (4 bytes each)
|
||||
b = b[8:]
|
||||
|
||||
if len(b) < 3 {
|
||||
return fmt.Errorf("invalid encoding: expected at least %d bytes, for class, got %d", 3, len(b))
|
||||
}
|
||||
class := getInt(b[1:4])
|
||||
contentType, ok := atomTypes[class]
|
||||
if !ok {
|
||||
return fmt.Errorf("invalid content type: %v (%x) (%x)", class, b[1:4], b)
|
||||
}
|
||||
|
||||
// 4: atom version (1 byte) + atom flags (3 bytes)
|
||||
// 4: NULL (usually locale indicator)
|
||||
if len(b) < 8 {
|
||||
return fmt.Errorf("invalid encoding: expected at least %d bytes, for atom version and flags, got %d", 8, len(b))
|
||||
}
|
||||
b = b[8:]
|
||||
|
||||
if name == "trkn" || name == "disk" {
|
||||
if len(b) < 6 {
|
||||
return fmt.Errorf("invalid encoding: expected at least %d bytes, for track and disk numbers, got %d", 6, len(b))
|
||||
}
|
||||
|
||||
m.data[name] = int(b[3])
|
||||
m.data[name+"_count"] = int(b[5])
|
||||
return nil
|
||||
}
|
||||
|
||||
if contentType == "implicit" {
|
||||
if name == "covr" {
|
||||
if bytes.HasPrefix(b, pngHeader) {
|
||||
contentType = "png"
|
||||
}
|
||||
// TODO(dhowden): Detect JPEG formats too (harder).
|
||||
}
|
||||
}
|
||||
|
||||
var data interface{}
|
||||
switch contentType {
|
||||
case "implicit":
|
||||
if _, ok := atoms[name]; ok {
|
||||
return fmt.Errorf("unhandled implicit content type for required atom: %q", name)
|
||||
}
|
||||
return nil
|
||||
|
||||
case "text":
|
||||
data = string(b)
|
||||
|
||||
case "uint8":
|
||||
if len(b) < 1 {
|
||||
return fmt.Errorf("invalid encoding: expected at least %d bytes, for integer tag data, got %d", 1, len(b))
|
||||
}
|
||||
data = getInt(b[:1])
|
||||
|
||||
case "jpeg", "png":
|
||||
data = &Picture{
|
||||
Ext: contentType,
|
||||
MIMEType: "image/" + contentType,
|
||||
Data: b,
|
||||
}
|
||||
}
|
||||
m.data[name] = data
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func readAtomHeader(r io.ReadSeeker) (name string, size uint32, err error) {
|
||||
err = binary.Read(r, binary.BigEndian, &size)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
name, err = readString(r, 4)
|
||||
return
|
||||
}
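// Illustrative sketch, not part of the original library: an MP4 atom header
// is a 4-byte big-endian size (which counts the 8 header bytes themselves)
// followed by a 4-byte name.
func exampleAtomHeader() {
	raw := []byte{0x00, 0x00, 0x00, 0x10, 'm', 'o', 'o', 'v'}
	name, size, err := readAtomHeader(bytes.NewReader(raw))
	if err == nil {
		fmt.Println(name, size) // moov 16
	}
}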
|
||||
|
||||
// Generic atom.
|
||||
// Should have 3 sub-atoms: mean, name and data.
|
||||
// We check that mean is "com.apple.iTunes" and we use the subname as
|
||||
// the name, and move to the data atom.
|
||||
// If anything goes wrong, we jump to the end of the "----" atom.
|
||||
func readCustomAtom(r io.ReadSeeker, size uint32) (string, uint32, error) {
|
||||
subNames := make(map[string]string)
|
||||
var dataSize uint32
|
||||
|
||||
for size > 8 {
|
||||
subName, subSize, err := readAtomHeader(r)
|
||||
if err != nil {
|
||||
return "", 0, err
|
||||
}
|
||||
|
||||
// Remove the size of the atom from the size counter
|
||||
size -= subSize
|
||||
|
||||
switch subName {
|
||||
case "mean", "name":
|
||||
b, err := readBytes(r, int(subSize-8))
|
||||
if err != nil {
|
||||
return "", 0, err
|
||||
}
|
||||
|
||||
if len(b) < 4 {
|
||||
return "", 0, fmt.Errorf("invalid encoding: expected at least %d bytes, got %d", 4, len(b))
|
||||
}
|
||||
subNames[subName] = string(b[4:])
|
||||
|
||||
case "data":
|
||||
// Found the "data" atom, rewind
|
||||
dataSize = subSize + 8 // will need to re-read "data" + size (4 + 4)
|
||||
_, err := r.Seek(-8, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return "", 0, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// there should remain only the header size
|
||||
if size != 8 {
|
||||
err := errors.New("---- atom out of bounds")
|
||||
return "", 0, err
|
||||
}
|
||||
|
||||
if subNames["mean"] != "com.apple.iTunes" || subNames["name"] == "" || dataSize == 0 {
|
||||
return "----", 0, nil
|
||||
}
|
||||
return subNames["name"], dataSize, nil
|
||||
}
|
||||
|
||||
func (metadataMP4) Format() Format { return MP4 }
|
||||
func (m metadataMP4) FileType() FileType { return m.fileType }
|
||||
|
||||
func (m metadataMP4) Raw() map[string]interface{} { return m.data }
|
||||
|
||||
func (m metadataMP4) getString(n []string) string {
|
||||
for _, k := range n {
|
||||
if x, ok := m.data[k]; ok {
|
||||
return x.(string)
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m metadataMP4) getInt(n []string) int {
|
||||
for _, k := range n {
|
||||
if x, ok := m.data[k]; ok {
|
||||
return x.(int)
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m metadataMP4) Title() string {
|
||||
return m.getString(atoms.Name("title"))
|
||||
}
|
||||
|
||||
func (m metadataMP4) Artist() string {
|
||||
return m.getString(atoms.Name("artist"))
|
||||
}
|
||||
|
||||
func (m metadataMP4) Album() string {
|
||||
return m.getString(atoms.Name("album"))
|
||||
}
|
||||
|
||||
func (m metadataMP4) AlbumArtist() string {
|
||||
return m.getString(atoms.Name("album_artist"))
|
||||
}
|
||||
|
||||
func (m metadataMP4) Composer() string {
|
||||
return m.getString(atoms.Name("composer"))
|
||||
}
|
||||
|
||||
func (m metadataMP4) Genre() string {
|
||||
return m.getString(atoms.Name("genre"))
|
||||
}
|
||||
|
||||
func (m metadataMP4) Year() int {
|
||||
date := m.getString(atoms.Name("year"))
|
||||
if len(date) >= 4 {
|
||||
year, _ := strconv.Atoi(date[:4])
|
||||
return year
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m metadataMP4) Track() (int, int) {
|
||||
x := m.getInt([]string{"trkn"})
|
||||
if n, ok := m.data["trkn_count"]; ok {
|
||||
return x, n.(int)
|
||||
}
|
||||
return x, 0
|
||||
}
|
||||
|
||||
func (m metadataMP4) Disc() (int, int) {
|
||||
x := m.getInt([]string{"disk"})
|
||||
if n, ok := m.data["disk_count"]; ok {
|
||||
return x, n.(int)
|
||||
}
|
||||
return x, 0
|
||||
}
|
||||
|
||||
func (m metadataMP4) Lyrics() string {
|
||||
t, ok := m.data["\xa9lyr"]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return t.(string)
|
||||
}
|
||||
|
||||
func (m metadataMP4) Comment() string {
|
||||
t, ok := m.data["\xa9cmt"]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return t.(string)
|
||||
}
|
||||
|
||||
func (m metadataMP4) Picture() *Picture {
|
||||
v, ok := m.data["covr"]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
p, _ := v.(*Picture)
|
||||
return p
|
||||
}
|
||||
119
vendor/github.com/dhowden/tag/ogg.go
generated
vendored
Normal file
@@ -0,0 +1,119 @@
|
||||
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
const (
|
||||
idType int = 1
|
||||
commentType int = 3
|
||||
)
|
||||
|
||||
// ReadOGGTags reads OGG metadata from the io.ReadSeeker, returning the resulting
|
||||
// metadata in a Metadata implementation, or non-nil error if there was a problem.
|
||||
// See http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html
|
||||
// and http://www.xiph.org/ogg/doc/framing.html for details.
|
||||
func ReadOGGTags(r io.ReadSeeker) (Metadata, error) {
|
||||
oggs, err := readString(r, 4)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if oggs != "OggS" {
|
||||
return nil, errors.New("expected 'OggS'")
|
||||
}
|
||||
|
||||
// Skip 22 bytes of Page header to read page_segments length byte at position 26
|
||||
// See http://www.xiph.org/ogg/doc/framing.html
|
||||
_, err = r.Seek(22, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
nS, err := readInt(r, 1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Seek and discard the segments
|
||||
_, err = r.Seek(int64(nS), io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// First packet type is identification, type 1
|
||||
t, err := readInt(r, 1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if t != idType {
|
||||
return nil, errors.New("expected 'vorbis' identification type 1")
|
||||
}
|
||||
|
||||
// Seek and discard 29 bytes from common and identification header
|
||||
// See http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-610004.2
|
||||
_, err = r.Seek(29, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Beginning of a new page. Comment packet is on a separate page
|
||||
// See http://www.xiph.org/vorbis/doc/Vorbis_I_spec.html#x1-132000A.2
|
||||
oggs, err = readString(r, 4)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if oggs != "OggS" {
|
||||
return nil, errors.New("expected 'OggS'")
|
||||
}
|
||||
|
||||
// Skip the second page header, as for the first page above
|
||||
_, err = r.Seek(22, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
nS, err = readInt(r, 1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
_, err = r.Seek(int64(nS), io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Packet type is comment, type 3
|
||||
t, err = readInt(r, 1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if t != commentType {
|
||||
return nil, errors.New("expected 'vorbis' comment type 3")
|
||||
}
|
||||
|
||||
// Seek and discard 6 bytes from common header
|
||||
_, err = r.Seek(6, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
m := &metadataOGG{
|
||||
newMetadataVorbis(),
|
||||
}
|
||||
|
||||
err = m.readVorbisComment(r)
|
||||
return m, err
|
||||
}
|
||||
|
||||
type metadataOGG struct {
|
||||
*metadataVorbis
|
||||
}
|
||||
|
||||
func (m *metadataOGG) FileType() FileType {
|
||||
return OGG
|
||||
}
|
||||
219
vendor/github.com/dhowden/tag/sum.go
generated
vendored
Normal file
@@ -0,0 +1,219 @@
|
||||
package tag
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"hash"
|
||||
"io"
|
||||
)
|
||||
|
||||
// Sum creates a checksum of the audio file data provided by the io.ReadSeeker which is metadata
|
||||
// (ID3, MP4) invariant.
|
||||
func Sum(r io.ReadSeeker) (string, error) {
|
||||
b, err := readBytes(r, 11)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
_, err = r.Seek(-11, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("could not seek back to original position: %v", err)
|
||||
}
|
||||
|
||||
switch {
|
||||
case string(b[0:4]) == "fLaC":
|
||||
return SumFLAC(r)
|
||||
|
||||
case string(b[4:11]) == "ftypM4A":
|
||||
return SumAtoms(r)
|
||||
|
||||
case string(b[0:3]) == "ID3":
|
||||
return SumID3v2(r)
|
||||
}
|
||||
|
||||
h, err := SumID3v1(r)
|
||||
if err != nil {
|
||||
if err == ErrNotID3v1 {
|
||||
return SumAll(r)
|
||||
}
|
||||
return "", err
|
||||
}
|
||||
return h, nil
|
||||
}
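// Illustrative usage sketch, not part of the original library: computing the
// metadata-invariant checksum of an already-opened audio file.
func exampleSum(f io.ReadSeeker) {
	sum, err := Sum(f)
	if err != nil {
		fmt.Println("checksum failed:", err)
		return
	}
	fmt.Println("audio checksum:", sum)
}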
|
||||
|
||||
// SumAll returns a checksum of the content from the reader (until EOF).
|
||||
func SumAll(r io.ReadSeeker) (string, error) {
|
||||
h := sha1.New()
|
||||
_, err := io.Copy(h, r)
|
||||
if err != nil {
|
||||
return "", nil
|
||||
}
|
||||
return hashSum(h), nil
|
||||
}
|
||||
|
||||
// SumAtoms constructs a checksum of MP4 audio file data provided by the io.ReadSeeker which is
|
||||
// metadata invariant.
|
||||
func SumAtoms(r io.ReadSeeker) (string, error) {
|
||||
for {
|
||||
var size uint32
|
||||
err := binary.Read(r, binary.BigEndian, &size)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
return "", fmt.Errorf("reached EOF before audio data")
|
||||
}
|
||||
return "", err
|
||||
}
|
||||
|
||||
name, err := readString(r, 4)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
switch name {
|
||||
case "meta":
|
||||
// next_item_id (int32)
|
||||
_, err := r.Seek(4, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
fallthrough
|
||||
|
||||
case "moov", "udta", "ilst":
|
||||
continue
|
||||
|
||||
case "mdat": // stop when we get to the data
|
||||
h := sha1.New()
|
||||
_, err := io.CopyN(h, r, int64(size-8))
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error reading audio data: %v", err)
|
||||
}
|
||||
return hashSum(h), nil
|
||||
}
|
||||
|
||||
_, err = r.Seek(int64(size-8), io.SeekCurrent)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error reading '%v' tag: %v", name, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func sizeToEndOffset(r io.ReadSeeker, offset int64) (int64, error) {
|
||||
n, err := r.Seek(-offset, io.SeekEnd)
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("error seeking end offset (%d bytes): %v", offset, err)
|
||||
}
|
||||
|
||||
_, err = r.Seek(-n, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("error seeking back to original position: %v", err)
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// SumID3v1 constructs a checksum of MP3 audio file data (assumed to have ID3v1 tags) provided
|
||||
// by the io.ReadSeeker which is metadata invariant.
|
||||
func SumID3v1(r io.ReadSeeker) (string, error) {
|
||||
n, err := sizeToEndOffset(r, 128)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error determining read size to ID3v1 header: %v", err)
|
||||
}
|
||||
|
||||
// TODO: improve this check???
|
||||
if n <= 0 {
|
||||
return "", fmt.Errorf("file size must be greater than 128 bytes (ID3v1 header size) for MP3")
|
||||
}
|
||||
|
||||
h := sha1.New()
|
||||
_, err = io.CopyN(h, r, n)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error reading %v bytes: %v", n, err)
|
||||
}
|
||||
return hashSum(h), nil
|
||||
}
|
||||
|
||||
// SumID3v2 constructs a checksum of MP3 audio file data (assumed to have ID3v2 tags) provided by the
|
||||
// io.ReadSeeker which is metadata invariant.
|
||||
func SumID3v2(r io.ReadSeeker) (string, error) {
|
||||
header, _, err := readID3v2Header(r)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error reading ID3v2 header: %v", err)
|
||||
}
|
||||
|
||||
_, err = r.Seek(int64(header.Size), io.SeekCurrent)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error seeking to end of ID3V2 header: %v", err)
|
||||
}
|
||||
|
||||
n, err := sizeToEndOffset(r, 128)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error determining read size to ID3v1 header: %v", err)
|
||||
}
|
||||
|
||||
// TODO: remove this check?????
|
||||
if n < 0 {
|
||||
return "", fmt.Errorf("file size must be greater than 128 bytes for MP3: %v bytes", n)
|
||||
}
|
||||
|
||||
h := sha1.New()
|
||||
_, err = io.CopyN(h, r, n)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error reading %v bytes: %v", n, err)
|
||||
}
|
||||
return hashSum(h), nil
|
||||
}
|
||||
|
||||
// SumFLAC constructs a checksum of the FLAC audio file data provided by the io.ReadSeeker (ignores
|
||||
// metadata fields).
|
||||
func SumFLAC(r io.ReadSeeker) (string, error) {
|
||||
flac, err := readString(r, 4)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if flac != "fLaC" {
|
||||
return "", errors.New("expected 'fLaC'")
|
||||
}
|
||||
|
||||
for {
|
||||
last, err := skipFLACMetadataBlock(r)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if last {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
h := sha1.New()
|
||||
_, err = io.Copy(h, r)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("error reading data bytes from FLAC: %v", err)
|
||||
}
|
||||
return hashSum(h), nil
|
||||
}
|
||||
|
||||
func skipFLACMetadataBlock(r io.ReadSeeker) (last bool, err error) {
|
||||
blockHeader, err := readBytes(r, 1)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if getBit(blockHeader[0], 7) {
|
||||
blockHeader[0] ^= (1 << 7)
|
||||
last = true
|
||||
}
|
||||
|
||||
blockLen, err := readInt(r, 3)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
_, err = r.Seek(int64(blockLen), io.SeekCurrent)
|
||||
return
|
||||
}
|
||||
|
||||
func hashSum(h hash.Hash) string {
|
||||
return fmt.Sprintf("%x", h.Sum([]byte{}))
|
||||
}
|
||||
147
vendor/github.com/dhowden/tag/tag.go
generated
vendored
Normal file
@@ -0,0 +1,147 @@
|
||||
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package tag provides MP3 (ID3: v1, 2.2, 2.3 and 2.4), MP4, FLAC and OGG metadata detection,
|
||||
// parsing and artwork extraction.
|
||||
//
|
||||
// Detect and parse tag metadata from an io.ReadSeeker (e.g. an *os.File):
|
||||
// m, err := tag.ReadFrom(f)
|
||||
// if err != nil {
|
||||
// log.Fatal(err)
|
||||
// }
|
||||
// log.Print(m.Format()) // The detected format.
|
||||
// log.Print(m.Title()) // The title of the track (see Metadata interface for more details).
|
||||
package tag
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
)
|
||||
|
||||
// ErrNoTagsFound is the error returned by ReadFrom when the metadata format
|
||||
// cannot be identified.
|
||||
var ErrNoTagsFound = errors.New("no tags found")
|
||||
|
||||
// ReadFrom detects and parses audio file metadata tags (currently supports ID3v1,2.{2,3,4}, MP4, FLAC/OGG).
|
||||
// Returns non-nil error if the format of the given data could not be determined, or if there was a problem
|
||||
// parsing the data.
|
||||
func ReadFrom(r io.ReadSeeker) (Metadata, error) {
|
||||
b, err := readBytes(r, 11)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
_, err = r.Seek(-11, io.SeekCurrent)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not seek back to original position: %v", err)
|
||||
}
|
||||
|
||||
switch {
|
||||
case string(b[0:4]) == "fLaC":
|
||||
return ReadFLACTags(r)
|
||||
|
||||
case string(b[0:4]) == "OggS":
|
||||
return ReadOGGTags(r)
|
||||
|
||||
case string(b[4:8]) == "ftyp":
|
||||
return ReadAtoms(r)
|
||||
|
||||
case string(b[0:3]) == "ID3":
|
||||
return ReadID3v2Tags(r)
|
||||
|
||||
case string(b[0:4]) == "DSD ":
|
||||
return ReadDSFTags(r)
|
||||
}
|
||||
|
||||
m, err := ReadID3v1Tags(r)
|
||||
if err != nil {
|
||||
if err == ErrNotID3v1 {
|
||||
err = ErrNoTagsFound
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// Format is an enumeration of metadata types supported by this package.
|
||||
type Format string
|
||||
|
||||
// Supported tag formats.
|
||||
const (
|
||||
UnknownFormat Format = "" // Unknown Format.
|
||||
ID3v1 Format = "ID3v1" // ID3v1 tag format.
|
||||
ID3v2_2 Format = "ID3v2.2" // ID3v2.2 tag format.
|
||||
ID3v2_3 Format = "ID3v2.3" // ID3v2.3 tag format (most common).
|
||||
ID3v2_4 Format = "ID3v2.4" // ID3v2.4 tag format.
|
||||
MP4 Format = "MP4" // MP4 tag (atom) format (see http://www.ftyps.com/ for a full file type list)
|
||||
VORBIS Format = "VORBIS" // Vorbis Comment tag format.
|
||||
)
|
||||
|
||||
// FileType is an enumeration of the audio file types supported by this package, in particular
|
||||
// there are audio file types which share metadata formats, and this type is used to distinguish
|
||||
// between them.
|
||||
type FileType string
|
||||
|
||||
// Supported file types.
|
||||
const (
|
||||
UnknownFileType FileType = "" // Unknown FileType.
|
||||
MP3 FileType = "MP3" // MP3 file
|
||||
M4A FileType = "M4A" // M4A file Apple iTunes (AAC) Audio
|
||||
M4B FileType = "M4B" // M4B file Apple iTunes (AAC) Audio Book
|
||||
M4P FileType = "M4P" // M4P file Apple iTunes (AAC) AES Protected Audio
|
||||
ALAC FileType = "ALAC" // Apple Lossless file FIXME: actually detect this
|
||||
FLAC FileType = "FLAC" // FLAC file
|
||||
OGG FileType = "OGG" // OGG file
|
||||
DSF FileType = "DSF" // DSF file DSD Sony format see https://dsd-guide.com/sites/default/files/white-papers/DSFFileFormatSpec_E.pdf
|
||||
)
|
||||
|
||||
// Metadata is an interface which is used to describe metadata retrieved by this package.
|
||||
type Metadata interface {
|
||||
// Format returns the metadata Format used to encode the data.
|
||||
Format() Format
|
||||
|
||||
// FileType returns the file type of the audio file.
|
||||
FileType() FileType
|
||||
|
||||
// Title returns the title of the track.
|
||||
Title() string
|
||||
|
||||
// Album returns the album name of the track.
|
||||
Album() string
|
||||
|
||||
// Artist returns the artist name of the track.
|
||||
Artist() string
|
||||
|
||||
// AlbumArtist returns the album artist name of the track.
|
||||
AlbumArtist() string
|
||||
|
||||
// Composer returns the composer of the track.
|
||||
Composer() string
|
||||
|
||||
// Year returns the year of the track.
|
||||
Year() int
|
||||
|
||||
// Genre returns the genre of the track.
|
||||
Genre() string
|
||||
|
||||
// Track returns the track number and total tracks, or zero values if unavailable.
|
||||
Track() (int, int)
|
||||
|
||||
// Disc returns the disc number and total discs, or zero values if unavailable.
|
||||
Disc() (int, int)
|
||||
|
||||
// Picture returns a picture, or nil if not available.
|
||||
Picture() *Picture
|
||||
|
||||
// Lyrics returns the lyrics, or an empty string if unavailable.
|
||||
Lyrics() string
|
||||
|
||||
// Comment returns the comment, or an empty string if unavailable.
|
||||
Comment() string
|
||||
|
||||
// Raw returns the raw mapping of retrieved tag names and associated values.
|
||||
// NB: tag/atom names are not standardised between formats.
|
||||
Raw() map[string]interface{}
|
||||
}
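// Illustrative sketch, not part of the original library: a caller typically
// only touches the Metadata interface, regardless of the underlying format.
func exampleMetadata(m Metadata) {
	fmt.Printf("%s - %s (%s, %s)\n", m.Artist(), m.Title(), m.Format(), m.FileType())
	if track, total := m.Track(); total > 0 {
		fmt.Printf("track %d of %d\n", track, total)
	}
}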
|
||||
81
vendor/github.com/dhowden/tag/util.go
generated
vendored
Normal file
@@ -0,0 +1,81 @@
|
||||
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"io"
|
||||
)
|
||||
|
||||
func getBit(b byte, n uint) bool {
|
||||
x := byte(1 << n)
|
||||
return (b & x) == x
|
||||
}
|
||||
|
||||
func get7BitChunkedInt(b []byte) int {
|
||||
var n int
|
||||
for _, x := range b {
|
||||
n = n << 7
|
||||
n |= int(x)
|
||||
}
|
||||
return n
|
||||
}
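// Illustrative sketch, not part of the original library: ID3v2 sizes are
// stored as "synchsafe" integers, 7 significant bits per byte, which is what
// get7BitChunkedInt decodes.
func example7BitChunkedInt() int {
	// 0x02 0x01 -> (0x02 << 7) | 0x01 = 257
	return get7BitChunkedInt([]byte{0x02, 0x01})
}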
|
||||
|
||||
func getInt(b []byte) int {
|
||||
var n int
|
||||
for _, x := range b {
|
||||
n = n << 8
|
||||
n |= int(x)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func getIntLittleEndian(b []byte) int {
|
||||
var n int
|
||||
for i := len(b) - 1; i >= 0; i-- {
|
||||
n = n << 8
|
||||
n |= int(b[i])
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func readBytes(r io.Reader, n int) ([]byte, error) {
|
||||
b := make([]byte, n)
|
||||
_, err := io.ReadFull(r, b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func readString(r io.Reader, n int) (string, error) {
|
||||
b, err := readBytes(r, n)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return string(b), nil
|
||||
}
|
||||
|
||||
func readInt(r io.Reader, n int) (int, error) {
|
||||
b, err := readBytes(r, n)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return getInt(b), nil
|
||||
}
|
||||
|
||||
func read7BitChunkedInt(r io.Reader, n int) (int, error) {
|
||||
b, err := readBytes(r, n)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return get7BitChunkedInt(b), nil
|
||||
}
|
||||
|
||||
func readInt32LittleEndian(r io.Reader) (int, error) {
|
||||
var n int32
|
||||
err := binary.Read(r, binary.LittleEndian, &n)
|
||||
return int(n), err
|
||||
}
|
||||
255
vendor/github.com/dhowden/tag/vorbis.go
generated
vendored
Normal file
@@ -0,0 +1,255 @@
|
||||
// Copyright 2015, David Howden
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package tag
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func newMetadataVorbis() *metadataVorbis {
|
||||
return &metadataVorbis{
|
||||
c: make(map[string]string),
|
||||
}
|
||||
}
|
||||
|
||||
type metadataVorbis struct {
|
||||
c map[string]string // the vorbis comments
|
||||
p *Picture
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) readVorbisComment(r io.Reader) error {
|
||||
vendorLen, err := readInt32LittleEndian(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if vendorLen < 0 {
|
||||
return fmt.Errorf("invalid encoding: expected positive length, got %d", vendorLen)
|
||||
}
|
||||
|
||||
vendor, err := readString(r, vendorLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
m.c["vendor"] = vendor
|
||||
|
||||
commentsLen, err := readInt32LittleEndian(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for i := 0; i < commentsLen; i++ {
|
||||
l, err := readInt32LittleEndian(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s, err := readString(r, l)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
k, v, err := parseComment(s)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
m.c[strings.ToLower(k)] = v
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) readPictureBlock(r io.Reader) error {
|
||||
b, err := readInt(r, 4)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
pictureType, ok := pictureTypes[byte(b)]
|
||||
if !ok {
|
||||
return fmt.Errorf("invalid picture type: %v", b)
|
||||
}
|
||||
mimeLen, err := readInt(r, 4)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
mime, err := readString(r, mimeLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ext := ""
|
||||
switch mime {
|
||||
case "image/jpeg":
|
||||
ext = "jpg"
|
||||
case "image/png":
|
||||
ext = "png"
|
||||
case "image/gif":
|
||||
ext = "gif"
|
||||
}
|
||||
|
||||
descLen, err := readInt(r, 4)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
desc, err := readString(r, descLen)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// We skip width <32>, height <32>, colorDepth <32>, colorsUsed <32>
|
||||
_, err = readInt(r, 4) // width
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = readInt(r, 4) // height
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = readInt(r, 4) // color depth
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = readInt(r, 4) // colors used
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dataLen, err := readInt(r, 4)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
data := make([]byte, dataLen)
|
||||
_, err = io.ReadFull(r, data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
m.p = &Picture{
|
||||
Ext: ext,
|
||||
MIMEType: mime,
|
||||
Type: pictureType,
|
||||
Description: desc,
|
||||
Data: data,
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func parseComment(c string) (k, v string, err error) {
|
||||
kv := strings.SplitN(c, "=", 2)
|
||||
if len(kv) != 2 {
|
||||
err = errors.New("vorbis comment must contain '='")
|
||||
return
|
||||
}
|
||||
k = kv[0]
|
||||
v = kv[1]
|
||||
return
|
||||
}
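// Illustrative sketch, not part of the original library: a single Vorbis
// comment is a "KEY=value" string; readVorbisComment lower-cases the key
// before storing it.
func exampleParseComment() {
	k, v, err := parseComment("ARTIST=Radiohead")
	if err == nil {
		fmt.Println(strings.ToLower(k), v) // artist Radiohead
	}
}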
|
||||
|
||||
func (m *metadataVorbis) Format() Format {
|
||||
return VORBIS
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Raw() map[string]interface{} {
|
||||
raw := make(map[string]interface{}, len(m.c))
|
||||
for k, v := range m.c {
|
||||
raw[k] = v
|
||||
}
|
||||
return raw
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Title() string {
|
||||
return m.c["title"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Artist() string {
|
||||
// PERFORMER
|
||||
// The artist(s) who performed the work. In classical music this would be the
|
||||
// conductor, orchestra, soloists. In an audio book it would be the actor who
|
||||
// did the reading. In popular music this is typically the same as the ARTIST
|
||||
// and is omitted.
|
||||
if m.c["performer"] != "" {
|
||||
return m.c["performer"]
|
||||
}
|
||||
return m.c["artist"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Album() string {
|
||||
return m.c["album"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) AlbumArtist() string {
|
||||
// This field isn't actually included in the standard, though
|
||||
// it is commonly assigned to albumartist.
|
||||
return m.c["albumartist"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Composer() string {
|
||||
// ARTIST
|
||||
// The artist generally considered responsible for the work. In popular music
|
||||
// this is usually the performing band or singer. For classical music it would
|
||||
// be the composer. For an audio book it would be the author of the original text.
|
||||
if m.c["composer"] != "" {
|
||||
return m.c["composer"]
|
||||
}
|
||||
if m.c["performer"] == "" {
|
||||
return ""
|
||||
}
|
||||
return m.c["artist"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Genre() string {
|
||||
return m.c["genre"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Year() int {
|
||||
var dateFormat string
|
||||
|
||||
// The date needs to follow the ISO 8601 international standard https://en.wikipedia.org/wiki/ISO_8601
|
||||
// and the VorbisComment recommendation https://wiki.xiph.org/VorbisComment#Date_and_time
|
||||
switch len(m.c["date"]) {
|
||||
case 0:
|
||||
return 0
|
||||
case 4:
|
||||
dateFormat = "2006"
|
||||
case 7:
|
||||
dateFormat = "2006-01"
|
||||
case 10:
|
||||
dateFormat = "2006-01-02"
|
||||
default:
	// Unrecognised date layout: report no year instead of year 1 from a zero time.Time.
	return 0
}
|
||||
|
||||
t, _ := time.Parse(dateFormat, m.c["date"])
|
||||
return t.Year()
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Track() (int, int) {
|
||||
x, _ := strconv.Atoi(m.c["tracknumber"])
|
||||
// https://wiki.xiph.org/Field_names
|
||||
n, _ := strconv.Atoi(m.c["tracktotal"])
|
||||
return x, n
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Disc() (int, int) {
|
||||
// https://wiki.xiph.org/Field_names
|
||||
x, _ := strconv.Atoi(m.c["discnumber"])
|
||||
n, _ := strconv.Atoi(m.c["disctotal"])
|
||||
return x, n
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Lyrics() string {
|
||||
return m.c["lyrics"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Comment() string {
|
||||
if m.c["comment"] != "" {
|
||||
return m.c["comment"]
|
||||
}
|
||||
return m.c["description"]
|
||||
}
|
||||
|
||||
func (m *metadataVorbis) Picture() *Picture {
|
||||
return m.p
|
||||
}
|
||||
11
vendor/github.com/jinzhu/gorm/.codeclimate.yml
generated
vendored
Normal file
@@ -0,0 +1,11 @@
|
||||
---
|
||||
engines:
|
||||
gofmt:
|
||||
enabled: true
|
||||
govet:
|
||||
enabled: true
|
||||
golint:
|
||||
enabled: true
|
||||
ratings:
|
||||
paths:
|
||||
- "**.go"
|
||||
2
vendor/github.com/jinzhu/gorm/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,2 @@
|
||||
documents
|
||||
_book
|
||||
21
vendor/github.com/jinzhu/gorm/License
generated
vendored
Normal file
@@ -0,0 +1,21 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2013-NOW Jinzhu <wosmvp@gmail.com>
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
40
vendor/github.com/jinzhu/gorm/README.md
generated
vendored
Normal file
@@ -0,0 +1,40 @@
|
||||
# GORM
|
||||
|
||||
The fantastic ORM library for Golang, aims to be developer friendly.
|
||||
|
||||
[](https://goreportcard.com/report/github.com/jinzhu/gorm)
|
||||
[](https://app.wercker.com/project/byKey/8596cace912c9947dd9c8542ecc8cb8b)
|
||||
[](https://gitter.im/jinzhu/gorm?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
|
||||
[](https://opencollective.com/gorm)
|
||||
[](https://opencollective.com/gorm)
|
||||
[](http://opensource.org/licenses/MIT)
|
||||
[](https://godoc.org/github.com/jinzhu/gorm)
|
||||
|
||||
## Overview
|
||||
|
||||
* Full-Featured ORM (almost)
|
||||
* Associations (Has One, Has Many, Belongs To, Many To Many, Polymorphism)
|
||||
* Hooks (Before/After Create/Save/Update/Delete/Find)
|
||||
* Preloading (eager loading)
|
||||
* Transactions
|
||||
* Composite Primary Key
|
||||
* SQL Builder
|
||||
* Auto Migrations
|
||||
* Logger
|
||||
* Extendable, write Plugins based on GORM callbacks
|
||||
* Every feature comes with tests
|
||||
* Developer Friendly
|
||||
|
||||
## Getting Started
|
||||
|
||||
* GORM Guides [http://gorm.io](http://gorm.io)
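A minimal usage sketch, assuming the SQLite dialect (swap the dialect import and DSN for your database):

```go
package main

import (
	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/sqlite"
)

type Product struct {
	gorm.Model
	Code  string
	Price uint
}

func main() {
	db, err := gorm.Open("sqlite3", "test.db")
	if err != nil {
		panic("failed to connect database")
	}
	defer db.Close()

	// Migrate the schema, then create, read and update a record.
	db.AutoMigrate(&Product{})
	db.Create(&Product{Code: "L1212", Price: 1000})

	var product Product
	db.First(&product, "code = ?", "L1212")
	db.Model(&product).Update("Price", 2000)
}
```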
|
||||
|
||||
## Contributing
|
||||
|
||||
[You can help to deliver a better GORM, check out things you can do](http://gorm.io/contribute.html)
|
||||
|
||||
## License
|
||||
|
||||
© Jinzhu, 2013~time.Now
|
||||
|
||||
Released under the [MIT License](https://github.com/jinzhu/gorm/blob/master/License)
|
||||
377
vendor/github.com/jinzhu/gorm/association.go
generated
vendored
Normal file
@@ -0,0 +1,377 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// Association Mode contains helper methods for working with a model's relationships.
|
||||
type Association struct {
|
||||
Error error
|
||||
scope *Scope
|
||||
column string
|
||||
field *Field
|
||||
}
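// Illustrative usage sketch, not part of the original source. The User and
// Language models below are hypothetical; they only show how Association Mode
// is reached from a *DB handle:
//
//	type Language struct {
//		gorm.Model
//		Name string
//	}
//
//	type User struct {
//		gorm.Model
//		Languages []Language `gorm:"many2many:user_languages;"`
//	}
//
//	db.Model(&user).Association("Languages").Append(Language{Name: "DE"})
//	count := db.Model(&user).Association("Languages").Count()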
|
||||
|
||||
// Find finds all related associations.
|
||||
func (association *Association) Find(value interface{}) *Association {
|
||||
association.scope.related(value, association.column)
|
||||
return association.setErr(association.scope.db.Error)
|
||||
}
|
||||
|
||||
// Append appends new associations for many2many and has_many relationships, and replaces the current association for has_one and belongs_to.
|
||||
func (association *Association) Append(values ...interface{}) *Association {
|
||||
if association.Error != nil {
|
||||
return association
|
||||
}
|
||||
|
||||
if relationship := association.field.Relationship; relationship.Kind == "has_one" {
|
||||
return association.Replace(values...)
|
||||
}
|
||||
return association.saveAssociations(values...)
|
||||
}
|
||||
|
||||
// Replace replaces the current associations with the new ones.
|
||||
func (association *Association) Replace(values ...interface{}) *Association {
|
||||
if association.Error != nil {
|
||||
return association
|
||||
}
|
||||
|
||||
var (
|
||||
relationship = association.field.Relationship
|
||||
scope = association.scope
|
||||
field = association.field.Field
|
||||
newDB = scope.NewDB()
|
||||
)
|
||||
|
||||
// Append new values
|
||||
association.field.Set(reflect.Zero(association.field.Field.Type()))
|
||||
association.saveAssociations(values...)
|
||||
|
||||
// Belongs To
|
||||
if relationship.Kind == "belongs_to" {
|
||||
// Set foreign key to be null when clearing value (length equals 0)
|
||||
if len(values) == 0 {
|
||||
// Set foreign key to be nil
|
||||
var foreignKeyMap = map[string]interface{}{}
|
||||
for _, foreignKey := range relationship.ForeignDBNames {
|
||||
foreignKeyMap[foreignKey] = nil
|
||||
}
|
||||
association.setErr(newDB.Model(scope.Value).UpdateColumn(foreignKeyMap).Error)
|
||||
}
|
||||
} else {
|
||||
// Polymorphic Relations
|
||||
if relationship.PolymorphicDBName != "" {
|
||||
newDB = newDB.Where(fmt.Sprintf("%v = ?", scope.Quote(relationship.PolymorphicDBName)), relationship.PolymorphicValue)
|
||||
}
|
||||
|
||||
// Delete Relations except new created
|
||||
if len(values) > 0 {
|
||||
var associationForeignFieldNames, associationForeignDBNames []string
|
||||
if relationship.Kind == "many_to_many" {
|
||||
// if many to many relations, get association fields name from association foreign keys
|
||||
associationScope := scope.New(reflect.New(field.Type()).Interface())
|
||||
for idx, dbName := range relationship.AssociationForeignFieldNames {
|
||||
if field, ok := associationScope.FieldByName(dbName); ok {
|
||||
associationForeignFieldNames = append(associationForeignFieldNames, field.Name)
|
||||
associationForeignDBNames = append(associationForeignDBNames, relationship.AssociationForeignDBNames[idx])
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// If has one/many relations, use primary keys
|
||||
for _, field := range scope.New(reflect.New(field.Type()).Interface()).PrimaryFields() {
|
||||
associationForeignFieldNames = append(associationForeignFieldNames, field.Name)
|
||||
associationForeignDBNames = append(associationForeignDBNames, field.DBName)
|
||||
}
|
||||
}
|
||||
|
||||
newPrimaryKeys := scope.getColumnAsArray(associationForeignFieldNames, field.Interface())
|
||||
|
||||
if len(newPrimaryKeys) > 0 {
|
||||
sql := fmt.Sprintf("%v NOT IN (%v)", toQueryCondition(scope, associationForeignDBNames), toQueryMarks(newPrimaryKeys))
|
||||
newDB = newDB.Where(sql, toQueryValues(newPrimaryKeys)...)
|
||||
}
|
||||
}
|
||||
|
||||
if relationship.Kind == "many_to_many" {
|
||||
// if many to many relations, delete related relations from join table
|
||||
var sourceForeignFieldNames []string
|
||||
|
||||
for _, dbName := range relationship.ForeignFieldNames {
|
||||
if field, ok := scope.FieldByName(dbName); ok {
|
||||
sourceForeignFieldNames = append(sourceForeignFieldNames, field.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if sourcePrimaryKeys := scope.getColumnAsArray(sourceForeignFieldNames, scope.Value); len(sourcePrimaryKeys) > 0 {
|
||||
newDB = newDB.Where(fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relationship.ForeignDBNames), toQueryMarks(sourcePrimaryKeys)), toQueryValues(sourcePrimaryKeys)...)
|
||||
|
||||
association.setErr(relationship.JoinTableHandler.Delete(relationship.JoinTableHandler, newDB))
|
||||
}
|
||||
} else if relationship.Kind == "has_one" || relationship.Kind == "has_many" {
|
||||
// has_one or has_many relations, set foreign key to be nil (TODO or delete them?)
|
||||
var foreignKeyMap = map[string]interface{}{}
|
||||
for idx, foreignKey := range relationship.ForeignDBNames {
|
||||
foreignKeyMap[foreignKey] = nil
|
||||
if field, ok := scope.FieldByName(relationship.AssociationForeignFieldNames[idx]); ok {
|
||||
newDB = newDB.Where(fmt.Sprintf("%v = ?", scope.Quote(foreignKey)), field.Field.Interface())
|
||||
}
|
||||
}
|
||||
|
||||
fieldValue := reflect.New(association.field.Field.Type()).Interface()
|
||||
association.setErr(newDB.Model(fieldValue).UpdateColumn(foreignKeyMap).Error)
|
||||
}
|
||||
}
|
||||
return association
|
||||
}
|
||||
|
||||
// Delete removes the relationship between the source and the passed arguments, but won't delete those arguments.
|
||||
func (association *Association) Delete(values ...interface{}) *Association {
|
||||
if association.Error != nil {
|
||||
return association
|
||||
}
|
||||
|
||||
var (
|
||||
relationship = association.field.Relationship
|
||||
scope = association.scope
|
||||
field = association.field.Field
|
||||
newDB = scope.NewDB()
|
||||
)
|
||||
|
||||
if len(values) == 0 {
|
||||
return association
|
||||
}
|
||||
|
||||
var deletingResourcePrimaryFieldNames, deletingResourcePrimaryDBNames []string
|
||||
for _, field := range scope.New(reflect.New(field.Type()).Interface()).PrimaryFields() {
|
||||
deletingResourcePrimaryFieldNames = append(deletingResourcePrimaryFieldNames, field.Name)
|
||||
deletingResourcePrimaryDBNames = append(deletingResourcePrimaryDBNames, field.DBName)
|
||||
}
|
||||
|
||||
deletingPrimaryKeys := scope.getColumnAsArray(deletingResourcePrimaryFieldNames, values...)
|
||||
|
||||
if relationship.Kind == "many_to_many" {
|
||||
// source value's foreign keys
|
||||
for idx, foreignKey := range relationship.ForeignDBNames {
|
||||
if field, ok := scope.FieldByName(relationship.ForeignFieldNames[idx]); ok {
|
||||
newDB = newDB.Where(fmt.Sprintf("%v = ?", scope.Quote(foreignKey)), field.Field.Interface())
|
||||
}
|
||||
}
|
||||
|
||||
// get association's foreign fields name
|
||||
var associationScope = scope.New(reflect.New(field.Type()).Interface())
|
||||
var associationForeignFieldNames []string
|
||||
for _, associationDBName := range relationship.AssociationForeignFieldNames {
|
||||
if field, ok := associationScope.FieldByName(associationDBName); ok {
|
||||
associationForeignFieldNames = append(associationForeignFieldNames, field.Name)
|
||||
}
|
||||
}
|
||||
|
||||
// association value's foreign keys
|
||||
deletingPrimaryKeys := scope.getColumnAsArray(associationForeignFieldNames, values...)
|
||||
sql := fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relationship.AssociationForeignDBNames), toQueryMarks(deletingPrimaryKeys))
|
||||
newDB = newDB.Where(sql, toQueryValues(deletingPrimaryKeys)...)
|
||||
|
||||
association.setErr(relationship.JoinTableHandler.Delete(relationship.JoinTableHandler, newDB))
|
||||
} else {
|
||||
var foreignKeyMap = map[string]interface{}{}
|
||||
for _, foreignKey := range relationship.ForeignDBNames {
|
||||
foreignKeyMap[foreignKey] = nil
|
||||
}
|
||||
|
||||
if relationship.Kind == "belongs_to" {
|
||||
// find with deleting relation's foreign keys
|
||||
primaryKeys := scope.getColumnAsArray(relationship.AssociationForeignFieldNames, values...)
|
||||
newDB = newDB.Where(
|
||||
fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relationship.ForeignDBNames), toQueryMarks(primaryKeys)),
|
||||
toQueryValues(primaryKeys)...,
|
||||
)
|
||||
|
||||
// set foreign key to be null if there are some records affected
|
||||
modelValue := reflect.New(scope.GetModelStruct().ModelType).Interface()
|
||||
if results := newDB.Model(modelValue).UpdateColumn(foreignKeyMap); results.Error == nil {
|
||||
if results.RowsAffected > 0 {
|
||||
scope.updatedAttrsWithValues(foreignKeyMap)
|
||||
}
|
||||
} else {
|
||||
association.setErr(results.Error)
|
||||
}
|
||||
} else if relationship.Kind == "has_one" || relationship.Kind == "has_many" {
|
||||
// find all relations
|
||||
primaryKeys := scope.getColumnAsArray(relationship.AssociationForeignFieldNames, scope.Value)
|
||||
newDB = newDB.Where(
|
||||
fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relationship.ForeignDBNames), toQueryMarks(primaryKeys)),
|
||||
toQueryValues(primaryKeys)...,
|
||||
)
|
||||
|
||||
// only include those deleting relations
|
||||
newDB = newDB.Where(
|
||||
fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, deletingResourcePrimaryDBNames), toQueryMarks(deletingPrimaryKeys)),
|
||||
toQueryValues(deletingPrimaryKeys)...,
|
||||
)
|
||||
|
||||
// set matched relation's foreign key to be null
|
||||
fieldValue := reflect.New(association.field.Field.Type()).Interface()
|
||||
association.setErr(newDB.Model(fieldValue).UpdateColumn(foreignKeyMap).Error)
|
||||
}
|
||||
}
|
||||
|
||||
// Remove deleted records from source's field
|
||||
if association.Error == nil {
|
||||
if field.Kind() == reflect.Slice {
|
||||
leftValues := reflect.Zero(field.Type())
|
||||
|
||||
for i := 0; i < field.Len(); i++ {
|
||||
reflectValue := field.Index(i)
|
||||
primaryKey := scope.getColumnAsArray(deletingResourcePrimaryFieldNames, reflectValue.Interface())[0]
|
||||
var isDeleted = false
|
||||
for _, pk := range deletingPrimaryKeys {
|
||||
if equalAsString(primaryKey, pk) {
|
||||
isDeleted = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !isDeleted {
|
||||
leftValues = reflect.Append(leftValues, reflectValue)
|
||||
}
|
||||
}
|
||||
|
||||
association.field.Set(leftValues)
|
||||
} else if field.Kind() == reflect.Struct {
|
||||
primaryKey := scope.getColumnAsArray(deletingResourcePrimaryFieldNames, field.Interface())[0]
|
||||
for _, pk := range deletingPrimaryKeys {
|
||||
if equalAsString(primaryKey, pk) {
|
||||
association.field.Set(reflect.Zero(field.Type()))
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return association
|
||||
}
|
||||
|
||||
// Clear removes the relationship between the source and its current associations; it won't delete those associations
|
||||
func (association *Association) Clear() *Association {
|
||||
return association.Replace()
|
||||
}
|
||||
|
||||
// Count returns the count of current associations
|
||||
func (association *Association) Count() int {
|
||||
var (
|
||||
count = 0
|
||||
relationship = association.field.Relationship
|
||||
scope = association.scope
|
||||
fieldValue = association.field.Field.Interface()
|
||||
query = scope.DB()
|
||||
)
|
||||
|
||||
switch relationship.Kind {
|
||||
case "many_to_many":
|
||||
query = relationship.JoinTableHandler.JoinWith(relationship.JoinTableHandler, query, scope.Value)
|
||||
case "has_many", "has_one":
|
||||
primaryKeys := scope.getColumnAsArray(relationship.AssociationForeignFieldNames, scope.Value)
|
||||
query = query.Where(
|
||||
fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relationship.ForeignDBNames), toQueryMarks(primaryKeys)),
|
||||
toQueryValues(primaryKeys)...,
|
||||
)
|
||||
case "belongs_to":
|
||||
primaryKeys := scope.getColumnAsArray(relationship.ForeignFieldNames, scope.Value)
|
||||
query = query.Where(
|
||||
fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relationship.AssociationForeignDBNames), toQueryMarks(primaryKeys)),
|
||||
toQueryValues(primaryKeys)...,
|
||||
)
|
||||
}
|
||||
|
||||
if relationship.PolymorphicType != "" {
|
||||
query = query.Where(
|
||||
fmt.Sprintf("%v.%v = ?", scope.New(fieldValue).QuotedTableName(), scope.Quote(relationship.PolymorphicDBName)),
|
||||
relationship.PolymorphicValue,
|
||||
)
|
||||
}
|
||||
|
||||
if err := query.Model(fieldValue).Count(&count).Error; err != nil {
|
||||
association.Error = err
|
||||
}
|
||||
return count
|
||||
}
|
||||
|
||||
// saveAssociations saves the passed values as associations
|
||||
func (association *Association) saveAssociations(values ...interface{}) *Association {
|
||||
var (
|
||||
scope = association.scope
|
||||
field = association.field
|
||||
relationship = field.Relationship
|
||||
)
|
||||
|
||||
saveAssociation := func(reflectValue reflect.Value) {
|
||||
// value has to be a pointer
|
||||
if reflectValue.Kind() != reflect.Ptr {
|
||||
reflectPtr := reflect.New(reflectValue.Type())
|
||||
reflectPtr.Elem().Set(reflectValue)
|
||||
reflectValue = reflectPtr
|
||||
}
|
||||
|
||||
// value has to be saved for many2many relations
|
||||
if relationship.Kind == "many_to_many" {
|
||||
if scope.New(reflectValue.Interface()).PrimaryKeyZero() {
|
||||
association.setErr(scope.NewDB().Save(reflectValue.Interface()).Error)
|
||||
}
|
||||
}
|
||||
|
||||
// Assign Fields
|
||||
var fieldType = field.Field.Type()
|
||||
var setFieldBackToValue, setSliceFieldBackToValue bool
|
||||
if reflectValue.Type().AssignableTo(fieldType) {
|
||||
field.Set(reflectValue)
|
||||
} else if reflectValue.Type().Elem().AssignableTo(fieldType) {
|
||||
// if the field's type is a struct, we need to set the value back to the argument after save
|
||||
setFieldBackToValue = true
|
||||
field.Set(reflectValue.Elem())
|
||||
} else if fieldType.Kind() == reflect.Slice {
|
||||
if reflectValue.Type().AssignableTo(fieldType.Elem()) {
|
||||
field.Set(reflect.Append(field.Field, reflectValue))
|
||||
} else if reflectValue.Type().Elem().AssignableTo(fieldType.Elem()) {
|
||||
// if the field's type is a slice of structs, we need to set the value back to the argument after save
|
||||
setSliceFieldBackToValue = true
|
||||
field.Set(reflect.Append(field.Field, reflectValue.Elem()))
|
||||
}
|
||||
}
|
||||
|
||||
if relationship.Kind == "many_to_many" {
|
||||
association.setErr(relationship.JoinTableHandler.Add(relationship.JoinTableHandler, scope.NewDB(), scope.Value, reflectValue.Interface()))
|
||||
} else {
|
||||
association.setErr(scope.NewDB().Select(field.Name).Save(scope.Value).Error)
|
||||
|
||||
if setFieldBackToValue {
|
||||
reflectValue.Elem().Set(field.Field)
|
||||
} else if setSliceFieldBackToValue {
|
||||
reflectValue.Elem().Set(field.Field.Index(field.Field.Len() - 1))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for _, value := range values {
|
||||
reflectValue := reflect.ValueOf(value)
|
||||
indirectReflectValue := reflect.Indirect(reflectValue)
|
||||
if indirectReflectValue.Kind() == reflect.Struct {
|
||||
saveAssociation(reflectValue)
|
||||
} else if indirectReflectValue.Kind() == reflect.Slice {
|
||||
for i := 0; i < indirectReflectValue.Len(); i++ {
|
||||
saveAssociation(indirectReflectValue.Index(i))
|
||||
}
|
||||
} else {
|
||||
association.setErr(errors.New("invalid value type"))
|
||||
}
|
||||
}
|
||||
return association
|
||||
}
|
||||
|
||||
// setErr sets the error when err is not nil and returns the Association.
|
||||
func (association *Association) setErr(err error) *Association {
|
||||
if err != nil {
|
||||
association.Error = err
|
||||
}
|
||||
return association
|
||||
}
|
||||
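Not part of the vendored source: a minimal sketch of how the Association API defined above is driven from application code, assuming a hypothetical `User` model with a many2many `Languages` field and an open `*gorm.DB` (package clause and imports omitted).

```go
// dropLanguage removes one language from a user's Languages association.
// Association.Delete only removes the relationship (join rows / foreign keys),
// never the associated record itself.
func dropLanguage(db *gorm.DB, userID uint, name string) error {
	var user User
	var lang Language
	if err := db.First(&user, userID).Error; err != nil {
		return err
	}
	if err := db.Where("name = ?", name).First(&lang).Error; err != nil {
		return err
	}

	assoc := db.Model(&user).Association("Languages")
	log.Println("languages before:", assoc.Count()) // Count builds its query per relationship kind
	return assoc.Delete(lang).Error                 // Clear() would drop all of them instead
}
```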
242
vendor/github.com/jinzhu/gorm/callback.go
generated
vendored
Normal file
@@ -0,0 +1,242 @@
|
||||
package gorm
|
||||
|
||||
import "log"
|
||||
|
||||
// DefaultCallback default callbacks defined by gorm
|
||||
var DefaultCallback = &Callback{}
|
||||
|
||||
// Callback is a struct that contains all CRUD callbacks
|
||||
// Field `creates` contains callbacks that will be called when creating an object
// Field `updates` contains callbacks that will be called when updating an object
// Field `deletes` contains callbacks that will be called when deleting an object
// Field `queries` contains callbacks that will be called when querying an object with query methods like Find, First, Related, Association...
// Field `rowQueries` contains callbacks that will be called when querying an object with Row, Rows...
// Field `processors` contains all callback processors; it is used to generate the above callbacks in order
|
||||
type Callback struct {
|
||||
creates []*func(scope *Scope)
|
||||
updates []*func(scope *Scope)
|
||||
deletes []*func(scope *Scope)
|
||||
queries []*func(scope *Scope)
|
||||
rowQueries []*func(scope *Scope)
|
||||
processors []*CallbackProcessor
|
||||
}
|
||||
|
||||
// CallbackProcessor contains callback information
|
||||
type CallbackProcessor struct {
|
||||
name string // current callback's name
|
||||
before string // register current callback before a callback
|
||||
after string // register current callback after a callback
|
||||
replace bool // replace callbacks with same name
|
||||
remove bool // delete callbacks with same name
|
||||
kind string // callback type: create, update, delete, query, row_query
|
||||
processor *func(scope *Scope) // callback handler
|
||||
parent *Callback
|
||||
}
|
||||
|
||||
func (c *Callback) clone() *Callback {
|
||||
return &Callback{
|
||||
creates: c.creates,
|
||||
updates: c.updates,
|
||||
deletes: c.deletes,
|
||||
queries: c.queries,
|
||||
rowQueries: c.rowQueries,
|
||||
processors: c.processors,
|
||||
}
|
||||
}
|
||||
|
||||
// Create could be used to register callbacks for creating object
|
||||
// db.Callback().Create().After("gorm:create").Register("plugin:run_after_create", func(*Scope) {
|
||||
// // business logic
|
||||
// ...
|
||||
//
|
||||
// set error if something went wrong; this will roll back the create
|
||||
// scope.Err(errors.New("error"))
|
||||
// })
|
||||
func (c *Callback) Create() *CallbackProcessor {
|
||||
return &CallbackProcessor{kind: "create", parent: c}
|
||||
}
|
||||
|
||||
// Update could be used to register callbacks for updating object, refer `Create` for usage
|
||||
func (c *Callback) Update() *CallbackProcessor {
|
||||
return &CallbackProcessor{kind: "update", parent: c}
|
||||
}
|
||||
|
||||
// Delete could be used to register callbacks for deleting object, refer `Create` for usage
|
||||
func (c *Callback) Delete() *CallbackProcessor {
|
||||
return &CallbackProcessor{kind: "delete", parent: c}
|
||||
}
|
||||
|
||||
// Query could be used to register callbacks for querying objects with query methods like `Find`, `First`, `Related`, `Association`...
|
||||
// Refer `Create` for usage
|
||||
func (c *Callback) Query() *CallbackProcessor {
|
||||
return &CallbackProcessor{kind: "query", parent: c}
|
||||
}
|
||||
|
||||
// RowQuery could be used to register callbacks for querying objects with `Row`, `Rows`, refer `Create` for usage
|
||||
func (c *Callback) RowQuery() *CallbackProcessor {
|
||||
return &CallbackProcessor{kind: "row_query", parent: c}
|
||||
}
|
||||
|
||||
// After inserts a new callback after callback `callbackName`, refer to `Callback.Create` for usage
|
||||
func (cp *CallbackProcessor) After(callbackName string) *CallbackProcessor {
|
||||
cp.after = callbackName
|
||||
return cp
|
||||
}
|
||||
|
||||
// Before inserts a new callback before callback `callbackName`, refer to `Callback.Create` for usage
|
||||
func (cp *CallbackProcessor) Before(callbackName string) *CallbackProcessor {
|
||||
cp.before = callbackName
|
||||
return cp
|
||||
}
|
||||
|
||||
// Register a new callback, refer `Callbacks.Create`
|
||||
func (cp *CallbackProcessor) Register(callbackName string, callback func(scope *Scope)) {
|
||||
if cp.kind == "row_query" {
|
||||
if cp.before == "" && cp.after == "" && callbackName != "gorm:row_query" {
|
||||
log.Printf("Registing RowQuery callback %v without specify order with Before(), After(), applying Before('gorm:row_query') by default for compatibility...\n", callbackName)
|
||||
cp.before = "gorm:row_query"
|
||||
}
|
||||
}
|
||||
|
||||
cp.name = callbackName
|
||||
cp.processor = &callback
|
||||
cp.parent.processors = append(cp.parent.processors, cp)
|
||||
cp.parent.reorder()
|
||||
}
|
||||
|
||||
// Remove a registered callback
|
||||
// db.Callback().Create().Remove("gorm:update_time_stamp_when_create")
|
||||
func (cp *CallbackProcessor) Remove(callbackName string) {
|
||||
log.Printf("[info] removing callback `%v` from %v\n", callbackName, fileWithLineNum())
|
||||
cp.name = callbackName
|
||||
cp.remove = true
|
||||
cp.parent.processors = append(cp.parent.processors, cp)
|
||||
cp.parent.reorder()
|
||||
}
|
||||
|
||||
// Replace a registered callback with new callback
|
||||
// db.Callback().Create().Replace("gorm:update_time_stamp_when_create", func(*Scope) {
|
||||
// scope.SetColumn("Created", now)
|
||||
// scope.SetColumn("Updated", now)
|
||||
// })
|
||||
func (cp *CallbackProcessor) Replace(callbackName string, callback func(scope *Scope)) {
|
||||
log.Printf("[info] replacing callback `%v` from %v\n", callbackName, fileWithLineNum())
|
||||
cp.name = callbackName
|
||||
cp.processor = &callback
|
||||
cp.replace = true
|
||||
cp.parent.processors = append(cp.parent.processors, cp)
|
||||
cp.parent.reorder()
|
||||
}
|
||||
|
||||
// Get registered callback
|
||||
// db.Callback().Create().Get("gorm:create")
|
||||
func (cp *CallbackProcessor) Get(callbackName string) (callback func(scope *Scope)) {
|
||||
for _, p := range cp.parent.processors {
|
||||
if p.name == callbackName && p.kind == cp.kind && !cp.remove {
|
||||
return *p.processor
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// getRIndex gets the rightmost index of str in a string slice
|
||||
func getRIndex(strs []string, str string) int {
|
||||
for i := len(strs) - 1; i >= 0; i-- {
|
||||
if strs[i] == str {
|
||||
return i
|
||||
}
|
||||
}
|
||||
return -1
|
||||
}
|
||||
|
||||
// sortProcessors sorts callback processors based on their before, after, remove and replace settings
|
||||
func sortProcessors(cps []*CallbackProcessor) []*func(scope *Scope) {
|
||||
var (
|
||||
allNames, sortedNames []string
|
||||
sortCallbackProcessor func(c *CallbackProcessor)
|
||||
)
|
||||
|
||||
for _, cp := range cps {
|
||||
// show a warning message if the callback name already exists
|
||||
if index := getRIndex(allNames, cp.name); index > -1 && !cp.replace && !cp.remove {
|
||||
log.Printf("[warning] duplicated callback `%v` from %v\n", cp.name, fileWithLineNum())
|
||||
}
|
||||
allNames = append(allNames, cp.name)
|
||||
}
|
||||
|
||||
sortCallbackProcessor = func(c *CallbackProcessor) {
|
||||
if getRIndex(sortedNames, c.name) == -1 { // if not sorted
|
||||
if c.before != "" { // if defined before callback
|
||||
if index := getRIndex(sortedNames, c.before); index != -1 {
|
||||
// if the before callback is already sorted, insert the current callback just before it
|
||||
sortedNames = append(sortedNames[:index], append([]string{c.name}, sortedNames[index:]...)...)
|
||||
} else if index := getRIndex(allNames, c.before); index != -1 {
|
||||
// if the before callback exists but hasn't been sorted yet, append the current callback to the end, then sort the before callback
|
||||
sortedNames = append(sortedNames, c.name)
|
||||
sortCallbackProcessor(cps[index])
|
||||
}
|
||||
}
|
||||
|
||||
if c.after != "" { // if defined after callback
|
||||
if index := getRIndex(sortedNames, c.after); index != -1 {
|
||||
// if the after callback is already sorted, insert the current callback just after it
|
||||
sortedNames = append(sortedNames[:index+1], append([]string{c.name}, sortedNames[index+1:]...)...)
|
||||
} else if index := getRIndex(allNames, c.after); index != -1 {
|
||||
// if the after callback exists but hasn't been sorted yet
|
||||
cp := cps[index]
|
||||
// set after callback's before callback to current callback
|
||||
if cp.before == "" {
|
||||
cp.before = c.name
|
||||
}
|
||||
sortCallbackProcessor(cp)
|
||||
}
|
||||
}
|
||||
|
||||
// if the current callback hasn't been sorted yet, append it to the end
|
||||
if getRIndex(sortedNames, c.name) == -1 {
|
||||
sortedNames = append(sortedNames, c.name)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for _, cp := range cps {
|
||||
sortCallbackProcessor(cp)
|
||||
}
|
||||
|
||||
var sortedFuncs []*func(scope *Scope)
|
||||
for _, name := range sortedNames {
|
||||
if index := getRIndex(allNames, name); !cps[index].remove {
|
||||
sortedFuncs = append(sortedFuncs, cps[index].processor)
|
||||
}
|
||||
}
|
||||
|
||||
return sortedFuncs
|
||||
}
|
||||
|
||||
// reorder all registered processors, and reset CRUD callbacks
|
||||
func (c *Callback) reorder() {
|
||||
var creates, updates, deletes, queries, rowQueries []*CallbackProcessor
|
||||
|
||||
for _, processor := range c.processors {
|
||||
if processor.name != "" {
|
||||
switch processor.kind {
|
||||
case "create":
|
||||
creates = append(creates, processor)
|
||||
case "update":
|
||||
updates = append(updates, processor)
|
||||
case "delete":
|
||||
deletes = append(deletes, processor)
|
||||
case "query":
|
||||
queries = append(queries, processor)
|
||||
case "row_query":
|
||||
rowQueries = append(rowQueries, processor)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
c.creates = sortProcessors(creates)
|
||||
c.updates = sortProcessors(updates)
|
||||
c.deletes = sortProcessors(deletes)
|
||||
c.queries = sortProcessors(queries)
|
||||
c.rowQueries = sortProcessors(rowQueries)
|
||||
}
|
||||
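A hedged sketch of how the registration API above is typically used during application start-up; `plugin:log_create` and the replacement behaviour are illustrative, while the `gorm:*` names are the callbacks registered by the callback_* files that follow.

```go
func setupCallbacks(db *gorm.DB) {
	// run extra work after gorm's own create callback
	db.Callback().Create().After("gorm:create").Register("plugin:log_create", func(scope *gorm.Scope) {
		log.Printf("created a row in %v (primary key %v)", scope.TableName(), scope.PrimaryKeyValue())
	})

	// swap a built-in callback for a custom one, keyed by its registered name
	db.Callback().Update().Replace("gorm:update_time_stamp", func(scope *gorm.Scope) {
		scope.SetColumn("UpdatedAt", time.Now().UTC())
	})

	// or drop a built-in callback entirely
	db.Callback().Create().Remove("gorm:force_reload_after_create")
}
```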
164
vendor/github.com/jinzhu/gorm/callback_create.go
generated
vendored
Normal file
@@ -0,0 +1,164 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Define callbacks for creating
|
||||
func init() {
|
||||
DefaultCallback.Create().Register("gorm:begin_transaction", beginTransactionCallback)
|
||||
DefaultCallback.Create().Register("gorm:before_create", beforeCreateCallback)
|
||||
DefaultCallback.Create().Register("gorm:save_before_associations", saveBeforeAssociationsCallback)
|
||||
DefaultCallback.Create().Register("gorm:update_time_stamp", updateTimeStampForCreateCallback)
|
||||
DefaultCallback.Create().Register("gorm:create", createCallback)
|
||||
DefaultCallback.Create().Register("gorm:force_reload_after_create", forceReloadAfterCreateCallback)
|
||||
DefaultCallback.Create().Register("gorm:save_after_associations", saveAfterAssociationsCallback)
|
||||
DefaultCallback.Create().Register("gorm:after_create", afterCreateCallback)
|
||||
DefaultCallback.Create().Register("gorm:commit_or_rollback_transaction", commitOrRollbackTransactionCallback)
|
||||
}
|
||||
|
||||
// beforeCreateCallback will invoke `BeforeSave`, `BeforeCreate` method before creating
|
||||
func beforeCreateCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("BeforeSave")
|
||||
}
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("BeforeCreate")
|
||||
}
|
||||
}
|
||||
|
||||
// updateTimeStampForCreateCallback will set `CreatedAt`, `UpdatedAt` when creating
|
||||
func updateTimeStampForCreateCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
now := NowFunc()
|
||||
|
||||
if createdAtField, ok := scope.FieldByName("CreatedAt"); ok {
|
||||
if createdAtField.IsBlank {
|
||||
createdAtField.Set(now)
|
||||
}
|
||||
}
|
||||
|
||||
if updatedAtField, ok := scope.FieldByName("UpdatedAt"); ok {
|
||||
if updatedAtField.IsBlank {
|
||||
updatedAtField.Set(now)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// createCallback is the callback used to insert data into the database
|
||||
func createCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
defer scope.trace(NowFunc())
|
||||
|
||||
var (
|
||||
columns, placeholders []string
|
||||
blankColumnsWithDefaultValue []string
|
||||
)
|
||||
|
||||
for _, field := range scope.Fields() {
|
||||
if scope.changeableField(field) {
|
||||
if field.IsNormal && !field.IsIgnored {
|
||||
if field.IsBlank && field.HasDefaultValue {
|
||||
blankColumnsWithDefaultValue = append(blankColumnsWithDefaultValue, scope.Quote(field.DBName))
|
||||
scope.InstanceSet("gorm:blank_columns_with_default_value", blankColumnsWithDefaultValue)
|
||||
} else if !field.IsPrimaryKey || !field.IsBlank {
|
||||
columns = append(columns, scope.Quote(field.DBName))
|
||||
placeholders = append(placeholders, scope.AddToVars(field.Field.Interface()))
|
||||
}
|
||||
} else if field.Relationship != nil && field.Relationship.Kind == "belongs_to" {
|
||||
for _, foreignKey := range field.Relationship.ForeignDBNames {
|
||||
if foreignField, ok := scope.FieldByName(foreignKey); ok && !scope.changeableField(foreignField) {
|
||||
columns = append(columns, scope.Quote(foreignField.DBName))
|
||||
placeholders = append(placeholders, scope.AddToVars(foreignField.Field.Interface()))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
returningColumn = "*"
|
||||
quotedTableName = scope.QuotedTableName()
|
||||
primaryField = scope.PrimaryField()
|
||||
extraOption string
|
||||
)
|
||||
|
||||
if str, ok := scope.Get("gorm:insert_option"); ok {
|
||||
extraOption = fmt.Sprint(str)
|
||||
}
|
||||
|
||||
if primaryField != nil {
|
||||
returningColumn = scope.Quote(primaryField.DBName)
|
||||
}
|
||||
|
||||
lastInsertIDReturningSuffix := scope.Dialect().LastInsertIDReturningSuffix(quotedTableName, returningColumn)
|
||||
|
||||
if len(columns) == 0 {
|
||||
scope.Raw(fmt.Sprintf(
|
||||
"INSERT INTO %v %v%v%v",
|
||||
quotedTableName,
|
||||
scope.Dialect().DefaultValueStr(),
|
||||
addExtraSpaceIfExist(extraOption),
|
||||
addExtraSpaceIfExist(lastInsertIDReturningSuffix),
|
||||
))
|
||||
} else {
|
||||
scope.Raw(fmt.Sprintf(
|
||||
"INSERT INTO %v (%v) VALUES (%v)%v%v",
|
||||
scope.QuotedTableName(),
|
||||
strings.Join(columns, ","),
|
||||
strings.Join(placeholders, ","),
|
||||
addExtraSpaceIfExist(extraOption),
|
||||
addExtraSpaceIfExist(lastInsertIDReturningSuffix),
|
||||
))
|
||||
}
|
||||
|
||||
// execute create sql
|
||||
if lastInsertIDReturningSuffix == "" || primaryField == nil {
|
||||
if result, err := scope.SQLDB().Exec(scope.SQL, scope.SQLVars...); scope.Err(err) == nil {
|
||||
// set rows affected count
|
||||
scope.db.RowsAffected, _ = result.RowsAffected()
|
||||
|
||||
// set primary value to primary field
|
||||
if primaryField != nil && primaryField.IsBlank {
|
||||
if primaryValue, err := result.LastInsertId(); scope.Err(err) == nil {
|
||||
scope.Err(primaryField.Set(primaryValue))
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if primaryField.Field.CanAddr() {
|
||||
if err := scope.SQLDB().QueryRow(scope.SQL, scope.SQLVars...).Scan(primaryField.Field.Addr().Interface()); scope.Err(err) == nil {
|
||||
primaryField.IsBlank = false
|
||||
scope.db.RowsAffected = 1
|
||||
}
|
||||
} else {
|
||||
scope.Err(ErrUnaddressable)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// forceReloadAfterCreateCallback will reload columns that have a default value, and set them back on the current object
|
||||
func forceReloadAfterCreateCallback(scope *Scope) {
|
||||
if blankColumnsWithDefaultValue, ok := scope.InstanceGet("gorm:blank_columns_with_default_value"); ok {
|
||||
db := scope.DB().New().Table(scope.TableName()).Select(blankColumnsWithDefaultValue.([]string))
|
||||
for _, field := range scope.Fields() {
|
||||
if field.IsPrimaryKey && !field.IsBlank {
|
||||
db = db.Where(fmt.Sprintf("%v = ?", field.DBName), field.Field.Interface())
|
||||
}
|
||||
}
|
||||
db.Scan(scope.Value)
|
||||
}
|
||||
}
|
||||
|
||||
// afterCreateCallback will invoke `AfterCreate`, `AfterSave` method after creating
|
||||
func afterCreateCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("AfterCreate")
|
||||
}
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("AfterSave")
|
||||
}
|
||||
}
|
||||
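The create callbacks above invoke model hooks through `scope.CallMethod`. A sketch of what such hooks might look like on a hypothetical `User` model (imports omitted); returning an error from a hook aborts the operation and rolls back the surrounding transaction.

```go
// BeforeCreate runs between gorm:before_create and gorm:create
func (u *User) BeforeCreate(scope *gorm.Scope) error {
	if u.Name == "" {
		return errors.New("name is required")
	}
	return scope.SetColumn("Slug", strings.ToLower(u.Name))
}

// AfterCreate runs inside the same transaction, after the INSERT has been executed
func (u *User) AfterCreate(scope *gorm.Scope) error {
	log.Printf("created user %d", u.ID)
	return nil
}
```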
63
vendor/github.com/jinzhu/gorm/callback_delete.go
generated
vendored
Normal file
@@ -0,0 +1,63 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// Define callbacks for deleting
|
||||
func init() {
|
||||
DefaultCallback.Delete().Register("gorm:begin_transaction", beginTransactionCallback)
|
||||
DefaultCallback.Delete().Register("gorm:before_delete", beforeDeleteCallback)
|
||||
DefaultCallback.Delete().Register("gorm:delete", deleteCallback)
|
||||
DefaultCallback.Delete().Register("gorm:after_delete", afterDeleteCallback)
|
||||
DefaultCallback.Delete().Register("gorm:commit_or_rollback_transaction", commitOrRollbackTransactionCallback)
|
||||
}
|
||||
|
||||
// beforeDeleteCallback will invoke `BeforeDelete` method before deleting
|
||||
func beforeDeleteCallback(scope *Scope) {
|
||||
if scope.DB().HasBlockGlobalUpdate() && !scope.hasConditions() {
|
||||
scope.Err(errors.New("Missing WHERE clause while deleting"))
|
||||
return
|
||||
}
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("BeforeDelete")
|
||||
}
|
||||
}
|
||||
|
||||
// deleteCallback used to delete data from the database, or to set deleted_at to the current time (when using soft delete)
|
||||
func deleteCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
var extraOption string
|
||||
if str, ok := scope.Get("gorm:delete_option"); ok {
|
||||
extraOption = fmt.Sprint(str)
|
||||
}
|
||||
|
||||
deletedAtField, hasDeletedAtField := scope.FieldByName("DeletedAt")
|
||||
|
||||
if !scope.Search.Unscoped && hasDeletedAtField {
|
||||
scope.Raw(fmt.Sprintf(
|
||||
"UPDATE %v SET %v=%v%v%v",
|
||||
scope.QuotedTableName(),
|
||||
scope.Quote(deletedAtField.DBName),
|
||||
scope.AddToVars(NowFunc()),
|
||||
addExtraSpaceIfExist(scope.CombinedConditionSql()),
|
||||
addExtraSpaceIfExist(extraOption),
|
||||
)).Exec()
|
||||
} else {
|
||||
scope.Raw(fmt.Sprintf(
|
||||
"DELETE FROM %v%v%v",
|
||||
scope.QuotedTableName(),
|
||||
addExtraSpaceIfExist(scope.CombinedConditionSql()),
|
||||
addExtraSpaceIfExist(extraOption),
|
||||
)).Exec()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// afterDeleteCallback will invoke `AfterDelete` method after deleting
|
||||
func afterDeleteCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("AfterDelete")
|
||||
}
|
||||
}
|
||||
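A short sketch of the two delete paths handled by deleteCallback above, assuming a hypothetical `Order` model; the presence of a `DeletedAt` field switches gorm to the soft-delete UPDATE, and `Unscoped` forces the hard DELETE.

```go
type Order struct {
	ID        uint
	DeletedAt *time.Time // having this field enables soft delete for the model
}

func deleteOrder(db *gorm.DB, order *Order, hard bool) error {
	if hard {
		// DELETE FROM orders WHERE id = ?
		return db.Unscoped().Delete(order).Error
	}
	// UPDATE orders SET deleted_at = <now> WHERE id = ?
	return db.Delete(order).Error
}
```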
104
vendor/github.com/jinzhu/gorm/callback_query.go
generated
vendored
Normal file
@@ -0,0 +1,104 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// Define callbacks for querying
|
||||
func init() {
|
||||
DefaultCallback.Query().Register("gorm:query", queryCallback)
|
||||
DefaultCallback.Query().Register("gorm:preload", preloadCallback)
|
||||
DefaultCallback.Query().Register("gorm:after_query", afterQueryCallback)
|
||||
}
|
||||
|
||||
// queryCallback used to query data from database
|
||||
func queryCallback(scope *Scope) {
|
||||
if _, skip := scope.InstanceGet("gorm:skip_query_callback"); skip {
|
||||
return
|
||||
}
|
||||
|
||||
// we are only preloading relations; don't touch the base model
|
||||
if _, skip := scope.InstanceGet("gorm:only_preload"); skip {
|
||||
return
|
||||
}
|
||||
|
||||
defer scope.trace(NowFunc())
|
||||
|
||||
var (
|
||||
isSlice, isPtr bool
|
||||
resultType reflect.Type
|
||||
results = scope.IndirectValue()
|
||||
)
|
||||
|
||||
if orderBy, ok := scope.Get("gorm:order_by_primary_key"); ok {
|
||||
if primaryField := scope.PrimaryField(); primaryField != nil {
|
||||
scope.Search.Order(fmt.Sprintf("%v.%v %v", scope.QuotedTableName(), scope.Quote(primaryField.DBName), orderBy))
|
||||
}
|
||||
}
|
||||
|
||||
if value, ok := scope.Get("gorm:query_destination"); ok {
|
||||
results = indirect(reflect.ValueOf(value))
|
||||
}
|
||||
|
||||
if kind := results.Kind(); kind == reflect.Slice {
|
||||
isSlice = true
|
||||
resultType = results.Type().Elem()
|
||||
results.Set(reflect.MakeSlice(results.Type(), 0, 0))
|
||||
|
||||
if resultType.Kind() == reflect.Ptr {
|
||||
isPtr = true
|
||||
resultType = resultType.Elem()
|
||||
}
|
||||
} else if kind != reflect.Struct {
|
||||
scope.Err(errors.New("unsupported destination, should be slice or struct"))
|
||||
return
|
||||
}
|
||||
|
||||
scope.prepareQuerySQL()
|
||||
|
||||
if !scope.HasError() {
|
||||
scope.db.RowsAffected = 0
|
||||
if str, ok := scope.Get("gorm:query_option"); ok {
|
||||
scope.SQL += addExtraSpaceIfExist(fmt.Sprint(str))
|
||||
}
|
||||
|
||||
if rows, err := scope.SQLDB().Query(scope.SQL, scope.SQLVars...); scope.Err(err) == nil {
|
||||
defer rows.Close()
|
||||
|
||||
columns, _ := rows.Columns()
|
||||
for rows.Next() {
|
||||
scope.db.RowsAffected++
|
||||
|
||||
elem := results
|
||||
if isSlice {
|
||||
elem = reflect.New(resultType).Elem()
|
||||
}
|
||||
|
||||
scope.scan(rows, columns, scope.New(elem.Addr().Interface()).Fields())
|
||||
|
||||
if isSlice {
|
||||
if isPtr {
|
||||
results.Set(reflect.Append(results, elem.Addr()))
|
||||
} else {
|
||||
results.Set(reflect.Append(results, elem))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if err := rows.Err(); err != nil {
|
||||
scope.Err(err)
|
||||
} else if scope.db.RowsAffected == 0 && !isSlice {
|
||||
scope.Err(ErrRecordNotFound)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// afterQueryCallback will invoke `AfterFind` method after querying
|
||||
func afterQueryCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("AfterFind")
|
||||
}
|
||||
}
|
||||
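The query callback above sets `ErrRecordNotFound` for empty single-record results, and `gorm:after_query` fires the `AfterFind` hook; a sketch of both from the caller's side (model and field names are illustrative).

```go
// AfterFind is invoked by the gorm:after_query callback for every loaded record
func (u *User) AfterFind() error {
	u.Name = strings.TrimSpace(u.Name)
	return nil
}

func findByEmail(db *gorm.DB, email string) (*User, bool) {
	var user User
	if db.Where("email = ?", email).First(&user).RecordNotFound() {
		return nil, false // queryCallback reported ErrRecordNotFound
	}
	return &user, true
}
```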
404
vendor/github.com/jinzhu/gorm/callback_query_preload.go
generated
vendored
Normal file
@@ -0,0 +1,404 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// preloadCallback used to preload associations
|
||||
func preloadCallback(scope *Scope) {
|
||||
if _, skip := scope.InstanceGet("gorm:skip_query_callback"); skip {
|
||||
return
|
||||
}
|
||||
|
||||
if ap, ok := scope.Get("gorm:auto_preload"); ok {
|
||||
// If gorm:auto_preload IS NOT a bool then auto preload.
|
||||
// Else if it IS a bool, use the value
|
||||
if apb, ok := ap.(bool); !ok {
|
||||
autoPreload(scope)
|
||||
} else if apb {
|
||||
autoPreload(scope)
|
||||
}
|
||||
}
|
||||
|
||||
if scope.Search.preload == nil || scope.HasError() {
|
||||
return
|
||||
}
|
||||
|
||||
var (
|
||||
preloadedMap = map[string]bool{}
|
||||
fields = scope.Fields()
|
||||
)
|
||||
|
||||
for _, preload := range scope.Search.preload {
|
||||
var (
|
||||
preloadFields = strings.Split(preload.schema, ".")
|
||||
currentScope = scope
|
||||
currentFields = fields
|
||||
)
|
||||
|
||||
for idx, preloadField := range preloadFields {
|
||||
var currentPreloadConditions []interface{}
|
||||
|
||||
if currentScope == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// if not preloaded
|
||||
if preloadKey := strings.Join(preloadFields[:idx+1], "."); !preloadedMap[preloadKey] {
|
||||
|
||||
// assign search conditions to last preload
|
||||
if idx == len(preloadFields)-1 {
|
||||
currentPreloadConditions = preload.conditions
|
||||
}
|
||||
|
||||
for _, field := range currentFields {
|
||||
if field.Name != preloadField || field.Relationship == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
switch field.Relationship.Kind {
|
||||
case "has_one":
|
||||
currentScope.handleHasOnePreload(field, currentPreloadConditions)
|
||||
case "has_many":
|
||||
currentScope.handleHasManyPreload(field, currentPreloadConditions)
|
||||
case "belongs_to":
|
||||
currentScope.handleBelongsToPreload(field, currentPreloadConditions)
|
||||
case "many_to_many":
|
||||
currentScope.handleManyToManyPreload(field, currentPreloadConditions)
|
||||
default:
|
||||
scope.Err(errors.New("unsupported relation"))
|
||||
}
|
||||
|
||||
preloadedMap[preloadKey] = true
|
||||
break
|
||||
}
|
||||
|
||||
if !preloadedMap[preloadKey] {
|
||||
scope.Err(fmt.Errorf("can't preload field %s for %s", preloadField, currentScope.GetModelStruct().ModelType))
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// preload next level
|
||||
if idx < len(preloadFields)-1 {
|
||||
currentScope = currentScope.getColumnAsScope(preloadField)
|
||||
if currentScope != nil {
|
||||
currentFields = currentScope.Fields()
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func autoPreload(scope *Scope) {
|
||||
for _, field := range scope.Fields() {
|
||||
if field.Relationship == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
if val, ok := field.TagSettingsGet("PRELOAD"); ok {
|
||||
if preload, err := strconv.ParseBool(val); err != nil {
|
||||
scope.Err(errors.New("invalid preload option"))
|
||||
return
|
||||
} else if !preload {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
scope.Search.Preload(field.Name)
|
||||
}
|
||||
}
|
||||
|
||||
func (scope *Scope) generatePreloadDBWithConditions(conditions []interface{}) (*DB, []interface{}) {
|
||||
var (
|
||||
preloadDB = scope.NewDB()
|
||||
preloadConditions []interface{}
|
||||
)
|
||||
|
||||
for _, condition := range conditions {
|
||||
if scopes, ok := condition.(func(*DB) *DB); ok {
|
||||
preloadDB = scopes(preloadDB)
|
||||
} else {
|
||||
preloadConditions = append(preloadConditions, condition)
|
||||
}
|
||||
}
|
||||
|
||||
return preloadDB, preloadConditions
|
||||
}
|
||||
|
||||
// handleHasOnePreload used to preload has one associations
|
||||
func (scope *Scope) handleHasOnePreload(field *Field, conditions []interface{}) {
|
||||
relation := field.Relationship
|
||||
|
||||
// get relation's primary keys
|
||||
primaryKeys := scope.getColumnAsArray(relation.AssociationForeignFieldNames, scope.Value)
|
||||
if len(primaryKeys) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
// preload conditions
|
||||
preloadDB, preloadConditions := scope.generatePreloadDBWithConditions(conditions)
|
||||
|
||||
// find relations
|
||||
query := fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relation.ForeignDBNames), toQueryMarks(primaryKeys))
|
||||
values := toQueryValues(primaryKeys)
|
||||
if relation.PolymorphicType != "" {
|
||||
query += fmt.Sprintf(" AND %v = ?", scope.Quote(relation.PolymorphicDBName))
|
||||
values = append(values, relation.PolymorphicValue)
|
||||
}
|
||||
|
||||
results := makeSlice(field.Struct.Type)
|
||||
scope.Err(preloadDB.Where(query, values...).Find(results, preloadConditions...).Error)
|
||||
|
||||
// assign find results
|
||||
var (
|
||||
resultsValue = indirect(reflect.ValueOf(results))
|
||||
indirectScopeValue = scope.IndirectValue()
|
||||
)
|
||||
|
||||
if indirectScopeValue.Kind() == reflect.Slice {
|
||||
foreignValuesToResults := make(map[string]reflect.Value)
|
||||
for i := 0; i < resultsValue.Len(); i++ {
|
||||
result := resultsValue.Index(i)
|
||||
foreignValues := toString(getValueFromFields(result, relation.ForeignFieldNames))
|
||||
foreignValuesToResults[foreignValues] = result
|
||||
}
|
||||
for j := 0; j < indirectScopeValue.Len(); j++ {
|
||||
indirectValue := indirect(indirectScopeValue.Index(j))
|
||||
valueString := toString(getValueFromFields(indirectValue, relation.AssociationForeignFieldNames))
|
||||
if result, found := foreignValuesToResults[valueString]; found {
|
||||
indirectValue.FieldByName(field.Name).Set(result)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
for i := 0; i < resultsValue.Len(); i++ {
|
||||
result := resultsValue.Index(i)
|
||||
scope.Err(field.Set(result))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// handleHasManyPreload used to preload has many associations
|
||||
func (scope *Scope) handleHasManyPreload(field *Field, conditions []interface{}) {
|
||||
relation := field.Relationship
|
||||
|
||||
// get relation's primary keys
|
||||
primaryKeys := scope.getColumnAsArray(relation.AssociationForeignFieldNames, scope.Value)
|
||||
if len(primaryKeys) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
// preload conditions
|
||||
preloadDB, preloadConditions := scope.generatePreloadDBWithConditions(conditions)
|
||||
|
||||
// find relations
|
||||
query := fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relation.ForeignDBNames), toQueryMarks(primaryKeys))
|
||||
values := toQueryValues(primaryKeys)
|
||||
if relation.PolymorphicType != "" {
|
||||
query += fmt.Sprintf(" AND %v = ?", scope.Quote(relation.PolymorphicDBName))
|
||||
values = append(values, relation.PolymorphicValue)
|
||||
}
|
||||
|
||||
results := makeSlice(field.Struct.Type)
|
||||
scope.Err(preloadDB.Where(query, values...).Find(results, preloadConditions...).Error)
|
||||
|
||||
// assign find results
|
||||
var (
|
||||
resultsValue = indirect(reflect.ValueOf(results))
|
||||
indirectScopeValue = scope.IndirectValue()
|
||||
)
|
||||
|
||||
if indirectScopeValue.Kind() == reflect.Slice {
|
||||
preloadMap := make(map[string][]reflect.Value)
|
||||
for i := 0; i < resultsValue.Len(); i++ {
|
||||
result := resultsValue.Index(i)
|
||||
foreignValues := getValueFromFields(result, relation.ForeignFieldNames)
|
||||
preloadMap[toString(foreignValues)] = append(preloadMap[toString(foreignValues)], result)
|
||||
}
|
||||
|
||||
for j := 0; j < indirectScopeValue.Len(); j++ {
|
||||
object := indirect(indirectScopeValue.Index(j))
|
||||
objectRealValue := getValueFromFields(object, relation.AssociationForeignFieldNames)
|
||||
f := object.FieldByName(field.Name)
|
||||
if results, ok := preloadMap[toString(objectRealValue)]; ok {
|
||||
f.Set(reflect.Append(f, results...))
|
||||
} else {
|
||||
f.Set(reflect.MakeSlice(f.Type(), 0, 0))
|
||||
}
|
||||
}
|
||||
} else {
|
||||
scope.Err(field.Set(resultsValue))
|
||||
}
|
||||
}
|
||||
|
||||
// handleBelongsToPreload used to preload belongs to associations
|
||||
func (scope *Scope) handleBelongsToPreload(field *Field, conditions []interface{}) {
|
||||
relation := field.Relationship
|
||||
|
||||
// preload conditions
|
||||
preloadDB, preloadConditions := scope.generatePreloadDBWithConditions(conditions)
|
||||
|
||||
// get relation's primary keys
|
||||
primaryKeys := scope.getColumnAsArray(relation.ForeignFieldNames, scope.Value)
|
||||
if len(primaryKeys) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
// find relations
|
||||
results := makeSlice(field.Struct.Type)
|
||||
scope.Err(preloadDB.Where(fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, relation.AssociationForeignDBNames), toQueryMarks(primaryKeys)), toQueryValues(primaryKeys)...).Find(results, preloadConditions...).Error)
|
||||
|
||||
// assign find results
|
||||
var (
|
||||
resultsValue = indirect(reflect.ValueOf(results))
|
||||
indirectScopeValue = scope.IndirectValue()
|
||||
)
|
||||
|
||||
foreignFieldToObjects := make(map[string][]*reflect.Value)
|
||||
if indirectScopeValue.Kind() == reflect.Slice {
|
||||
for j := 0; j < indirectScopeValue.Len(); j++ {
|
||||
object := indirect(indirectScopeValue.Index(j))
|
||||
valueString := toString(getValueFromFields(object, relation.ForeignFieldNames))
|
||||
foreignFieldToObjects[valueString] = append(foreignFieldToObjects[valueString], &object)
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < resultsValue.Len(); i++ {
|
||||
result := resultsValue.Index(i)
|
||||
if indirectScopeValue.Kind() == reflect.Slice {
|
||||
valueString := toString(getValueFromFields(result, relation.AssociationForeignFieldNames))
|
||||
if objects, found := foreignFieldToObjects[valueString]; found {
|
||||
for _, object := range objects {
|
||||
object.FieldByName(field.Name).Set(result)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
scope.Err(field.Set(result))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// handleManyToManyPreload used to preload many to many associations
|
||||
func (scope *Scope) handleManyToManyPreload(field *Field, conditions []interface{}) {
|
||||
var (
|
||||
relation = field.Relationship
|
||||
joinTableHandler = relation.JoinTableHandler
|
||||
fieldType = field.Struct.Type.Elem()
|
||||
foreignKeyValue interface{}
|
||||
foreignKeyType = reflect.ValueOf(&foreignKeyValue).Type()
|
||||
linkHash = map[string][]reflect.Value{}
|
||||
isPtr bool
|
||||
)
|
||||
|
||||
if fieldType.Kind() == reflect.Ptr {
|
||||
isPtr = true
|
||||
fieldType = fieldType.Elem()
|
||||
}
|
||||
|
||||
var sourceKeys = []string{}
|
||||
for _, key := range joinTableHandler.SourceForeignKeys() {
|
||||
sourceKeys = append(sourceKeys, key.DBName)
|
||||
}
|
||||
|
||||
// preload conditions
|
||||
preloadDB, preloadConditions := scope.generatePreloadDBWithConditions(conditions)
|
||||
|
||||
// generate query with join table
|
||||
newScope := scope.New(reflect.New(fieldType).Interface())
|
||||
preloadDB = preloadDB.Table(newScope.TableName()).Model(newScope.Value)
|
||||
|
||||
if len(preloadDB.search.selects) == 0 {
|
||||
preloadDB = preloadDB.Select("*")
|
||||
}
|
||||
|
||||
preloadDB = joinTableHandler.JoinWith(joinTableHandler, preloadDB, scope.Value)
|
||||
|
||||
// preload inline conditions
|
||||
if len(preloadConditions) > 0 {
|
||||
preloadDB = preloadDB.Where(preloadConditions[0], preloadConditions[1:]...)
|
||||
}
|
||||
|
||||
rows, err := preloadDB.Rows()
|
||||
|
||||
if scope.Err(err) != nil {
|
||||
return
|
||||
}
|
||||
defer rows.Close()
|
||||
|
||||
columns, _ := rows.Columns()
|
||||
for rows.Next() {
|
||||
var (
|
||||
elem = reflect.New(fieldType).Elem()
|
||||
fields = scope.New(elem.Addr().Interface()).Fields()
|
||||
)
|
||||
|
||||
// register foreign keys in join tables
|
||||
var joinTableFields []*Field
|
||||
for _, sourceKey := range sourceKeys {
|
||||
joinTableFields = append(joinTableFields, &Field{StructField: &StructField{DBName: sourceKey, IsNormal: true}, Field: reflect.New(foreignKeyType).Elem()})
|
||||
}
|
||||
|
||||
scope.scan(rows, columns, append(fields, joinTableFields...))
|
||||
|
||||
scope.New(elem.Addr().Interface()).
|
||||
InstanceSet("gorm:skip_query_callback", true).
|
||||
callCallbacks(scope.db.parent.callbacks.queries)
|
||||
|
||||
var foreignKeys = make([]interface{}, len(sourceKeys))
|
||||
// generate hashed foreign keys for the join table
|
||||
for idx, joinTableField := range joinTableFields {
|
||||
if !joinTableField.Field.IsNil() {
|
||||
foreignKeys[idx] = joinTableField.Field.Elem().Interface()
|
||||
}
|
||||
}
|
||||
hashedSourceKeys := toString(foreignKeys)
|
||||
|
||||
if isPtr {
|
||||
linkHash[hashedSourceKeys] = append(linkHash[hashedSourceKeys], elem.Addr())
|
||||
} else {
|
||||
linkHash[hashedSourceKeys] = append(linkHash[hashedSourceKeys], elem)
|
||||
}
|
||||
}
|
||||
|
||||
if err := rows.Err(); err != nil {
|
||||
scope.Err(err)
|
||||
}
|
||||
|
||||
// assign find results
|
||||
var (
|
||||
indirectScopeValue = scope.IndirectValue()
|
||||
fieldsSourceMap = map[string][]reflect.Value{}
|
||||
foreignFieldNames = []string{}
|
||||
)
|
||||
|
||||
for _, dbName := range relation.ForeignFieldNames {
|
||||
if field, ok := scope.FieldByName(dbName); ok {
|
||||
foreignFieldNames = append(foreignFieldNames, field.Name)
|
||||
}
|
||||
}
|
||||
|
||||
if indirectScopeValue.Kind() == reflect.Slice {
|
||||
for j := 0; j < indirectScopeValue.Len(); j++ {
|
||||
object := indirect(indirectScopeValue.Index(j))
|
||||
key := toString(getValueFromFields(object, foreignFieldNames))
|
||||
fieldsSourceMap[key] = append(fieldsSourceMap[key], object.FieldByName(field.Name))
|
||||
}
|
||||
} else if indirectScopeValue.IsValid() {
|
||||
key := toString(getValueFromFields(indirectScopeValue, foreignFieldNames))
|
||||
fieldsSourceMap[key] = append(fieldsSourceMap[key], indirectScopeValue.FieldByName(field.Name))
|
||||
}
|
||||
for source, link := range linkHash {
|
||||
for i, field := range fieldsSourceMap[source] {
|
||||
// If not 0, this means Value is a pointer and we have already added preloaded models to it
|
||||
if fieldsSourceMap[source][i].Len() != 0 {
|
||||
continue
|
||||
}
|
||||
field.Set(reflect.Append(fieldsSourceMap[source][i], link...))
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
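A sketch of the user-facing calls that end up in the preload handlers above; `User`, `Orders` and `Items` are assumed model names.

```go
func loadUsers(db *gorm.DB) ([]User, error) {
	var users []User
	// nested preloads; the inline condition applies only to the last level ("Orders")
	err := db.Preload("Orders", "state = ?", "paid").
		Preload("Orders.Items").
		Find(&users).Error
	return users, err
}

// gorm:auto_preload makes preloadCallback walk every relation via autoPreload;
// individual fields can opt out with the struct tag `gorm:"PRELOAD:false"`.
func loadUsersAuto(db *gorm.DB) ([]User, error) {
	var users []User
	err := db.Set("gorm:auto_preload", true).Find(&users).Error
	return users, err
}
```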
30
vendor/github.com/jinzhu/gorm/callback_row_query.go
generated
vendored
Normal file
@@ -0,0 +1,30 @@
|
||||
package gorm
|
||||
|
||||
import "database/sql"
|
||||
|
||||
// Define callbacks for row query
|
||||
func init() {
|
||||
DefaultCallback.RowQuery().Register("gorm:row_query", rowQueryCallback)
|
||||
}
|
||||
|
||||
type RowQueryResult struct {
|
||||
Row *sql.Row
|
||||
}
|
||||
|
||||
type RowsQueryResult struct {
|
||||
Rows *sql.Rows
|
||||
Error error
|
||||
}
|
||||
|
||||
// rowQueryCallback used to query data from the database with Row, Rows
|
||||
func rowQueryCallback(scope *Scope) {
|
||||
if result, ok := scope.InstanceGet("row_query_result"); ok {
|
||||
scope.prepareQuerySQL()
|
||||
|
||||
if rowResult, ok := result.(*RowQueryResult); ok {
|
||||
rowResult.Row = scope.SQLDB().QueryRow(scope.SQL, scope.SQLVars...)
|
||||
} else if rowsResult, ok := result.(*RowsQueryResult); ok {
|
||||
rowsResult.Rows, rowsResult.Error = scope.SQLDB().Query(scope.SQL, scope.SQLVars...)
|
||||
}
|
||||
}
|
||||
}
|
||||
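The row-query callback above backs the public `Row()` and `Rows()` helpers; a brief sketch of a raw aggregate query built through gorm.

```go
func countByRole(db *gorm.DB) (map[string]int, error) {
	counts := map[string]int{}
	rows, err := db.Table("users").Select("role, count(*)").Group("role").Rows()
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	for rows.Next() {
		var role string
		var n int
		if err := rows.Scan(&role, &n); err != nil {
			return nil, err
		}
		counts[role] = n
	}
	return counts, rows.Err()
}
```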
170
vendor/github.com/jinzhu/gorm/callback_save.go
generated
vendored
Normal file
@@ -0,0 +1,170 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func beginTransactionCallback(scope *Scope) {
|
||||
scope.Begin()
|
||||
}
|
||||
|
||||
func commitOrRollbackTransactionCallback(scope *Scope) {
|
||||
scope.CommitOrRollback()
|
||||
}
|
||||
|
||||
func saveAssociationCheck(scope *Scope, field *Field) (autoUpdate bool, autoCreate bool, saveReference bool, r *Relationship) {
|
||||
checkTruth := func(value interface{}) bool {
|
||||
if v, ok := value.(bool); ok && !v {
|
||||
return false
|
||||
}
|
||||
|
||||
if v, ok := value.(string); ok {
|
||||
v = strings.ToLower(v)
|
||||
return v == "true"
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
if scope.changeableField(field) && !field.IsBlank && !field.IsIgnored {
|
||||
if r = field.Relationship; r != nil {
|
||||
autoUpdate, autoCreate, saveReference = true, true, true
|
||||
|
||||
if value, ok := scope.Get("gorm:save_associations"); ok {
|
||||
autoUpdate = checkTruth(value)
|
||||
autoCreate = autoUpdate
|
||||
saveReference = autoUpdate
|
||||
} else if value, ok := field.TagSettingsGet("SAVE_ASSOCIATIONS"); ok {
|
||||
autoUpdate = checkTruth(value)
|
||||
autoCreate = autoUpdate
|
||||
saveReference = autoUpdate
|
||||
}
|
||||
|
||||
if value, ok := scope.Get("gorm:association_autoupdate"); ok {
|
||||
autoUpdate = checkTruth(value)
|
||||
} else if value, ok := field.TagSettingsGet("ASSOCIATION_AUTOUPDATE"); ok {
|
||||
autoUpdate = checkTruth(value)
|
||||
}
|
||||
|
||||
if value, ok := scope.Get("gorm:association_autocreate"); ok {
|
||||
autoCreate = checkTruth(value)
|
||||
} else if value, ok := field.TagSettingsGet("ASSOCIATION_AUTOCREATE"); ok {
|
||||
autoCreate = checkTruth(value)
|
||||
}
|
||||
|
||||
if value, ok := scope.Get("gorm:association_save_reference"); ok {
|
||||
saveReference = checkTruth(value)
|
||||
} else if value, ok := field.TagSettingsGet("ASSOCIATION_SAVE_REFERENCE"); ok {
|
||||
saveReference = checkTruth(value)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func saveBeforeAssociationsCallback(scope *Scope) {
|
||||
for _, field := range scope.Fields() {
|
||||
autoUpdate, autoCreate, saveReference, relationship := saveAssociationCheck(scope, field)
|
||||
|
||||
if relationship != nil && relationship.Kind == "belongs_to" {
|
||||
fieldValue := field.Field.Addr().Interface()
|
||||
newScope := scope.New(fieldValue)
|
||||
|
||||
if newScope.PrimaryKeyZero() {
|
||||
if autoCreate {
|
||||
scope.Err(scope.NewDB().Save(fieldValue).Error)
|
||||
}
|
||||
} else if autoUpdate {
|
||||
scope.Err(scope.NewDB().Save(fieldValue).Error)
|
||||
}
|
||||
|
||||
if saveReference {
|
||||
if len(relationship.ForeignFieldNames) != 0 {
|
||||
// set value's foreign key
|
||||
for idx, fieldName := range relationship.ForeignFieldNames {
|
||||
associationForeignName := relationship.AssociationForeignDBNames[idx]
|
||||
if foreignField, ok := scope.New(fieldValue).FieldByName(associationForeignName); ok {
|
||||
scope.Err(scope.SetColumn(fieldName, foreignField.Field.Interface()))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func saveAfterAssociationsCallback(scope *Scope) {
|
||||
for _, field := range scope.Fields() {
|
||||
autoUpdate, autoCreate, saveReference, relationship := saveAssociationCheck(scope, field)
|
||||
|
||||
if relationship != nil && (relationship.Kind == "has_one" || relationship.Kind == "has_many" || relationship.Kind == "many_to_many") {
|
||||
value := field.Field
|
||||
|
||||
switch value.Kind() {
|
||||
case reflect.Slice:
|
||||
for i := 0; i < value.Len(); i++ {
|
||||
newDB := scope.NewDB()
|
||||
elem := value.Index(i).Addr().Interface()
|
||||
newScope := newDB.NewScope(elem)
|
||||
|
||||
if saveReference {
|
||||
if relationship.JoinTableHandler == nil && len(relationship.ForeignFieldNames) != 0 {
|
||||
for idx, fieldName := range relationship.ForeignFieldNames {
|
||||
associationForeignName := relationship.AssociationForeignDBNames[idx]
|
||||
if f, ok := scope.FieldByName(associationForeignName); ok {
|
||||
scope.Err(newScope.SetColumn(fieldName, f.Field.Interface()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if relationship.PolymorphicType != "" {
|
||||
scope.Err(newScope.SetColumn(relationship.PolymorphicType, relationship.PolymorphicValue))
|
||||
}
|
||||
}
|
||||
|
||||
if newScope.PrimaryKeyZero() {
|
||||
if autoCreate {
|
||||
scope.Err(newDB.Save(elem).Error)
|
||||
}
|
||||
} else if autoUpdate {
|
||||
scope.Err(newDB.Save(elem).Error)
|
||||
}
|
||||
|
||||
if !scope.New(newScope.Value).PrimaryKeyZero() && saveReference {
|
||||
if joinTableHandler := relationship.JoinTableHandler; joinTableHandler != nil {
|
||||
scope.Err(joinTableHandler.Add(joinTableHandler, newDB, scope.Value, newScope.Value))
|
||||
}
|
||||
}
|
||||
}
|
||||
default:
|
||||
elem := value.Addr().Interface()
|
||||
newScope := scope.New(elem)
|
||||
|
||||
if saveReference {
|
||||
if len(relationship.ForeignFieldNames) != 0 {
|
||||
for idx, fieldName := range relationship.ForeignFieldNames {
|
||||
associationForeignName := relationship.AssociationForeignDBNames[idx]
|
||||
if f, ok := scope.FieldByName(associationForeignName); ok {
|
||||
scope.Err(newScope.SetColumn(fieldName, f.Field.Interface()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if relationship.PolymorphicType != "" {
|
||||
scope.Err(newScope.SetColumn(relationship.PolymorphicType, relationship.PolymorphicValue))
|
||||
}
|
||||
}
|
||||
|
||||
if newScope.PrimaryKeyZero() {
|
||||
if autoCreate {
|
||||
scope.Err(scope.NewDB().Save(elem).Error)
|
||||
}
|
||||
} else if autoUpdate {
|
||||
scope.Err(scope.NewDB().Save(elem).Error)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
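The switches checked by saveAssociationCheck above can be set per query with `db.Set` or per field with struct tags; a sketch assuming hypothetical `User` / `Company` models.

```go
// per query: don't create or update associated rows, only maintain the reference columns
func saveUserOnly(db *gorm.DB, user *User) error {
	return db.Set("gorm:association_autoupdate", false).
		Set("gorm:association_autocreate", false).
		Save(user).Error
}

// per field: the same switches as tag settings, read via TagSettingsGet above
type User struct {
	gorm.Model
	CompanyID uint
	Company   Company `gorm:"association_autoupdate:false;association_autocreate:false"`
}
```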
121
vendor/github.com/jinzhu/gorm/callback_update.go
generated
vendored
Normal file
@@ -0,0 +1,121 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Define callbacks for updating
|
||||
func init() {
|
||||
DefaultCallback.Update().Register("gorm:assign_updating_attributes", assignUpdatingAttributesCallback)
|
||||
DefaultCallback.Update().Register("gorm:begin_transaction", beginTransactionCallback)
|
||||
DefaultCallback.Update().Register("gorm:before_update", beforeUpdateCallback)
|
||||
DefaultCallback.Update().Register("gorm:save_before_associations", saveBeforeAssociationsCallback)
|
||||
DefaultCallback.Update().Register("gorm:update_time_stamp", updateTimeStampForUpdateCallback)
|
||||
DefaultCallback.Update().Register("gorm:update", updateCallback)
|
||||
DefaultCallback.Update().Register("gorm:save_after_associations", saveAfterAssociationsCallback)
|
||||
DefaultCallback.Update().Register("gorm:after_update", afterUpdateCallback)
|
||||
DefaultCallback.Update().Register("gorm:commit_or_rollback_transaction", commitOrRollbackTransactionCallback)
|
||||
}
|
||||
|
||||
// assignUpdatingAttributesCallback assigns the updating attributes to the model
|
||||
func assignUpdatingAttributesCallback(scope *Scope) {
|
||||
if attrs, ok := scope.InstanceGet("gorm:update_interface"); ok {
|
||||
if updateMaps, hasUpdate := scope.updatedAttrsWithValues(attrs); hasUpdate {
|
||||
scope.InstanceSet("gorm:update_attrs", updateMaps)
|
||||
} else {
|
||||
scope.SkipLeft()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// beforeUpdateCallback will invoke `BeforeSave`, `BeforeUpdate` method before updating
|
||||
func beforeUpdateCallback(scope *Scope) {
|
||||
if scope.DB().HasBlockGlobalUpdate() && !scope.hasConditions() {
|
||||
scope.Err(errors.New("Missing WHERE clause while updating"))
|
||||
return
|
||||
}
|
||||
if _, ok := scope.Get("gorm:update_column"); !ok {
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("BeforeSave")
|
||||
}
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("BeforeUpdate")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// updateTimeStampForUpdateCallback will set `UpdatedAt` when updating
|
||||
func updateTimeStampForUpdateCallback(scope *Scope) {
|
||||
if _, ok := scope.Get("gorm:update_column"); !ok {
|
||||
scope.SetColumn("UpdatedAt", NowFunc())
|
||||
}
|
||||
}
|
||||
|
||||
// updateCallback is the callback used to update data in the database
|
||||
func updateCallback(scope *Scope) {
|
||||
if !scope.HasError() {
|
||||
var sqls []string
|
||||
|
||||
if updateAttrs, ok := scope.InstanceGet("gorm:update_attrs"); ok {
|
||||
// Sort the column names so that the generated SQL is the same every time.
|
||||
updateMap := updateAttrs.(map[string]interface{})
|
||||
var columns []string
|
||||
for c := range updateMap {
|
||||
columns = append(columns, c)
|
||||
}
|
||||
sort.Strings(columns)
|
||||
|
||||
for _, column := range columns {
|
||||
value := updateMap[column]
|
||||
sqls = append(sqls, fmt.Sprintf("%v = %v", scope.Quote(column), scope.AddToVars(value)))
|
||||
}
|
||||
} else {
|
||||
for _, field := range scope.Fields() {
|
||||
if scope.changeableField(field) {
|
||||
if !field.IsPrimaryKey && field.IsNormal {
|
||||
if !field.IsForeignKey || !field.IsBlank || !field.HasDefaultValue {
|
||||
sqls = append(sqls, fmt.Sprintf("%v = %v", scope.Quote(field.DBName), scope.AddToVars(field.Field.Interface())))
|
||||
}
|
||||
} else if relationship := field.Relationship; relationship != nil && relationship.Kind == "belongs_to" {
|
||||
for _, foreignKey := range relationship.ForeignDBNames {
|
||||
if foreignField, ok := scope.FieldByName(foreignKey); ok && !scope.changeableField(foreignField) {
|
||||
sqls = append(sqls,
|
||||
fmt.Sprintf("%v = %v", scope.Quote(foreignField.DBName), scope.AddToVars(foreignField.Field.Interface())))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var extraOption string
|
||||
if str, ok := scope.Get("gorm:update_option"); ok {
|
||||
extraOption = fmt.Sprint(str)
|
||||
}
|
||||
|
||||
if len(sqls) > 0 {
|
||||
scope.Raw(fmt.Sprintf(
|
||||
"UPDATE %v SET %v%v%v",
|
||||
scope.QuotedTableName(),
|
||||
strings.Join(sqls, ", "),
|
||||
addExtraSpaceIfExist(scope.CombinedConditionSql()),
|
||||
addExtraSpaceIfExist(extraOption),
|
||||
)).Exec()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// afterUpdateCallback will invoke `AfterUpdate`, `AfterSave` method after updating
|
||||
func afterUpdateCallback(scope *Scope) {
|
||||
if _, ok := scope.Get("gorm:update_column"); !ok {
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("AfterUpdate")
|
||||
}
|
||||
if !scope.HasError() {
|
||||
scope.CallMethod("AfterSave")
|
||||
}
|
||||
}
|
||||
}
|
||||
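The guard at the top of beforeUpdateCallback (and beforeDeleteCallback) is enabled per session with `BlockGlobalUpdate`; a sketch of how it plays out for callers (model and column names are illustrative).

```go
func deactivateStale(db *gorm.DB, cutoff time.Time) error {
	db = db.BlockGlobalUpdate(true)

	// without conditions this would fail with "Missing WHERE clause while updating":
	// db.Model(&User{}).Update("active", false)

	// an explicit condition satisfies scope.hasConditions()
	return db.Model(&User{}).
		Where("last_seen < ?", cutoff).
		Update("active", false).Error
}
```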
138
vendor/github.com/jinzhu/gorm/dialect.go
generated
vendored
Normal file
@@ -0,0 +1,138 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Dialect interface contains behaviors that differ across SQL databases
type Dialect interface {
// GetName gets the dialect's name
GetName() string

// SetDB sets the db for the dialect
SetDB(db SQLCommon)

// BindVar returns the placeholder for actual values in SQL statements; in many dbs it is "?", while Postgres uses $1, $2, ...
BindVar(i int) string
// Quote quotes a field name to avoid SQL parsing exceptions caused by using a reserved word as a field name
Quote(key string) string
// DataTypeOf returns the field's SQL data type
DataTypeOf(field *StructField) string

// HasIndex checks whether the index exists
HasIndex(tableName string, indexName string) bool
// HasForeignKey checks whether the foreign key exists
HasForeignKey(tableName string, foreignKeyName string) bool
// RemoveIndex removes the index
RemoveIndex(tableName string, indexName string) error
// HasTable checks whether the table exists
HasTable(tableName string) bool
// HasColumn checks whether the column exists
HasColumn(tableName string, columnName string) bool
// ModifyColumn modifies the column's type
ModifyColumn(tableName string, columnName string, typ string) error

// LimitAndOffsetSQL returns generated SQL with Limit and Offset; mssql is a special case
LimitAndOffsetSQL(limit, offset interface{}) string
// SelectFromDummyTable returns the clause for selecting literal values; for most dbs `SELECT values` just works, while mysql needs `SELECT value FROM DUAL`
SelectFromDummyTable() string
// LastInsertIDReturningSuffix most dbs support LastInsertId, but postgres needs to use `RETURNING`
LastInsertIDReturningSuffix(tableName, columnName string) string
// DefaultValueStr returns the SQL fragment used when inserting a record with no explicit columns (e.g. `DEFAULT VALUES`)
DefaultValueStr() string

// BuildKeyName returns a valid key name (foreign key, index key) for the given table, field and reference
BuildKeyName(kind, tableName string, fields ...string) string

// CurrentDatabase returns the current database name
CurrentDatabase() string
}
|
||||
|
||||
var dialectsMap = map[string]Dialect{}
|
||||
|
||||
func newDialect(name string, db SQLCommon) Dialect {
|
||||
if value, ok := dialectsMap[name]; ok {
|
||||
dialect := reflect.New(reflect.TypeOf(value).Elem()).Interface().(Dialect)
|
||||
dialect.SetDB(db)
|
||||
return dialect
|
||||
}
|
||||
|
||||
fmt.Printf("`%v` is not officially supported, running under compatibility mode.\n", name)
|
||||
commontDialect := &commonDialect{}
|
||||
commontDialect.SetDB(db)
|
||||
return commontDialect
|
||||
}
|
||||
|
||||
// RegisterDialect register new dialect
|
||||
func RegisterDialect(name string, dialect Dialect) {
|
||||
dialectsMap[name] = dialect
|
||||
}
|
||||
|
||||
// GetDialect gets the dialect for the specified dialect name
|
||||
func GetDialect(name string) (dialect Dialect, ok bool) {
|
||||
dialect, ok = dialectsMap[name]
|
||||
return
|
||||
}
|
||||
|
||||
// ParseFieldStructForDialect get field's sql data type
|
||||
var ParseFieldStructForDialect = func(field *StructField, dialect Dialect) (fieldValue reflect.Value, sqlType string, size int, additionalType string) {
|
||||
// Get redirected field type
|
||||
var (
|
||||
reflectType = field.Struct.Type
|
||||
dataType, _ = field.TagSettingsGet("TYPE")
|
||||
)
|
||||
|
||||
for reflectType.Kind() == reflect.Ptr {
|
||||
reflectType = reflectType.Elem()
|
||||
}
|
||||
|
||||
// Get redirected field value
|
||||
fieldValue = reflect.Indirect(reflect.New(reflectType))
|
||||
|
||||
if gormDataType, ok := fieldValue.Interface().(interface {
|
||||
GormDataType(Dialect) string
|
||||
}); ok {
|
||||
dataType = gormDataType.GormDataType(dialect)
|
||||
}
|
||||
|
||||
// Get scanner's real value
|
||||
if dataType == "" {
|
||||
var getScannerValue func(reflect.Value)
|
||||
getScannerValue = func(value reflect.Value) {
|
||||
fieldValue = value
|
||||
if _, isScanner := reflect.New(fieldValue.Type()).Interface().(sql.Scanner); isScanner && fieldValue.Kind() == reflect.Struct {
|
||||
getScannerValue(fieldValue.Field(0))
|
||||
}
|
||||
}
|
||||
getScannerValue(fieldValue)
|
||||
}
|
||||
|
||||
// Default Size
|
||||
if num, ok := field.TagSettingsGet("SIZE"); ok {
|
||||
size, _ = strconv.Atoi(num)
|
||||
} else {
|
||||
size = 255
|
||||
}
|
||||
|
||||
// Default type from tag setting
|
||||
notNull, _ := field.TagSettingsGet("NOT NULL")
|
||||
unique, _ := field.TagSettingsGet("UNIQUE")
|
||||
additionalType = notNull + " " + unique
|
||||
if value, ok := field.TagSettingsGet("DEFAULT"); ok {
|
||||
additionalType = additionalType + " DEFAULT " + value
|
||||
}
|
||||
|
||||
return fieldValue, dataType, size, strings.TrimSpace(additionalType)
|
||||
}
|
||||
|
||||
func currentDatabaseAndTable(dialect Dialect, tableName string) (string, string) {
|
||||
if strings.Contains(tableName, ".") {
|
||||
splitStrings := strings.SplitN(tableName, ".", 2)
|
||||
return splitStrings[0], splitStrings[1]
|
||||
}
|
||||
return dialect.CurrentDatabase(), tableName
|
||||
}
|
||||
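The registry above is how the concrete dialects in the files that follow hook themselves in. A hedged, in-package sketch of a custom dialect (the `mydb` name and the overrides are illustrative only):

```go
package gorm

// myDialect reuses commonDialect and overrides only what differs; the bundled
// mysql, postgres and sqlite3 dialects below follow the same pattern.
type myDialect struct {
	commonDialect
}

func (myDialect) GetName() string { return "mydb" }

// Quote uses backticks instead of double quotes, purely as an example override.
func (myDialect) Quote(key string) string { return "`" + key + "`" }

func init() {
	// newDialect will now find "mydb"; unknown names still fall back to commonDialect.
	RegisterDialect("mydb", &myDialect{})
}
```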
176
vendor/github.com/jinzhu/gorm/dialect_common.go
generated
vendored
Normal file
@@ -0,0 +1,176 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// DefaultForeignKeyNamer contains the default foreign key name generator method
|
||||
type DefaultForeignKeyNamer struct {
|
||||
}
|
||||
|
||||
type commonDialect struct {
|
||||
db SQLCommon
|
||||
DefaultForeignKeyNamer
|
||||
}
|
||||
|
||||
func init() {
|
||||
RegisterDialect("common", &commonDialect{})
|
||||
}
|
||||
|
||||
func (commonDialect) GetName() string {
|
||||
return "common"
|
||||
}
|
||||
|
||||
func (s *commonDialect) SetDB(db SQLCommon) {
|
||||
s.db = db
|
||||
}
|
||||
|
||||
func (commonDialect) BindVar(i int) string {
|
||||
return "$$$" // ?
|
||||
}
|
||||
|
||||
func (commonDialect) Quote(key string) string {
|
||||
return fmt.Sprintf(`"%s"`, key)
|
||||
}
|
||||
|
||||
func (s *commonDialect) fieldCanAutoIncrement(field *StructField) bool {
|
||||
if value, ok := field.TagSettingsGet("AUTO_INCREMENT"); ok {
|
||||
return strings.ToLower(value) != "false"
|
||||
}
|
||||
return field.IsPrimaryKey
|
||||
}
|
||||
|
||||
func (s *commonDialect) DataTypeOf(field *StructField) string {
|
||||
var dataValue, sqlType, size, additionalType = ParseFieldStructForDialect(field, s)
|
||||
|
||||
if sqlType == "" {
|
||||
switch dataValue.Kind() {
|
||||
case reflect.Bool:
|
||||
sqlType = "BOOLEAN"
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uintptr:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
sqlType = "INTEGER AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "INTEGER"
|
||||
}
|
||||
case reflect.Int64, reflect.Uint64:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
sqlType = "BIGINT AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "BIGINT"
|
||||
}
|
||||
case reflect.Float32, reflect.Float64:
|
||||
sqlType = "FLOAT"
|
||||
case reflect.String:
|
||||
if size > 0 && size < 65532 {
|
||||
sqlType = fmt.Sprintf("VARCHAR(%d)", size)
|
||||
} else {
|
||||
sqlType = "VARCHAR(65532)"
|
||||
}
|
||||
case reflect.Struct:
|
||||
if _, ok := dataValue.Interface().(time.Time); ok {
|
||||
sqlType = "TIMESTAMP"
|
||||
}
|
||||
default:
|
||||
if _, ok := dataValue.Interface().([]byte); ok {
|
||||
if size > 0 && size < 65532 {
|
||||
sqlType = fmt.Sprintf("BINARY(%d)", size)
|
||||
} else {
|
||||
sqlType = "BINARY(65532)"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if sqlType == "" {
|
||||
panic(fmt.Sprintf("invalid sql type %s (%s) for commonDialect", dataValue.Type().Name(), dataValue.Kind().String()))
|
||||
}
|
||||
|
||||
if strings.TrimSpace(additionalType) == "" {
|
||||
return sqlType
|
||||
}
|
||||
return fmt.Sprintf("%v %v", sqlType, additionalType)
|
||||
}
|
||||
|
||||
func (s commonDialect) HasIndex(tableName string, indexName string) bool {
|
||||
var count int
|
||||
currentDatabase, tableName := currentDatabaseAndTable(&s, tableName)
|
||||
s.db.QueryRow("SELECT count(*) FROM INFORMATION_SCHEMA.STATISTICS WHERE table_schema = ? AND table_name = ? AND index_name = ?", currentDatabase, tableName, indexName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s commonDialect) RemoveIndex(tableName string, indexName string) error {
|
||||
_, err := s.db.Exec(fmt.Sprintf("DROP INDEX %v", indexName))
|
||||
return err
|
||||
}
|
||||
|
||||
func (s commonDialect) HasForeignKey(tableName string, foreignKeyName string) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func (s commonDialect) HasTable(tableName string) bool {
|
||||
var count int
|
||||
currentDatabase, tableName := currentDatabaseAndTable(&s, tableName)
|
||||
s.db.QueryRow("SELECT count(*) FROM INFORMATION_SCHEMA.TABLES WHERE table_schema = ? AND table_name = ?", currentDatabase, tableName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s commonDialect) HasColumn(tableName string, columnName string) bool {
|
||||
var count int
|
||||
currentDatabase, tableName := currentDatabaseAndTable(&s, tableName)
|
||||
s.db.QueryRow("SELECT count(*) FROM INFORMATION_SCHEMA.COLUMNS WHERE table_schema = ? AND table_name = ? AND column_name = ?", currentDatabase, tableName, columnName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s commonDialect) ModifyColumn(tableName string, columnName string, typ string) error {
|
||||
_, err := s.db.Exec(fmt.Sprintf("ALTER TABLE %v ALTER COLUMN %v TYPE %v", tableName, columnName, typ))
|
||||
return err
|
||||
}
|
||||
|
||||
func (s commonDialect) CurrentDatabase() (name string) {
|
||||
s.db.QueryRow("SELECT DATABASE()").Scan(&name)
|
||||
return
|
||||
}
|
||||
|
||||
func (commonDialect) LimitAndOffsetSQL(limit, offset interface{}) (sql string) {
|
||||
if limit != nil {
|
||||
if parsedLimit, err := strconv.ParseInt(fmt.Sprint(limit), 0, 0); err == nil && parsedLimit >= 0 {
|
||||
sql += fmt.Sprintf(" LIMIT %d", parsedLimit)
|
||||
}
|
||||
}
|
||||
if offset != nil {
|
||||
if parsedOffset, err := strconv.ParseInt(fmt.Sprint(offset), 0, 0); err == nil && parsedOffset >= 0 {
|
||||
sql += fmt.Sprintf(" OFFSET %d", parsedOffset)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (commonDialect) SelectFromDummyTable() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (commonDialect) LastInsertIDReturningSuffix(tableName, columnName string) string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (commonDialect) DefaultValueStr() string {
|
||||
return "DEFAULT VALUES"
|
||||
}
|
||||
|
||||
// BuildKeyName returns a valid key name (foreign key, index key) for the given table, field and reference
|
||||
func (DefaultForeignKeyNamer) BuildKeyName(kind, tableName string, fields ...string) string {
|
||||
keyName := fmt.Sprintf("%s_%s_%s", kind, tableName, strings.Join(fields, "_"))
|
||||
keyName = regexp.MustCompile("[^a-zA-Z0-9]+").ReplaceAllString(keyName, "_")
|
||||
return keyName
|
||||
}
|
||||
|
||||
// IsByteArrayOrSlice returns true of the reflected value is an array or slice
|
||||
func IsByteArrayOrSlice(value reflect.Value) bool {
|
||||
return (value.Kind() == reflect.Array || value.Kind() == reflect.Slice) && value.Type().Elem() == reflect.TypeOf(uint8(0))
|
||||
}
|
||||
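To make `ParseFieldStructForDialect` and `commonDialect.DataTypeOf` concrete, a hedged example model; the field names and tags are illustrative and the comments show the column type the common dialect is expected to emit:

```go
package gorm

import "time"

// exampleProduct is illustrative only, not part of gorm.
type exampleProduct struct {
	ID        int64     `gorm:"primary_key"` // BIGINT AUTO_INCREMENT (primary keys may auto-increment)
	Code      string    `gorm:"size:64"`     // VARCHAR(64)
	Body      string    `gorm:"size:100000"` // VARCHAR(65532)  (sizes >= 65532 fall back)
	Note      string    // VARCHAR(255)    (default SIZE is 255)
	InStock   bool      // BOOLEAN
	CreatedAt time.Time // TIMESTAMP
	Price     float64   `gorm:"not null"` // FLOAT NOT NULL  (additionalType comes from the tag)
}
```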
191
vendor/github.com/jinzhu/gorm/dialect_mysql.go
generated
vendored
Normal file
@@ -0,0 +1,191 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
type mysql struct {
|
||||
commonDialect
|
||||
}
|
||||
|
||||
func init() {
|
||||
RegisterDialect("mysql", &mysql{})
|
||||
}
|
||||
|
||||
func (mysql) GetName() string {
|
||||
return "mysql"
|
||||
}
|
||||
|
||||
func (mysql) Quote(key string) string {
|
||||
return fmt.Sprintf("`%s`", key)
|
||||
}
|
||||
|
||||
// Get Data Type for MySQL Dialect
|
||||
func (s *mysql) DataTypeOf(field *StructField) string {
|
||||
var dataValue, sqlType, size, additionalType = ParseFieldStructForDialect(field, s)
|
||||
|
||||
// MySQL allows only one auto increment column per table, and it must
|
||||
// be a KEY column.
|
||||
if _, ok := field.TagSettingsGet("AUTO_INCREMENT"); ok {
|
||||
if _, ok = field.TagSettingsGet("INDEX"); !ok && !field.IsPrimaryKey {
|
||||
field.TagSettingsDelete("AUTO_INCREMENT")
|
||||
}
|
||||
}
|
||||
|
||||
if sqlType == "" {
|
||||
switch dataValue.Kind() {
|
||||
case reflect.Bool:
|
||||
sqlType = "boolean"
|
||||
case reflect.Int8:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "tinyint AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "tinyint"
|
||||
}
|
||||
case reflect.Int, reflect.Int16, reflect.Int32:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "int AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "int"
|
||||
}
|
||||
case reflect.Uint8:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "tinyint unsigned AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "tinyint unsigned"
|
||||
}
|
||||
case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uintptr:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "int unsigned AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "int unsigned"
|
||||
}
|
||||
case reflect.Int64:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "bigint AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "bigint"
|
||||
}
|
||||
case reflect.Uint64:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "bigint unsigned AUTO_INCREMENT"
|
||||
} else {
|
||||
sqlType = "bigint unsigned"
|
||||
}
|
||||
case reflect.Float32, reflect.Float64:
|
||||
sqlType = "double"
|
||||
case reflect.String:
|
||||
if size > 0 && size < 65532 {
|
||||
sqlType = fmt.Sprintf("varchar(%d)", size)
|
||||
} else {
|
||||
sqlType = "longtext"
|
||||
}
|
||||
case reflect.Struct:
|
||||
if _, ok := dataValue.Interface().(time.Time); ok {
|
||||
precision := ""
|
||||
if p, ok := field.TagSettingsGet("PRECISION"); ok {
|
||||
precision = fmt.Sprintf("(%s)", p)
|
||||
}
|
||||
|
||||
if _, ok := field.TagSettingsGet("NOT NULL"); ok {
|
||||
sqlType = fmt.Sprintf("timestamp%v", precision)
|
||||
} else {
|
||||
sqlType = fmt.Sprintf("timestamp%v NULL", precision)
|
||||
}
|
||||
}
|
||||
default:
|
||||
if IsByteArrayOrSlice(dataValue) {
|
||||
if size > 0 && size < 65532 {
|
||||
sqlType = fmt.Sprintf("varbinary(%d)", size)
|
||||
} else {
|
||||
sqlType = "longblob"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if sqlType == "" {
|
||||
panic(fmt.Sprintf("invalid sql type %s (%s) for mysql", dataValue.Type().Name(), dataValue.Kind().String()))
|
||||
}
|
||||
|
||||
if strings.TrimSpace(additionalType) == "" {
|
||||
return sqlType
|
||||
}
|
||||
return fmt.Sprintf("%v %v", sqlType, additionalType)
|
||||
}
|
||||
|
||||
func (s mysql) RemoveIndex(tableName string, indexName string) error {
|
||||
_, err := s.db.Exec(fmt.Sprintf("DROP INDEX %v ON %v", indexName, s.Quote(tableName)))
|
||||
return err
|
||||
}
|
||||
|
||||
func (s mysql) ModifyColumn(tableName string, columnName string, typ string) error {
|
||||
_, err := s.db.Exec(fmt.Sprintf("ALTER TABLE %v MODIFY COLUMN %v %v", tableName, columnName, typ))
|
||||
return err
|
||||
}
|
||||
|
||||
func (s mysql) LimitAndOffsetSQL(limit, offset interface{}) (sql string) {
|
||||
if limit != nil {
|
||||
if parsedLimit, err := strconv.ParseInt(fmt.Sprint(limit), 0, 0); err == nil && parsedLimit >= 0 {
|
||||
sql += fmt.Sprintf(" LIMIT %d", parsedLimit)
|
||||
|
||||
if offset != nil {
|
||||
if parsedOffset, err := strconv.ParseInt(fmt.Sprint(offset), 0, 0); err == nil && parsedOffset >= 0 {
|
||||
sql += fmt.Sprintf(" OFFSET %d", parsedOffset)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (s mysql) HasForeignKey(tableName string, foreignKeyName string) bool {
|
||||
var count int
|
||||
currentDatabase, tableName := currentDatabaseAndTable(&s, tableName)
|
||||
s.db.QueryRow("SELECT count(*) FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_SCHEMA=? AND TABLE_NAME=? AND CONSTRAINT_NAME=? AND CONSTRAINT_TYPE='FOREIGN KEY'", currentDatabase, tableName, foreignKeyName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s mysql) CurrentDatabase() (name string) {
|
||||
s.db.QueryRow("SELECT DATABASE()").Scan(&name)
|
||||
return
|
||||
}
|
||||
|
||||
func (mysql) SelectFromDummyTable() string {
|
||||
return "FROM DUAL"
|
||||
}
|
||||
|
||||
func (s mysql) BuildKeyName(kind, tableName string, fields ...string) string {
|
||||
keyName := s.commonDialect.BuildKeyName(kind, tableName, fields...)
|
||||
if utf8.RuneCountInString(keyName) <= 64 {
|
||||
return keyName
|
||||
}
|
||||
h := sha1.New()
|
||||
h.Write([]byte(keyName))
|
||||
bs := h.Sum(nil)
|
||||
|
||||
// sha1 is 40 characters, keep first 24 characters of destination
|
||||
destRunes := []rune(regexp.MustCompile("[^a-zA-Z0-9]+").ReplaceAllString(fields[0], "_"))
|
||||
if len(destRunes) > 24 {
|
||||
destRunes = destRunes[:24]
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%s%x", string(destRunes), bs)
|
||||
}
|
||||
|
||||
func (mysql) DefaultValueStr() string {
|
||||
return "VALUES()"
|
||||
}
|
||||
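`BuildKeyName` is overridden here because MySQL caps identifier length at 64 characters. An in-package sketch of the two paths (the table and column names are made up):

```go
package gorm

import "fmt"

func exampleMySQLKeyNames() {
	d := mysql{}

	// Short enough: the commonDialect namer's result is returned unchanged.
	fmt.Println(d.BuildKeyName("idx", "users", "email")) // idx_users_email

	// Longer than 64 runes: up to 24 sanitized runes of the first field
	// plus the 40-character hex SHA-1 of the full key name.
	long := d.BuildKeyName("fk", "a_very_long_table_name_that_keeps_going_and_going", "an_equally_long_column_name")
	fmt.Println(len(long)) // 64
}
```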
143
vendor/github.com/jinzhu/gorm/dialect_postgres.go
generated
vendored
Normal file
@@ -0,0 +1,143 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
type postgres struct {
|
||||
commonDialect
|
||||
}
|
||||
|
||||
func init() {
|
||||
RegisterDialect("postgres", &postgres{})
|
||||
RegisterDialect("cloudsqlpostgres", &postgres{})
|
||||
}
|
||||
|
||||
func (postgres) GetName() string {
|
||||
return "postgres"
|
||||
}
|
||||
|
||||
func (postgres) BindVar(i int) string {
|
||||
return fmt.Sprintf("$%v", i)
|
||||
}
|
||||
|
||||
func (s *postgres) DataTypeOf(field *StructField) string {
|
||||
var dataValue, sqlType, size, additionalType = ParseFieldStructForDialect(field, s)
|
||||
|
||||
if sqlType == "" {
|
||||
switch dataValue.Kind() {
|
||||
case reflect.Bool:
|
||||
sqlType = "boolean"
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uintptr:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "serial"
|
||||
} else {
|
||||
sqlType = "integer"
|
||||
}
|
||||
case reflect.Int64, reflect.Uint32, reflect.Uint64:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "bigserial"
|
||||
} else {
|
||||
sqlType = "bigint"
|
||||
}
|
||||
case reflect.Float32, reflect.Float64:
|
||||
sqlType = "numeric"
|
||||
case reflect.String:
|
||||
if _, ok := field.TagSettingsGet("SIZE"); !ok {
|
||||
size = 0 // if SIZE hasn't been set, use `text` as the default type, as there is no performance difference
|
||||
}
|
||||
|
||||
if size > 0 && size < 65532 {
|
||||
sqlType = fmt.Sprintf("varchar(%d)", size)
|
||||
} else {
|
||||
sqlType = "text"
|
||||
}
|
||||
case reflect.Struct:
|
||||
if _, ok := dataValue.Interface().(time.Time); ok {
|
||||
sqlType = "timestamp with time zone"
|
||||
}
|
||||
case reflect.Map:
|
||||
if dataValue.Type().Name() == "Hstore" {
|
||||
sqlType = "hstore"
|
||||
}
|
||||
default:
|
||||
if IsByteArrayOrSlice(dataValue) {
|
||||
sqlType = "bytea"
|
||||
|
||||
if isUUID(dataValue) {
|
||||
sqlType = "uuid"
|
||||
}
|
||||
|
||||
if isJSON(dataValue) {
|
||||
sqlType = "jsonb"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if sqlType == "" {
|
||||
panic(fmt.Sprintf("invalid sql type %s (%s) for postgres", dataValue.Type().Name(), dataValue.Kind().String()))
|
||||
}
|
||||
|
||||
if strings.TrimSpace(additionalType) == "" {
|
||||
return sqlType
|
||||
}
|
||||
return fmt.Sprintf("%v %v", sqlType, additionalType)
|
||||
}
|
||||
|
||||
func (s postgres) HasIndex(tableName string, indexName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow("SELECT count(*) FROM pg_indexes WHERE tablename = $1 AND indexname = $2 AND schemaname = CURRENT_SCHEMA()", tableName, indexName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s postgres) HasForeignKey(tableName string, foreignKeyName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow("SELECT count(con.conname) FROM pg_constraint con WHERE $1::regclass::oid = con.conrelid AND con.conname = $2 AND con.contype='f'", tableName, foreignKeyName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s postgres) HasTable(tableName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow("SELECT count(*) FROM INFORMATION_SCHEMA.tables WHERE table_name = $1 AND table_type = 'BASE TABLE' AND table_schema = CURRENT_SCHEMA()", tableName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s postgres) HasColumn(tableName string, columnName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow("SELECT count(*) FROM INFORMATION_SCHEMA.columns WHERE table_name = $1 AND column_name = $2 AND table_schema = CURRENT_SCHEMA()", tableName, columnName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s postgres) CurrentDatabase() (name string) {
|
||||
s.db.QueryRow("SELECT CURRENT_DATABASE()").Scan(&name)
|
||||
return
|
||||
}
|
||||
|
||||
func (s postgres) LastInsertIDReturningSuffix(tableName, key string) string {
|
||||
return fmt.Sprintf("RETURNING %v.%v", tableName, key)
|
||||
}
|
||||
|
||||
func (postgres) SupportLastInsertID() bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func isUUID(value reflect.Value) bool {
|
||||
if value.Kind() != reflect.Array || value.Type().Len() != 16 {
|
||||
return false
|
||||
}
|
||||
typename := value.Type().Name()
|
||||
lower := strings.ToLower(typename)
|
||||
return "uuid" == lower || "guid" == lower
|
||||
}
|
||||
|
||||
func isJSON(value reflect.Value) bool {
|
||||
_, ok := value.Interface().(json.RawMessage)
|
||||
return ok
|
||||
}
|
||||
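The `isUUID`/`isJSON` checks are what split the byte-slice branch into `uuid`, `jsonb` and plain `bytea`. A hedged in-package sketch (the types are illustrative, not part of gorm):

```go
package gorm

import "encoding/json"

// UUID satisfies isUUID above: a 16-byte array whose type name is exactly
// "uuid" or "guid", ignoring case.
type UUID [16]byte

// exampleEvent shows the resulting postgres column types.
type exampleEvent struct {
	ID      UUID            // uuid   (16-byte array + matching type name)
	Payload json.RawMessage // jsonb  (byte slice asserted to json.RawMessage)
	Raw     []byte          // bytea  (plain byte slice)
	Note    string          // text   (no SIZE tag, so size is reset to 0)
}
```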
107
vendor/github.com/jinzhu/gorm/dialect_sqlite3.go
generated
vendored
Normal file
@@ -0,0 +1,107 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
type sqlite3 struct {
|
||||
commonDialect
|
||||
}
|
||||
|
||||
func init() {
|
||||
RegisterDialect("sqlite3", &sqlite3{})
|
||||
}
|
||||
|
||||
func (sqlite3) GetName() string {
|
||||
return "sqlite3"
|
||||
}
|
||||
|
||||
// Get Data Type for Sqlite Dialect
|
||||
func (s *sqlite3) DataTypeOf(field *StructField) string {
|
||||
var dataValue, sqlType, size, additionalType = ParseFieldStructForDialect(field, s)
|
||||
|
||||
if sqlType == "" {
|
||||
switch dataValue.Kind() {
|
||||
case reflect.Bool:
|
||||
sqlType = "bool"
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uintptr:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "integer primary key autoincrement"
|
||||
} else {
|
||||
sqlType = "integer"
|
||||
}
|
||||
case reflect.Int64, reflect.Uint64:
|
||||
if s.fieldCanAutoIncrement(field) {
|
||||
field.TagSettingsSet("AUTO_INCREMENT", "AUTO_INCREMENT")
|
||||
sqlType = "integer primary key autoincrement"
|
||||
} else {
|
||||
sqlType = "bigint"
|
||||
}
|
||||
case reflect.Float32, reflect.Float64:
|
||||
sqlType = "real"
|
||||
case reflect.String:
|
||||
if size > 0 && size < 65532 {
|
||||
sqlType = fmt.Sprintf("varchar(%d)", size)
|
||||
} else {
|
||||
sqlType = "text"
|
||||
}
|
||||
case reflect.Struct:
|
||||
if _, ok := dataValue.Interface().(time.Time); ok {
|
||||
sqlType = "datetime"
|
||||
}
|
||||
default:
|
||||
if IsByteArrayOrSlice(dataValue) {
|
||||
sqlType = "blob"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if sqlType == "" {
|
||||
panic(fmt.Sprintf("invalid sql type %s (%s) for sqlite3", dataValue.Type().Name(), dataValue.Kind().String()))
|
||||
}
|
||||
|
||||
if strings.TrimSpace(additionalType) == "" {
|
||||
return sqlType
|
||||
}
|
||||
return fmt.Sprintf("%v %v", sqlType, additionalType)
|
||||
}
|
||||
|
||||
func (s sqlite3) HasIndex(tableName string, indexName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow(fmt.Sprintf("SELECT count(*) FROM sqlite_master WHERE tbl_name = ? AND sql LIKE '%%INDEX %v ON%%'", indexName), tableName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s sqlite3) HasTable(tableName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow("SELECT count(*) FROM sqlite_master WHERE type='table' AND name=?", tableName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s sqlite3) HasColumn(tableName string, columnName string) bool {
|
||||
var count int
|
||||
s.db.QueryRow(fmt.Sprintf("SELECT count(*) FROM sqlite_master WHERE tbl_name = ? AND (sql LIKE '%%\"%v\" %%' OR sql LIKE '%%%v %%');\n", columnName, columnName), tableName).Scan(&count)
|
||||
return count > 0
|
||||
}
|
||||
|
||||
func (s sqlite3) CurrentDatabase() (name string) {
|
||||
var (
|
||||
ifaces = make([]interface{}, 3)
|
||||
pointers = make([]*string, 3)
|
||||
i int
|
||||
)
|
||||
for i = 0; i < 3; i++ {
|
||||
ifaces[i] = &pointers[i]
|
||||
}
|
||||
if err := s.db.QueryRow("PRAGMA database_list").Scan(ifaces...); err != nil {
|
||||
return
|
||||
}
|
||||
if pointers[1] != nil {
|
||||
name = *pointers[1]
|
||||
}
|
||||
return
|
||||
}
|
||||
3
vendor/github.com/jinzhu/gorm/dialects/sqlite/sqlite.go
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
package sqlite
|
||||
|
||||
import _ "github.com/mattn/go-sqlite3"
|
||||
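The package above exists only for its side effect: blank-importing it pulls in the `sqlite3` driver. A minimal usage sketch:

```go
package main

import (
	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/sqlite" // registers the sqlite3 driver
)

func main() {
	db, err := gorm.Open("sqlite3", "/tmp/gorm_example.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```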
30
vendor/github.com/jinzhu/gorm/docker-compose.yml
generated
vendored
Normal file
@@ -0,0 +1,30 @@
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
mysql:
|
||||
image: 'mysql:latest'
|
||||
ports:
|
||||
- 9910:3306
|
||||
environment:
|
||||
- MYSQL_DATABASE=gorm
|
||||
- MYSQL_USER=gorm
|
||||
- MYSQL_PASSWORD=gorm
|
||||
- MYSQL_RANDOM_ROOT_PASSWORD="yes"
|
||||
postgres:
|
||||
image: 'postgres:latest'
|
||||
ports:
|
||||
- 9920:5432
|
||||
environment:
|
||||
- POSTGRES_USER=gorm
|
||||
- POSTGRES_DB=gorm
|
||||
- POSTGRES_PASSWORD=gorm
|
||||
mssql:
|
||||
image: 'mcmoe/mssqldocker:latest'
|
||||
ports:
|
||||
- 9930:1433
|
||||
environment:
|
||||
- ACCEPT_EULA=Y
|
||||
- SA_PASSWORD=LoremIpsum86
|
||||
- MSSQL_DB=gorm
|
||||
- MSSQL_USER=gorm
|
||||
- MSSQL_PASSWORD=LoremIpsum86
|
||||
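The compose file exposes the test databases on local ports 9910 (MySQL), 9920 (Postgres) and 9930 (MSSQL), each with a `gorm` user and database. A hedged sketch of connection strings matching those services; the extra options are typical choices, not something the compose file mandates:

```go
// Package gormtestcfg is a hypothetical helper holding test DSNs.
package gormtestcfg

const (
	MySQLDSN    = "gorm:gorm@tcp(127.0.0.1:9910)/gorm?charset=utf8&parseTime=True&loc=Local"
	PostgresDSN = "host=127.0.0.1 port=9920 user=gorm dbname=gorm password=gorm sslmode=disable"
	MSSQLDSN    = "sqlserver://gorm:LoremIpsum86@127.0.0.1:9930?database=gorm"
)
```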
72
vendor/github.com/jinzhu/gorm/errors.go
generated
vendored
Normal file
@@ -0,0 +1,72 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrRecordNotFound record not found error, happens when no matched data is found when looking up with a struct; finding a slice won't return this error
|
||||
ErrRecordNotFound = errors.New("record not found")
|
||||
// ErrInvalidSQL invalid SQL error, happens when you passed invalid SQL
|
||||
ErrInvalidSQL = errors.New("invalid SQL")
|
||||
// ErrInvalidTransaction invalid transaction when you are trying to `Commit` or `Rollback`
|
||||
ErrInvalidTransaction = errors.New("no valid transaction")
|
||||
// ErrCantStartTransaction can't start transaction when you are trying to start one with `Begin`
|
||||
ErrCantStartTransaction = errors.New("can't start transaction")
|
||||
// ErrUnaddressable unaddressable value
|
||||
ErrUnaddressable = errors.New("using unaddressable value")
|
||||
)
|
||||
|
||||
// Errors contains all happened errors
|
||||
type Errors []error
|
||||
|
||||
// IsRecordNotFoundError returns whether the given error is, or contains, a record not found error
|
||||
func IsRecordNotFoundError(err error) bool {
|
||||
if errs, ok := err.(Errors); ok {
|
||||
for _, err := range errs {
|
||||
if err == ErrRecordNotFound {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return err == ErrRecordNotFound
|
||||
}
|
||||
|
||||
// GetErrors gets all happened errors
|
||||
func (errs Errors) GetErrors() []error {
|
||||
return errs
|
||||
}
|
||||
|
||||
// Add adds an error
|
||||
func (errs Errors) Add(newErrors ...error) Errors {
|
||||
for _, err := range newErrors {
|
||||
if err == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
if errors, ok := err.(Errors); ok {
|
||||
errs = errs.Add(errors...)
|
||||
} else {
|
||||
ok = true
|
||||
for _, e := range errs {
|
||||
if err == e {
|
||||
ok = false
|
||||
}
|
||||
}
|
||||
if ok {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
return errs
|
||||
}
|
||||
|
||||
// Error format happened errors
|
||||
func (errs Errors) Error() string {
|
||||
var errors = []string{}
|
||||
for _, e := range errs {
|
||||
errors = append(errors, e.Error())
|
||||
}
|
||||
return strings.Join(errors, "; ")
|
||||
}
|
||||
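A short sketch of how the helpers above are meant to be consumed; `Account` is a hypothetical model and `db` an open connection:

```go
package main

import "github.com/jinzhu/gorm"

type Account struct {
	gorm.Model
	Name string
}

func findAccount(db *gorm.DB, name string) (*Account, error) {
	var account Account
	if err := db.Where("name = ?", name).First(&account).Error; err != nil {
		if gorm.IsRecordNotFoundError(err) {
			return nil, nil // "not found" treated as a normal outcome
		}
		return nil, err // a real failure, possibly a gorm Errors aggregate
	}
	return &account, nil
}
```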
66
vendor/github.com/jinzhu/gorm/field.go
generated
vendored
Normal file
@@ -0,0 +1,66 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"database/sql/driver"
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// Field model field definition
|
||||
type Field struct {
|
||||
*StructField
|
||||
IsBlank bool
|
||||
Field reflect.Value
|
||||
}
|
||||
|
||||
// Set set a value to the field
|
||||
func (field *Field) Set(value interface{}) (err error) {
|
||||
if !field.Field.IsValid() {
|
||||
return errors.New("field value not valid")
|
||||
}
|
||||
|
||||
if !field.Field.CanAddr() {
|
||||
return ErrUnaddressable
|
||||
}
|
||||
|
||||
reflectValue, ok := value.(reflect.Value)
|
||||
if !ok {
|
||||
reflectValue = reflect.ValueOf(value)
|
||||
}
|
||||
|
||||
fieldValue := field.Field
|
||||
if reflectValue.IsValid() {
|
||||
if reflectValue.Type().ConvertibleTo(fieldValue.Type()) {
|
||||
fieldValue.Set(reflectValue.Convert(fieldValue.Type()))
|
||||
} else {
|
||||
if fieldValue.Kind() == reflect.Ptr {
|
||||
if fieldValue.IsNil() {
|
||||
fieldValue.Set(reflect.New(field.Struct.Type.Elem()))
|
||||
}
|
||||
fieldValue = fieldValue.Elem()
|
||||
}
|
||||
|
||||
if reflectValue.Type().ConvertibleTo(fieldValue.Type()) {
|
||||
fieldValue.Set(reflectValue.Convert(fieldValue.Type()))
|
||||
} else if scanner, ok := fieldValue.Addr().Interface().(sql.Scanner); ok {
|
||||
v := reflectValue.Interface()
|
||||
if valuer, ok := v.(driver.Valuer); ok {
|
||||
if v, err = valuer.Value(); err == nil {
|
||||
err = scanner.Scan(v)
|
||||
}
|
||||
} else {
|
||||
err = scanner.Scan(v)
|
||||
}
|
||||
} else {
|
||||
err = fmt.Errorf("could not convert argument of field %s from %s to %s", field.Name, reflectValue.Type(), fieldValue.Type())
|
||||
}
|
||||
}
|
||||
} else {
|
||||
field.Field.Set(reflect.Zero(field.Field.Type()))
|
||||
}
|
||||
|
||||
field.IsBlank = isBlank(field.Field)
|
||||
return err
|
||||
}
|
||||
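An in-package sketch of `Field.Set`'s conversion behaviour; the holder struct and values are illustrative only:

```go
package gorm

import "reflect"

func exampleFieldSet() {
	holder := struct{ Age int64 }{}

	f := Field{
		StructField: &StructField{Name: "Age"},
		Field:       reflect.ValueOf(&holder).Elem().Field(0), // addressable, as Set requires
	}

	_ = f.Set(42)          // int converts to int64, so the value is stored and IsBlank is refreshed
	_ = f.Set("forty-two") // not convertible and not a sql.Scanner target: returns a conversion error
	_ = f.Set(nil)         // invalid reflect value: the field is reset to its zero value
}
```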
20
vendor/github.com/jinzhu/gorm/interface.go
generated
vendored
Normal file
@@ -0,0 +1,20 @@
|
||||
package gorm
|
||||
|
||||
import "database/sql"
|
||||
|
||||
// SQLCommon is the minimal database connection functionality gorm requires. Implemented by *sql.DB.
|
||||
type SQLCommon interface {
|
||||
Exec(query string, args ...interface{}) (sql.Result, error)
|
||||
Prepare(query string) (*sql.Stmt, error)
|
||||
Query(query string, args ...interface{}) (*sql.Rows, error)
|
||||
QueryRow(query string, args ...interface{}) *sql.Row
|
||||
}
|
||||
|
||||
type sqlDb interface {
|
||||
Begin() (*sql.Tx, error)
|
||||
}
|
||||
|
||||
type sqlTx interface {
|
||||
Commit() error
|
||||
Rollback() error
|
||||
}
|
||||
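`SQLCommon` is deliberately the subset shared by `*sql.DB` and `*sql.Tx`, which is what lets an existing connection be handed straight to `gorm.Open` (see the `SQLCommon` case in `Open` in main.go below). A sketch:

```go
package main

import (
	"database/sql"

	"github.com/jinzhu/gorm"
	_ "github.com/lib/pq" // an assumed postgres driver
)

func openFromExisting(dsn string) (*gorm.DB, error) {
	sqlDB, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	// *sql.DB satisfies SQLCommon, so it can be passed instead of a DSN string.
	return gorm.Open("postgres", sqlDB)
}
```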
211
vendor/github.com/jinzhu/gorm/join_table_handler.go
generated
vendored
Normal file
@@ -0,0 +1,211 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// JoinTableHandlerInterface is an interface for how to handle many2many relations
|
||||
type JoinTableHandlerInterface interface {
|
||||
// initialize join table handler
|
||||
Setup(relationship *Relationship, tableName string, source reflect.Type, destination reflect.Type)
|
||||
// Table return join table's table name
|
||||
Table(db *DB) string
|
||||
// Add create relationship in join table for source and destination
|
||||
Add(handler JoinTableHandlerInterface, db *DB, source interface{}, destination interface{}) error
|
||||
// Delete delete relationship in join table for sources
|
||||
Delete(handler JoinTableHandlerInterface, db *DB, sources ...interface{}) error
|
||||
// JoinWith query with `Join` conditions
|
||||
JoinWith(handler JoinTableHandlerInterface, db *DB, source interface{}) *DB
|
||||
// SourceForeignKeys return source foreign keys
|
||||
SourceForeignKeys() []JoinTableForeignKey
|
||||
// DestinationForeignKeys return destination foreign keys
|
||||
DestinationForeignKeys() []JoinTableForeignKey
|
||||
}
|
||||
|
||||
// JoinTableForeignKey join table foreign key struct
|
||||
type JoinTableForeignKey struct {
|
||||
DBName string
|
||||
AssociationDBName string
|
||||
}
|
||||
|
||||
// JoinTableSource is a struct that contains model type and foreign keys
|
||||
type JoinTableSource struct {
|
||||
ModelType reflect.Type
|
||||
ForeignKeys []JoinTableForeignKey
|
||||
}
|
||||
|
||||
// JoinTableHandler default join table handler
|
||||
type JoinTableHandler struct {
|
||||
TableName string `sql:"-"`
|
||||
Source JoinTableSource `sql:"-"`
|
||||
Destination JoinTableSource `sql:"-"`
|
||||
}
|
||||
|
||||
// SourceForeignKeys return source foreign keys
|
||||
func (s *JoinTableHandler) SourceForeignKeys() []JoinTableForeignKey {
|
||||
return s.Source.ForeignKeys
|
||||
}
|
||||
|
||||
// DestinationForeignKeys return destination foreign keys
|
||||
func (s *JoinTableHandler) DestinationForeignKeys() []JoinTableForeignKey {
|
||||
return s.Destination.ForeignKeys
|
||||
}
|
||||
|
||||
// Setup initialize a default join table handler
|
||||
func (s *JoinTableHandler) Setup(relationship *Relationship, tableName string, source reflect.Type, destination reflect.Type) {
|
||||
s.TableName = tableName
|
||||
|
||||
s.Source = JoinTableSource{ModelType: source}
|
||||
s.Source.ForeignKeys = []JoinTableForeignKey{}
|
||||
for idx, dbName := range relationship.ForeignFieldNames {
|
||||
s.Source.ForeignKeys = append(s.Source.ForeignKeys, JoinTableForeignKey{
|
||||
DBName: relationship.ForeignDBNames[idx],
|
||||
AssociationDBName: dbName,
|
||||
})
|
||||
}
|
||||
|
||||
s.Destination = JoinTableSource{ModelType: destination}
|
||||
s.Destination.ForeignKeys = []JoinTableForeignKey{}
|
||||
for idx, dbName := range relationship.AssociationForeignFieldNames {
|
||||
s.Destination.ForeignKeys = append(s.Destination.ForeignKeys, JoinTableForeignKey{
|
||||
DBName: relationship.AssociationForeignDBNames[idx],
|
||||
AssociationDBName: dbName,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Table return join table's table name
|
||||
func (s JoinTableHandler) Table(db *DB) string {
|
||||
return DefaultTableNameHandler(db, s.TableName)
|
||||
}
|
||||
|
||||
func (s JoinTableHandler) updateConditionMap(conditionMap map[string]interface{}, db *DB, joinTableSources []JoinTableSource, sources ...interface{}) {
|
||||
for _, source := range sources {
|
||||
scope := db.NewScope(source)
|
||||
modelType := scope.GetModelStruct().ModelType
|
||||
|
||||
for _, joinTableSource := range joinTableSources {
|
||||
if joinTableSource.ModelType == modelType {
|
||||
for _, foreignKey := range joinTableSource.ForeignKeys {
|
||||
if field, ok := scope.FieldByName(foreignKey.AssociationDBName); ok {
|
||||
conditionMap[foreignKey.DBName] = field.Field.Interface()
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Add create relationship in join table for source and destination
|
||||
func (s JoinTableHandler) Add(handler JoinTableHandlerInterface, db *DB, source interface{}, destination interface{}) error {
|
||||
var (
|
||||
scope = db.NewScope("")
|
||||
conditionMap = map[string]interface{}{}
|
||||
)
|
||||
|
||||
// Update condition map for source
|
||||
s.updateConditionMap(conditionMap, db, []JoinTableSource{s.Source}, source)
|
||||
|
||||
// Update condition map for destination
|
||||
s.updateConditionMap(conditionMap, db, []JoinTableSource{s.Destination}, destination)
|
||||
|
||||
var assignColumns, binVars, conditions []string
|
||||
var values []interface{}
|
||||
for key, value := range conditionMap {
|
||||
assignColumns = append(assignColumns, scope.Quote(key))
|
||||
binVars = append(binVars, `?`)
|
||||
conditions = append(conditions, fmt.Sprintf("%v = ?", scope.Quote(key)))
|
||||
values = append(values, value)
|
||||
}
|
||||
|
||||
for _, value := range values {
|
||||
values = append(values, value)
|
||||
}
|
||||
|
||||
quotedTable := scope.Quote(handler.Table(db))
|
||||
sql := fmt.Sprintf(
|
||||
"INSERT INTO %v (%v) SELECT %v %v WHERE NOT EXISTS (SELECT * FROM %v WHERE %v)",
|
||||
quotedTable,
|
||||
strings.Join(assignColumns, ","),
|
||||
strings.Join(binVars, ","),
|
||||
scope.Dialect().SelectFromDummyTable(),
|
||||
quotedTable,
|
||||
strings.Join(conditions, " AND "),
|
||||
)
|
||||
|
||||
return db.Exec(sql, values...).Error
|
||||
}
|
||||
|
||||
// Delete delete relationship in join table for sources
|
||||
func (s JoinTableHandler) Delete(handler JoinTableHandlerInterface, db *DB, sources ...interface{}) error {
|
||||
var (
|
||||
scope = db.NewScope(nil)
|
||||
conditions []string
|
||||
values []interface{}
|
||||
conditionMap = map[string]interface{}{}
|
||||
)
|
||||
|
||||
s.updateConditionMap(conditionMap, db, []JoinTableSource{s.Source, s.Destination}, sources...)
|
||||
|
||||
for key, value := range conditionMap {
|
||||
conditions = append(conditions, fmt.Sprintf("%v = ?", scope.Quote(key)))
|
||||
values = append(values, value)
|
||||
}
|
||||
|
||||
return db.Table(handler.Table(db)).Where(strings.Join(conditions, " AND "), values...).Delete("").Error
|
||||
}
|
||||
|
||||
// JoinWith query with `Join` conditions
|
||||
func (s JoinTableHandler) JoinWith(handler JoinTableHandlerInterface, db *DB, source interface{}) *DB {
|
||||
var (
|
||||
scope = db.NewScope(source)
|
||||
tableName = handler.Table(db)
|
||||
quotedTableName = scope.Quote(tableName)
|
||||
joinConditions []string
|
||||
values []interface{}
|
||||
)
|
||||
|
||||
if s.Source.ModelType == scope.GetModelStruct().ModelType {
|
||||
destinationTableName := db.NewScope(reflect.New(s.Destination.ModelType).Interface()).QuotedTableName()
|
||||
for _, foreignKey := range s.Destination.ForeignKeys {
|
||||
joinConditions = append(joinConditions, fmt.Sprintf("%v.%v = %v.%v", quotedTableName, scope.Quote(foreignKey.DBName), destinationTableName, scope.Quote(foreignKey.AssociationDBName)))
|
||||
}
|
||||
|
||||
var foreignDBNames []string
|
||||
var foreignFieldNames []string
|
||||
|
||||
for _, foreignKey := range s.Source.ForeignKeys {
|
||||
foreignDBNames = append(foreignDBNames, foreignKey.DBName)
|
||||
if field, ok := scope.FieldByName(foreignKey.AssociationDBName); ok {
|
||||
foreignFieldNames = append(foreignFieldNames, field.Name)
|
||||
}
|
||||
}
|
||||
|
||||
foreignFieldValues := scope.getColumnAsArray(foreignFieldNames, scope.Value)
|
||||
|
||||
var condString string
|
||||
if len(foreignFieldValues) > 0 {
|
||||
var quotedForeignDBNames []string
|
||||
for _, dbName := range foreignDBNames {
|
||||
quotedForeignDBNames = append(quotedForeignDBNames, tableName+"."+dbName)
|
||||
}
|
||||
|
||||
condString = fmt.Sprintf("%v IN (%v)", toQueryCondition(scope, quotedForeignDBNames), toQueryMarks(foreignFieldValues))
|
||||
|
||||
keys := scope.getColumnAsArray(foreignFieldNames, scope.Value)
|
||||
values = append(values, toQueryValues(keys))
|
||||
} else {
|
||||
condString = fmt.Sprintf("1 <> 1")
|
||||
}
|
||||
|
||||
return db.Joins(fmt.Sprintf("INNER JOIN %v ON %v", quotedTableName, strings.Join(joinConditions, " AND "))).
|
||||
Where(condString, toQueryValues(foreignFieldValues)...)
|
||||
}
|
||||
|
||||
db.Error = errors.New("wrong source type for join table handler")
|
||||
return db
|
||||
}
|
||||
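The handler above backs `many2many` associations. A hedged sketch of the relation it manages; the models and join table name are illustrative:

```go
package main

import "github.com/jinzhu/gorm"

type Language struct {
	gorm.Model
	Name string
}

type User struct {
	gorm.Model
	Name      string
	Languages []Language `gorm:"many2many:user_languages"` // join table managed by JoinTableHandler
}

func addLanguage(db *gorm.DB, user *User, lang *Language) error {
	// Append/Delete on the association end up in JoinTableHandler.Add/Delete above.
	return db.Model(user).Association("Languages").Append(lang).Error
}
```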
119
vendor/github.com/jinzhu/gorm/logger.go
generated
vendored
Normal file
@@ -0,0 +1,119 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"database/sql/driver"
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"time"
|
||||
"unicode"
|
||||
)
|
||||
|
||||
var (
|
||||
defaultLogger = Logger{log.New(os.Stdout, "\r\n", 0)}
|
||||
sqlRegexp = regexp.MustCompile(`\?`)
|
||||
numericPlaceHolderRegexp = regexp.MustCompile(`\$\d+`)
|
||||
)
|
||||
|
||||
func isPrintable(s string) bool {
|
||||
for _, r := range s {
|
||||
if !unicode.IsPrint(r) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
var LogFormatter = func(values ...interface{}) (messages []interface{}) {
|
||||
if len(values) > 1 {
|
||||
var (
|
||||
sql string
|
||||
formattedValues []string
|
||||
level = values[0]
|
||||
currentTime = "\n\033[33m[" + NowFunc().Format("2006-01-02 15:04:05") + "]\033[0m"
|
||||
source = fmt.Sprintf("\033[35m(%v)\033[0m", values[1])
|
||||
)
|
||||
|
||||
messages = []interface{}{source, currentTime}
|
||||
|
||||
if level == "sql" {
|
||||
// duration
|
||||
messages = append(messages, fmt.Sprintf(" \033[36;1m[%.2fms]\033[0m ", float64(values[2].(time.Duration).Nanoseconds()/1e4)/100.0))
|
||||
// sql
|
||||
|
||||
for _, value := range values[4].([]interface{}) {
|
||||
indirectValue := reflect.Indirect(reflect.ValueOf(value))
|
||||
if indirectValue.IsValid() {
|
||||
value = indirectValue.Interface()
|
||||
if t, ok := value.(time.Time); ok {
|
||||
formattedValues = append(formattedValues, fmt.Sprintf("'%v'", t.Format("2006-01-02 15:04:05")))
|
||||
} else if b, ok := value.([]byte); ok {
|
||||
if str := string(b); isPrintable(str) {
|
||||
formattedValues = append(formattedValues, fmt.Sprintf("'%v'", str))
|
||||
} else {
|
||||
formattedValues = append(formattedValues, "'<binary>'")
|
||||
}
|
||||
} else if r, ok := value.(driver.Valuer); ok {
|
||||
if value, err := r.Value(); err == nil && value != nil {
|
||||
formattedValues = append(formattedValues, fmt.Sprintf("'%v'", value))
|
||||
} else {
|
||||
formattedValues = append(formattedValues, "NULL")
|
||||
}
|
||||
} else {
|
||||
formattedValues = append(formattedValues, fmt.Sprintf("'%v'", value))
|
||||
}
|
||||
} else {
|
||||
formattedValues = append(formattedValues, "NULL")
|
||||
}
|
||||
}
|
||||
|
||||
// differentiate between $n placeholders or else treat like ?
|
||||
if numericPlaceHolderRegexp.MatchString(values[3].(string)) {
|
||||
sql = values[3].(string)
|
||||
for index, value := range formattedValues {
|
||||
placeholder := fmt.Sprintf(`\$%d([^\d]|$)`, index+1)
|
||||
sql = regexp.MustCompile(placeholder).ReplaceAllString(sql, value+"$1")
|
||||
}
|
||||
} else {
|
||||
formattedValuesLength := len(formattedValues)
|
||||
for index, value := range sqlRegexp.Split(values[3].(string), -1) {
|
||||
sql += value
|
||||
if index < formattedValuesLength {
|
||||
sql += formattedValues[index]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
messages = append(messages, sql)
|
||||
messages = append(messages, fmt.Sprintf(" \n\033[36;31m[%v]\033[0m ", strconv.FormatInt(values[5].(int64), 10)+" rows affected or returned "))
|
||||
} else {
|
||||
messages = append(messages, "\033[31;1m")
|
||||
messages = append(messages, values[2:]...)
|
||||
messages = append(messages, "\033[0m")
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
type logger interface {
|
||||
Print(v ...interface{})
|
||||
}
|
||||
|
||||
// LogWriter log writer interface
|
||||
type LogWriter interface {
|
||||
Println(v ...interface{})
|
||||
}
|
||||
|
||||
// Logger default logger
|
||||
type Logger struct {
|
||||
LogWriter
|
||||
}
|
||||
|
||||
// Print format & print log
|
||||
func (logger Logger) Print(values ...interface{}) {
|
||||
logger.Println(LogFormatter(values...)...)
|
||||
}
|
||||
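Anything with a `Println` method satisfies `LogWriter`, so the standard `log.Logger` plugs straight in. A sketch of swapping the writer and enabling detailed logging (`SetLogger` and `LogMode` are defined in main.go below):

```go
package main

import (
	"log"
	"os"

	"github.com/jinzhu/gorm"
)

func configureLogging(db *gorm.DB) {
	// Route full SQL, with bind values formatted by LogFormatter, to the logger.
	db.LogMode(true)

	// *log.Logger provides Println, so it satisfies LogWriter.
	db.SetLogger(gorm.Logger{LogWriter: log.New(os.Stderr, "\r\n", 0)})
}
```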
792
vendor/github.com/jinzhu/gorm/main.go
generated
vendored
Normal file
@@ -0,0 +1,792 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// DB contains information for current db connection
|
||||
type DB struct {
|
||||
Value interface{}
|
||||
Error error
|
||||
RowsAffected int64
|
||||
|
||||
// single db
|
||||
db SQLCommon
|
||||
blockGlobalUpdate bool
|
||||
logMode int
|
||||
logger logger
|
||||
search *search
|
||||
values sync.Map
|
||||
|
||||
// global db
|
||||
parent *DB
|
||||
callbacks *Callback
|
||||
dialect Dialect
|
||||
singularTable bool
|
||||
}
|
||||
|
||||
// Open initialize a new db connection, need to import driver first, e.g:
|
||||
//
|
||||
// import _ "github.com/go-sql-driver/mysql"
|
||||
// func main() {
|
||||
// db, err := gorm.Open("mysql", "user:password@/dbname?charset=utf8&parseTime=True&loc=Local")
|
||||
// }
|
||||
// GORM wraps some drivers to make their import paths easier to remember, so you could import the mysql driver with
|
||||
// import _ "github.com/jinzhu/gorm/dialects/mysql"
|
||||
// // import _ "github.com/jinzhu/gorm/dialects/postgres"
|
||||
// // import _ "github.com/jinzhu/gorm/dialects/sqlite"
|
||||
// // import _ "github.com/jinzhu/gorm/dialects/mssql"
|
||||
func Open(dialect string, args ...interface{}) (db *DB, err error) {
|
||||
if len(args) == 0 {
|
||||
err = errors.New("invalid database source")
|
||||
return nil, err
|
||||
}
|
||||
var source string
|
||||
var dbSQL SQLCommon
|
||||
var ownDbSQL bool
|
||||
|
||||
switch value := args[0].(type) {
|
||||
case string:
|
||||
var driver = dialect
|
||||
if len(args) == 1 {
|
||||
source = value
|
||||
} else if len(args) >= 2 {
|
||||
driver = value
|
||||
source = args[1].(string)
|
||||
}
|
||||
dbSQL, err = sql.Open(driver, source)
|
||||
ownDbSQL = true
|
||||
case SQLCommon:
|
||||
dbSQL = value
|
||||
ownDbSQL = false
|
||||
default:
|
||||
return nil, fmt.Errorf("invalid database source: %v is not a valid type", value)
|
||||
}
|
||||
|
||||
db = &DB{
|
||||
db: dbSQL,
|
||||
logger: defaultLogger,
|
||||
callbacks: DefaultCallback,
|
||||
dialect: newDialect(dialect, dbSQL),
|
||||
}
|
||||
db.parent = db
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
// Send a ping to make sure the database connection is alive.
|
||||
if d, ok := dbSQL.(*sql.DB); ok {
|
||||
if err = d.Ping(); err != nil && ownDbSQL {
|
||||
d.Close()
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// New clone a new db connection without search conditions
|
||||
func (s *DB) New() *DB {
|
||||
clone := s.clone()
|
||||
clone.search = nil
|
||||
clone.Value = nil
|
||||
return clone
|
||||
}
|
||||
|
||||
type closer interface {
|
||||
Close() error
|
||||
}
|
||||
|
||||
// Close close current db connection. If database connection is not an io.Closer, returns an error.
|
||||
func (s *DB) Close() error {
|
||||
if db, ok := s.parent.db.(closer); ok {
|
||||
return db.Close()
|
||||
}
|
||||
return errors.New("can't close current db")
|
||||
}
|
||||
|
||||
// DB get `*sql.DB` from current connection
|
||||
// If the underlying database connection is not a *sql.DB, returns nil
|
||||
func (s *DB) DB() *sql.DB {
|
||||
db, _ := s.db.(*sql.DB)
|
||||
return db
|
||||
}
|
||||
|
||||
// CommonDB return the underlying `*sql.DB` or `*sql.Tx` instance, mainly intended to allow coexistence with legacy non-GORM code.
|
||||
func (s *DB) CommonDB() SQLCommon {
|
||||
return s.db
|
||||
}
|
||||
|
||||
// Dialect get dialect
|
||||
func (s *DB) Dialect() Dialect {
|
||||
return s.dialect
|
||||
}
|
||||
|
||||
// Callback return `Callbacks` container, you could add/change/delete callbacks with it
|
||||
// db.Callback().Create().Register("update_created_at", updateCreated)
|
||||
// Refer https://jinzhu.github.io/gorm/development.html#callbacks
|
||||
func (s *DB) Callback() *Callback {
|
||||
s.parent.callbacks = s.parent.callbacks.clone()
|
||||
return s.parent.callbacks
|
||||
}
|
||||
|
||||
// SetLogger replace default logger
|
||||
func (s *DB) SetLogger(log logger) {
|
||||
s.logger = log
|
||||
}
|
||||
|
||||
// LogMode set log mode, `true` for detailed logs, `false` for no log; by default, only error logs are printed
|
||||
func (s *DB) LogMode(enable bool) *DB {
|
||||
if enable {
|
||||
s.logMode = 2
|
||||
} else {
|
||||
s.logMode = 1
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// BlockGlobalUpdate if true, generates an error on update/delete without where clause.
|
||||
// This is to prevent eventual error with empty objects updates/deletions
|
||||
func (s *DB) BlockGlobalUpdate(enable bool) *DB {
|
||||
s.blockGlobalUpdate = enable
|
||||
return s
|
||||
}
|
||||
|
||||
// HasBlockGlobalUpdate return state of block
|
||||
func (s *DB) HasBlockGlobalUpdate() bool {
|
||||
return s.blockGlobalUpdate
|
||||
}
|
||||
|
||||
// SingularTable use singular table by default
|
||||
func (s *DB) SingularTable(enable bool) {
|
||||
modelStructsMap = sync.Map{}
|
||||
s.parent.singularTable = enable
|
||||
}
|
||||
|
||||
// NewScope create a scope for current operation
|
||||
func (s *DB) NewScope(value interface{}) *Scope {
|
||||
dbClone := s.clone()
|
||||
dbClone.Value = value
|
||||
return &Scope{db: dbClone, Search: dbClone.search.clone(), Value: value}
|
||||
}
|
||||
|
||||
// QueryExpr returns the query as expr object
|
||||
func (s *DB) QueryExpr() *expr {
|
||||
scope := s.NewScope(s.Value)
|
||||
scope.InstanceSet("skip_bindvar", true)
|
||||
scope.prepareQuerySQL()
|
||||
|
||||
return Expr(scope.SQL, scope.SQLVars...)
|
||||
}
|
||||
|
||||
// SubQuery returns the query as sub query
|
||||
func (s *DB) SubQuery() *expr {
|
||||
scope := s.NewScope(s.Value)
|
||||
scope.InstanceSet("skip_bindvar", true)
|
||||
scope.prepareQuerySQL()
|
||||
|
||||
return Expr(fmt.Sprintf("(%v)", scope.SQL), scope.SQLVars...)
|
||||
}
|
||||
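A sketch of `SubQuery` in use: the returned expression can be embedded in another query's conditions (the `Order` model is illustrative):

```go
package main

import "github.com/jinzhu/gorm"

type Order struct {
	gorm.Model
	State  string
	Amount float64
}

func aboveAverageOrders(db *gorm.DB) ([]Order, error) {
	var orders []Order
	sub := db.Table("orders").Select("AVG(amount)").Where("state = ?", "paid").SubQuery()
	err := db.Where("amount > ?", sub).Find(&orders).Error
	return orders, err
}
```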
|
||||
// Where return a new relation, filter records with given conditions, accepts `map`, `struct` or `string` as conditions, refer http://jinzhu.github.io/gorm/crud.html#query
|
||||
func (s *DB) Where(query interface{}, args ...interface{}) *DB {
|
||||
return s.clone().search.Where(query, args...).db
|
||||
}
|
||||
|
||||
// Or filter records that match before conditions or this one, similar to `Where`
|
||||
func (s *DB) Or(query interface{}, args ...interface{}) *DB {
|
||||
return s.clone().search.Or(query, args...).db
|
||||
}
|
||||
|
||||
// Not filter records that don't match current conditions, similar to `Where`
|
||||
func (s *DB) Not(query interface{}, args ...interface{}) *DB {
|
||||
return s.clone().search.Not(query, args...).db
|
||||
}
|
||||
|
||||
// Limit specify the number of records to be retrieved
|
||||
func (s *DB) Limit(limit interface{}) *DB {
|
||||
return s.clone().search.Limit(limit).db
|
||||
}
|
||||
|
||||
// Offset specify the number of records to skip before starting to return the records
|
||||
func (s *DB) Offset(offset interface{}) *DB {
|
||||
return s.clone().search.Offset(offset).db
|
||||
}
|
||||
|
||||
// Order specify order when retrieve records from database, set reorder to `true` to overwrite defined conditions
|
||||
// db.Order("name DESC")
|
||||
// db.Order("name DESC", true) // reorder
|
||||
// db.Order(gorm.Expr("name = ? DESC", "first")) // sql expression
|
||||
func (s *DB) Order(value interface{}, reorder ...bool) *DB {
|
||||
return s.clone().search.Order(value, reorder...).db
|
||||
}
|
||||
|
||||
// Select specify fields that you want to retrieve from database when querying, by default, will select all fields;
|
||||
// When creating/updating, specify fields that you want to save to database
|
||||
func (s *DB) Select(query interface{}, args ...interface{}) *DB {
|
||||
return s.clone().search.Select(query, args...).db
|
||||
}
|
||||
|
||||
// Omit specify fields that you want to ignore when saving to database for creating, updating
|
||||
func (s *DB) Omit(columns ...string) *DB {
|
||||
return s.clone().search.Omit(columns...).db
|
||||
}
|
||||
|
||||
// Group specify the group method on the find
|
||||
func (s *DB) Group(query string) *DB {
|
||||
return s.clone().search.Group(query).db
|
||||
}
|
||||
|
||||
// Having specify HAVING conditions for GROUP BY
|
||||
func (s *DB) Having(query interface{}, values ...interface{}) *DB {
|
||||
return s.clone().search.Having(query, values...).db
|
||||
}
|
||||
|
||||
// Joins specify Joins conditions
|
||||
// db.Joins("JOIN emails ON emails.user_id = users.id AND emails.email = ?", "jinzhu@example.org").Find(&user)
|
||||
func (s *DB) Joins(query string, args ...interface{}) *DB {
|
||||
return s.clone().search.Joins(query, args...).db
|
||||
}
|
||||
|
||||
// Scopes pass current database connection to arguments `func(*DB) *DB`, which could be used to add conditions dynamically
|
||||
// func AmountGreaterThan1000(db *gorm.DB) *gorm.DB {
|
||||
// return db.Where("amount > ?", 1000)
|
||||
// }
|
||||
//
|
||||
// func OrderStatus(status []string) func (db *gorm.DB) *gorm.DB {
|
||||
// return func (db *gorm.DB) *gorm.DB {
|
||||
// return db.Scopes(AmountGreaterThan1000).Where("status in (?)", status)
|
||||
// }
|
||||
// }
|
||||
//
|
||||
// db.Scopes(AmountGreaterThan1000, OrderStatus([]string{"paid", "shipped"})).Find(&orders)
|
||||
// Refer https://jinzhu.github.io/gorm/crud.html#scopes
|
||||
func (s *DB) Scopes(funcs ...func(*DB) *DB) *DB {
|
||||
for _, f := range funcs {
|
||||
s = f(s)
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// Unscoped return all record including deleted record, refer Soft Delete https://jinzhu.github.io/gorm/crud.html#soft-delete
|
||||
func (s *DB) Unscoped() *DB {
|
||||
return s.clone().search.unscoped().db
|
||||
}
|
||||
|
||||
// Attrs initialize struct with argument if record not found with `FirstOrInit` https://jinzhu.github.io/gorm/crud.html#firstorinit or `FirstOrCreate` https://jinzhu.github.io/gorm/crud.html#firstorcreate
|
||||
func (s *DB) Attrs(attrs ...interface{}) *DB {
|
||||
return s.clone().search.Attrs(attrs...).db
|
||||
}
|
||||
|
||||
// Assign assign result with argument regardless it is found or not with `FirstOrInit` https://jinzhu.github.io/gorm/crud.html#firstorinit or `FirstOrCreate` https://jinzhu.github.io/gorm/crud.html#firstorcreate
|
||||
func (s *DB) Assign(attrs ...interface{}) *DB {
|
||||
return s.clone().search.Assign(attrs...).db
|
||||
}
|
||||
|
||||
// First find first record that match given conditions, order by primary key
|
||||
func (s *DB) First(out interface{}, where ...interface{}) *DB {
|
||||
newScope := s.NewScope(out)
|
||||
newScope.Search.Limit(1)
|
||||
return newScope.Set("gorm:order_by_primary_key", "ASC").
|
||||
inlineCondition(where...).callCallbacks(s.parent.callbacks.queries).db
|
||||
}
|
||||
|
||||
// Take return a record that match given conditions, the order will depend on the database implementation
|
||||
func (s *DB) Take(out interface{}, where ...interface{}) *DB {
|
||||
newScope := s.NewScope(out)
|
||||
newScope.Search.Limit(1)
|
||||
return newScope.inlineCondition(where...).callCallbacks(s.parent.callbacks.queries).db
|
||||
}
|
||||
|
||||
// Last find last record that match given conditions, order by primary key
|
||||
func (s *DB) Last(out interface{}, where ...interface{}) *DB {
|
||||
newScope := s.NewScope(out)
|
||||
newScope.Search.Limit(1)
|
||||
return newScope.Set("gorm:order_by_primary_key", "DESC").
|
||||
inlineCondition(where...).callCallbacks(s.parent.callbacks.queries).db
|
||||
}
|
||||
|
||||
// Find find records that match given conditions
|
||||
func (s *DB) Find(out interface{}, where ...interface{}) *DB {
|
||||
return s.NewScope(out).inlineCondition(where...).callCallbacks(s.parent.callbacks.queries).db
|
||||
}
|
||||
|
||||
// Preloads preloads relations without touching out
|
||||
func (s *DB) Preloads(out interface{}) *DB {
|
||||
return s.NewScope(out).InstanceSet("gorm:only_preload", 1).callCallbacks(s.parent.callbacks.queries).db
|
||||
}
|
||||
|
||||
// Scan scan value to a struct
|
||||
func (s *DB) Scan(dest interface{}) *DB {
|
||||
return s.NewScope(s.Value).Set("gorm:query_destination", dest).callCallbacks(s.parent.callbacks.queries).db
|
||||
}
|
||||
|
||||
// Row return `*sql.Row` with given conditions
|
||||
func (s *DB) Row() *sql.Row {
|
||||
return s.NewScope(s.Value).row()
|
||||
}
|
||||
|
||||
// Rows return `*sql.Rows` with given conditions
|
||||
func (s *DB) Rows() (*sql.Rows, error) {
|
||||
return s.NewScope(s.Value).rows()
|
||||
}
|
||||
|
||||
// ScanRows scan `*sql.Rows` to given struct
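// Illustrative usage (not part of the vendored source), assuming a `User` model:
//     rows, err := db.Model(&User{}).Where("age > ?", 20).Rows()
//     if err == nil {
//         defer rows.Close()
//         for rows.Next() {
//             var user User
//             db.ScanRows(rows, &user)
//         }
//     }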
|
||||
func (s *DB) ScanRows(rows *sql.Rows, result interface{}) error {
|
||||
var (
|
||||
scope = s.NewScope(result)
|
||||
clone = scope.db
|
||||
columns, err = rows.Columns()
|
||||
)
|
||||
|
||||
if clone.AddError(err) == nil {
|
||||
scope.scan(rows, columns, scope.Fields())
|
||||
}
|
||||
|
||||
return clone.Error
|
||||
}
|
||||
|
||||
// Pluck used to query a single column from a model into a slice
|
||||
// var ages []int64
|
||||
// db.Find(&users).Pluck("age", &ages)
|
||||
func (s *DB) Pluck(column string, value interface{}) *DB {
|
||||
return s.NewScope(s.Value).pluck(column, value).db
|
||||
}
|
||||
|
||||
// Count get how many records for a model
|
||||
func (s *DB) Count(value interface{}) *DB {
|
||||
return s.NewScope(s.Value).count(value).db
|
||||
}
|
||||
|
||||
// Related get related associations
|
||||
func (s *DB) Related(value interface{}, foreignKeys ...string) *DB {
|
||||
return s.NewScope(s.Value).related(value, foreignKeys...).db
|
||||
}
|
||||
|
||||
// FirstOrInit find first matched record or initialize a new one with given conditions (only works with struct, map conditions)
|
||||
// https://jinzhu.github.io/gorm/crud.html#firstorinit
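// Illustrative usage (not part of the vendored source), assuming a `User` model with
// Name and Age fields:
//     // find a user named "non_existing", or initialize (without saving) one with Age 20
//     db.Where(User{Name: "non_existing"}).Attrs(User{Age: 20}).FirstOrInit(&user)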
|
||||
func (s *DB) FirstOrInit(out interface{}, where ...interface{}) *DB {
|
||||
c := s.clone()
|
||||
if result := c.First(out, where...); result.Error != nil {
|
||||
if !result.RecordNotFound() {
|
||||
return result
|
||||
}
|
||||
c.NewScope(out).inlineCondition(where...).initialize()
|
||||
} else {
|
||||
c.NewScope(out).updatedAttrsWithValues(c.search.assignAttrs)
|
||||
}
|
||||
return c
|
||||
}
|
||||
|
||||
// FirstOrCreate find first matched record or create a new one with given conditions (only works with struct, map conditions)
|
||||
// https://jinzhu.github.io/gorm/crud.html#firstorcreate
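// Illustrative usage (not part of the vendored source), assuming the same `User` model:
//     // fetch the user named "jinzhu", create the row if it does not exist,
//     // and assign Age 30 whether it was found or created
//     db.Where(User{Name: "jinzhu"}).Assign(User{Age: 30}).FirstOrCreate(&user)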
|
||||
func (s *DB) FirstOrCreate(out interface{}, where ...interface{}) *DB {
|
||||
c := s.clone()
|
||||
if result := s.First(out, where...); result.Error != nil {
|
||||
if !result.RecordNotFound() {
|
||||
return result
|
||||
}
|
||||
return c.NewScope(out).inlineCondition(where...).initialize().callCallbacks(c.parent.callbacks.creates).db
|
||||
} else if len(c.search.assignAttrs) > 0 {
|
||||
return c.NewScope(out).InstanceSet("gorm:update_interface", c.search.assignAttrs).callCallbacks(c.parent.callbacks.updates).db
|
||||
}
|
||||
return c
|
||||
}
|
||||
|
||||
// Update update attributes with callbacks, refer: https://jinzhu.github.io/gorm/crud.html#update
|
||||
func (s *DB) Update(attrs ...interface{}) *DB {
|
||||
return s.Updates(toSearchableMap(attrs...), true)
|
||||
}
|
||||
|
||||
// Updates update attributes with callbacks, refer: https://jinzhu.github.io/gorm/crud.html#update
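// Illustrative usage (not part of the vendored source): a map updates the listed
// columns, while a struct only updates its non-zero fields.
//     db.Model(&user).Updates(map[string]interface{}{"name": "hello", "age": 18})
//     db.Model(&user).Updates(User{Name: "hello"}) // zero-value fields are skipped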
|
||||
func (s *DB) Updates(values interface{}, ignoreProtectedAttrs ...bool) *DB {
|
||||
return s.NewScope(s.Value).
|
||||
Set("gorm:ignore_protected_attrs", len(ignoreProtectedAttrs) > 0).
|
||||
InstanceSet("gorm:update_interface", values).
|
||||
callCallbacks(s.parent.callbacks.updates).db
|
||||
}
|
||||
|
||||
// UpdateColumn update attributes without callbacks, refer: https://jinzhu.github.io/gorm/crud.html#update
|
||||
func (s *DB) UpdateColumn(attrs ...interface{}) *DB {
|
||||
return s.UpdateColumns(toSearchableMap(attrs...))
|
||||
}
|
||||
|
||||
// UpdateColumns update attributes without callbacks, refer: https://jinzhu.github.io/gorm/crud.html#update
|
||||
func (s *DB) UpdateColumns(values interface{}) *DB {
|
||||
return s.NewScope(s.Value).
|
||||
Set("gorm:update_column", true).
|
||||
Set("gorm:save_associations", false).
|
||||
InstanceSet("gorm:update_interface", values).
|
||||
callCallbacks(s.parent.callbacks.updates).db
|
||||
}
|
||||
|
||||
// Save update value in database, if the value doesn't have primary key, will insert it
|
||||
func (s *DB) Save(value interface{}) *DB {
|
||||
scope := s.NewScope(value)
|
||||
if !scope.PrimaryKeyZero() {
|
||||
newDB := scope.callCallbacks(s.parent.callbacks.updates).db
|
||||
if newDB.Error == nil && newDB.RowsAffected == 0 {
|
||||
return s.New().FirstOrCreate(value)
|
||||
}
|
||||
return newDB
|
||||
}
|
||||
return scope.callCallbacks(s.parent.callbacks.creates).db
|
||||
}
|
||||
|
||||
// Create insert the value into database
|
||||
func (s *DB) Create(value interface{}) *DB {
|
||||
scope := s.NewScope(value)
|
||||
return scope.callCallbacks(s.parent.callbacks.creates).db
|
||||
}
|
||||
|
||||
// Delete delete value match given conditions, if the value has primary key, then will include the primary key as condition
|
||||
func (s *DB) Delete(value interface{}, where ...interface{}) *DB {
|
||||
return s.NewScope(value).inlineCondition(where...).callCallbacks(s.parent.callbacks.deletes).db
|
||||
}
|
||||
|
||||
// Raw use raw sql as conditions, won't run it unless invoked by other methods
|
||||
// db.Raw("SELECT name, age FROM users WHERE name = ?", 3).Scan(&result)
|
||||
func (s *DB) Raw(sql string, values ...interface{}) *DB {
|
||||
return s.clone().search.Raw(true).Where(sql, values...).db
|
||||
}
|
||||
|
||||
// Exec execute raw sql
|
||||
func (s *DB) Exec(sql string, values ...interface{}) *DB {
|
||||
scope := s.NewScope(nil)
|
||||
generatedSQL := scope.buildCondition(map[string]interface{}{"query": sql, "args": values}, true)
|
||||
generatedSQL = strings.TrimSuffix(strings.TrimPrefix(generatedSQL, "("), ")")
|
||||
scope.Raw(generatedSQL)
|
||||
return scope.Exec().db
|
||||
}
|
||||
|
||||
// Model specify the model you would like to run db operations
|
||||
// // update all users' names to `hello`
|
||||
// db.Model(&User{}).Update("name", "hello")
|
||||
// // if user's primary key is non-blank, will use it as condition, then will only update the user's name to `hello`
|
||||
// db.Model(&user).Update("name", "hello")
|
||||
func (s *DB) Model(value interface{}) *DB {
|
||||
c := s.clone()
|
||||
c.Value = value
|
||||
return c
|
||||
}
|
||||
|
||||
// Table specify the table you would like to run db operations
|
||||
func (s *DB) Table(name string) *DB {
|
||||
clone := s.clone()
|
||||
clone.search.Table(name)
|
||||
clone.Value = nil
|
||||
return clone
|
||||
}
|
||||
|
||||
// Debug start debug mode
|
||||
func (s *DB) Debug() *DB {
|
||||
return s.clone().LogMode(true)
|
||||
}
|
||||
|
||||
// Begin begin a transaction
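// Illustrative usage (not part of the vendored source): Begin, Commit and Rollback
// are typically combined as below, assuming a `User` model.
//     tx := db.Begin()
//     if err := tx.Create(&user).Error; err != nil {
//         tx.Rollback()
//         return err
//     }
//     return tx.Commit().Error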
|
||||
func (s *DB) Begin() *DB {
|
||||
c := s.clone()
|
||||
if db, ok := c.db.(sqlDb); ok && db != nil {
|
||||
tx, err := db.Begin()
|
||||
c.db = interface{}(tx).(SQLCommon)
|
||||
|
||||
c.dialect.SetDB(c.db)
|
||||
c.AddError(err)
|
||||
} else {
|
||||
c.AddError(ErrCantStartTransaction)
|
||||
}
|
||||
return c
|
||||
}
|
||||
|
||||
// Commit commit a transaction
|
||||
func (s *DB) Commit() *DB {
|
||||
var emptySQLTx *sql.Tx
|
||||
if db, ok := s.db.(sqlTx); ok && db != nil && db != emptySQLTx {
|
||||
s.AddError(db.Commit())
|
||||
} else {
|
||||
s.AddError(ErrInvalidTransaction)
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// Rollback rollback a transaction
|
||||
func (s *DB) Rollback() *DB {
|
||||
var emptySQLTx *sql.Tx
|
||||
if db, ok := s.db.(sqlTx); ok && db != nil && db != emptySQLTx {
|
||||
s.AddError(db.Rollback())
|
||||
} else {
|
||||
s.AddError(ErrInvalidTransaction)
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// NewRecord check if value's primary key is blank
|
||||
func (s *DB) NewRecord(value interface{}) bool {
|
||||
return s.NewScope(value).PrimaryKeyZero()
|
||||
}
|
||||
|
||||
// RecordNotFound check if returning ErrRecordNotFound error
|
||||
func (s *DB) RecordNotFound() bool {
|
||||
for _, err := range s.GetErrors() {
|
||||
if err == ErrRecordNotFound {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// CreateTable create table for models
|
||||
func (s *DB) CreateTable(models ...interface{}) *DB {
|
||||
db := s.Unscoped()
|
||||
for _, model := range models {
|
||||
db = db.NewScope(model).createTable().db
|
||||
}
|
||||
return db
|
||||
}
|
||||
|
||||
// DropTable drop table for models
|
||||
func (s *DB) DropTable(values ...interface{}) *DB {
|
||||
db := s.clone()
|
||||
for _, value := range values {
|
||||
if tableName, ok := value.(string); ok {
|
||||
db = db.Table(tableName)
|
||||
}
|
||||
|
||||
db = db.NewScope(value).dropTable().db
|
||||
}
|
||||
return db
|
||||
}
|
||||
|
||||
// DropTableIfExists drop table if it exists
|
||||
func (s *DB) DropTableIfExists(values ...interface{}) *DB {
|
||||
db := s.clone()
|
||||
for _, value := range values {
|
||||
if s.HasTable(value) {
|
||||
db.AddError(s.DropTable(value).Error)
|
||||
}
|
||||
}
|
||||
return db
|
||||
}
|
||||
|
||||
// HasTable check if table exists for given value or table name
|
||||
func (s *DB) HasTable(value interface{}) bool {
|
||||
var (
|
||||
scope = s.NewScope(value)
|
||||
tableName string
|
||||
)
|
||||
|
||||
if name, ok := value.(string); ok {
|
||||
tableName = name
|
||||
} else {
|
||||
tableName = scope.TableName()
|
||||
}
|
||||
|
||||
has := scope.Dialect().HasTable(tableName)
|
||||
s.AddError(scope.db.Error)
|
||||
return has
|
||||
}
|
||||
|
||||
// AutoMigrate run auto migration for given models, will only add missing fields, won't delete/change current data
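// Illustrative usage (not part of the vendored source), assuming `User` and `Product` models:
//     db.AutoMigrate(&User{}, &Product{})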
|
||||
func (s *DB) AutoMigrate(values ...interface{}) *DB {
|
||||
db := s.Unscoped()
|
||||
for _, value := range values {
|
||||
db = db.NewScope(value).autoMigrate().db
|
||||
}
|
||||
return db
|
||||
}
|
||||
|
||||
// ModifyColumn modify column to type
|
||||
func (s *DB) ModifyColumn(column string, typ string) *DB {
|
||||
scope := s.NewScope(s.Value)
|
||||
scope.modifyColumn(column, typ)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// DropColumn drop a column
|
||||
func (s *DB) DropColumn(column string) *DB {
|
||||
scope := s.NewScope(s.Value)
|
||||
scope.dropColumn(column)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// AddIndex add index for columns with given name
|
||||
func (s *DB) AddIndex(indexName string, columns ...string) *DB {
|
||||
scope := s.Unscoped().NewScope(s.Value)
|
||||
scope.addIndex(false, indexName, columns...)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// AddUniqueIndex add unique index for columns with given name
|
||||
func (s *DB) AddUniqueIndex(indexName string, columns ...string) *DB {
|
||||
scope := s.Unscoped().NewScope(s.Value)
|
||||
scope.addIndex(true, indexName, columns...)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// RemoveIndex remove index with name
|
||||
func (s *DB) RemoveIndex(indexName string) *DB {
|
||||
scope := s.NewScope(s.Value)
|
||||
scope.removeIndex(indexName)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// AddForeignKey Add foreign key to the given scope, e.g:
|
||||
// db.Model(&User{}).AddForeignKey("city_id", "cities(id)", "RESTRICT", "RESTRICT")
|
||||
func (s *DB) AddForeignKey(field string, dest string, onDelete string, onUpdate string) *DB {
|
||||
scope := s.NewScope(s.Value)
|
||||
scope.addForeignKey(field, dest, onDelete, onUpdate)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// RemoveForeignKey Remove foreign key from the given scope, e.g:
|
||||
// db.Model(&User{}).RemoveForeignKey("city_id", "cities(id)")
|
||||
func (s *DB) RemoveForeignKey(field string, dest string) *DB {
|
||||
scope := s.clone().NewScope(s.Value)
|
||||
scope.removeForeignKey(field, dest)
|
||||
return scope.db
|
||||
}
|
||||
|
||||
// Association start `Association Mode` to handle relations more easily in that mode, refer: https://jinzhu.github.io/gorm/associations.html#association-mode
|
||||
func (s *DB) Association(column string) *Association {
|
||||
var err error
|
||||
var scope = s.Set("gorm:association:source", s.Value).NewScope(s.Value)
|
||||
|
||||
if primaryField := scope.PrimaryField(); primaryField.IsBlank {
|
||||
err = errors.New("primary key can't be nil")
|
||||
} else {
|
||||
if field, ok := scope.FieldByName(column); ok {
|
||||
if field.Relationship == nil || len(field.Relationship.ForeignFieldNames) == 0 {
|
||||
err = fmt.Errorf("invalid association %v for %v", column, scope.IndirectValue().Type())
|
||||
} else {
|
||||
return &Association{scope: scope, column: column, field: field}
|
||||
}
|
||||
} else {
|
||||
err = fmt.Errorf("%v doesn't have column %v", scope.IndirectValue().Type(), column)
|
||||
}
|
||||
}
|
||||
|
||||
return &Association{Error: err}
|
||||
}
|
||||
|
||||
// Preload preload associations with given conditions
|
||||
// db.Preload("Orders", "state NOT IN (?)", "cancelled").Find(&users)
|
||||
func (s *DB) Preload(column string, conditions ...interface{}) *DB {
|
||||
return s.clone().search.Preload(column, conditions...).db
|
||||
}
|
||||
|
||||
// Set set setting by name, which could be used in callbacks, will clone a new db, and update its setting
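// Illustrative usage (not part of the vendored source); "gorm:insert_option" is one of
// the settings read by the create callbacks:
//     tx := db.Set("gorm:insert_option", "ON CONFLICT DO NOTHING")
//     tx.Create(&user) // callbacks can read the setting back via tx.Get("gorm:insert_option")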
|
||||
func (s *DB) Set(name string, value interface{}) *DB {
|
||||
return s.clone().InstantSet(name, value)
|
||||
}
|
||||
|
||||
// InstantSet instant set setting, will affect current db
|
||||
func (s *DB) InstantSet(name string, value interface{}) *DB {
|
||||
s.values.Store(name, value)
|
||||
return s
|
||||
}
|
||||
|
||||
// Get get setting by name
|
||||
func (s *DB) Get(name string) (value interface{}, ok bool) {
|
||||
value, ok = s.values.Load(name)
|
||||
return
|
||||
}
|
||||
|
||||
// SetJoinTableHandler set a model's join table handler for a relation
|
||||
func (s *DB) SetJoinTableHandler(source interface{}, column string, handler JoinTableHandlerInterface) {
|
||||
scope := s.NewScope(source)
|
||||
for _, field := range scope.GetModelStruct().StructFields {
|
||||
if field.Name == column || field.DBName == column {
|
||||
if many2many, _ := field.TagSettingsGet("MANY2MANY"); many2many != "" {
|
||||
source := (&Scope{Value: source}).GetModelStruct().ModelType
|
||||
destination := (&Scope{Value: reflect.New(field.Struct.Type).Interface()}).GetModelStruct().ModelType
|
||||
handler.Setup(field.Relationship, many2many, source, destination)
|
||||
field.Relationship.JoinTableHandler = handler
|
||||
if table := handler.Table(s); scope.Dialect().HasTable(table) {
|
||||
s.Table(table).AutoMigrate(handler)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// AddError add error to the db
|
||||
func (s *DB) AddError(err error) error {
|
||||
if err != nil {
|
||||
if err != ErrRecordNotFound {
|
||||
if s.logMode == 0 {
|
||||
go s.print(fileWithLineNum(), err)
|
||||
} else {
|
||||
s.log(err)
|
||||
}
|
||||
|
||||
errors := Errors(s.GetErrors())
|
||||
errors = errors.Add(err)
|
||||
if len(errors) > 1 {
|
||||
err = errors
|
||||
}
|
||||
}
|
||||
|
||||
s.Error = err
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// GetErrors get the errors that have happened on the db
|
||||
func (s *DB) GetErrors() []error {
|
||||
if errs, ok := s.Error.(Errors); ok {
|
||||
return errs
|
||||
} else if s.Error != nil {
|
||||
return []error{s.Error}
|
||||
}
|
||||
return []error{}
|
||||
}
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
// Private Methods For DB
|
||||
////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
func (s *DB) clone() *DB {
|
||||
db := &DB{
|
||||
db: s.db,
|
||||
parent: s.parent,
|
||||
logger: s.logger,
|
||||
logMode: s.logMode,
|
||||
Value: s.Value,
|
||||
Error: s.Error,
|
||||
blockGlobalUpdate: s.blockGlobalUpdate,
|
||||
dialect: newDialect(s.dialect.GetName(), s.db),
|
||||
}
|
||||
|
||||
s.values.Range(func(k, v interface{}) bool {
|
||||
db.values.Store(k, v)
|
||||
return true
|
||||
})
|
||||
|
||||
if s.search == nil {
|
||||
db.search = &search{limit: -1, offset: -1}
|
||||
} else {
|
||||
db.search = s.search.clone()
|
||||
}
|
||||
|
||||
db.search.db = db
|
||||
return db
|
||||
}
|
||||
|
||||
func (s *DB) print(v ...interface{}) {
|
||||
s.logger.Print(v...)
|
||||
}
|
||||
|
||||
func (s *DB) log(v ...interface{}) {
|
||||
if s != nil && s.logMode == 2 {
|
||||
s.print(append([]interface{}{"log", fileWithLineNum()}, v...)...)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *DB) slog(sql string, t time.Time, vars ...interface{}) {
|
||||
if s.logMode == 2 {
|
||||
s.print("sql", fileWithLineNum(), NowFunc().Sub(t), sql, vars, s.RowsAffected)
|
||||
}
|
||||
}
|
||||
14
vendor/github.com/jinzhu/gorm/model.go
generated
vendored
Normal file
14
vendor/github.com/jinzhu/gorm/model.go
generated
vendored
Normal file
@@ -0,0 +1,14 @@
|
||||
package gorm

import "time"

// Model base model definition, including fields `ID`, `CreatedAt`, `UpdatedAt`, `DeletedAt`, which could be embedded in your models
//    type User struct {
//      gorm.Model
//    }
type Model struct {
	ID        uint `gorm:"primary_key"`
	CreatedAt time.Time
	UpdatedAt time.Time
	DeletedAt *time.Time `sql:"index"`
}
640
vendor/github.com/jinzhu/gorm/model_struct.go
generated
vendored
Normal file
640
vendor/github.com/jinzhu/gorm/model_struct.go
generated
vendored
Normal file
@@ -0,0 +1,640 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"errors"
|
||||
"go/ast"
|
||||
"reflect"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/jinzhu/inflection"
|
||||
)
|
||||
|
||||
// DefaultTableNameHandler default table name handler
|
||||
var DefaultTableNameHandler = func(db *DB, defaultTableName string) string {
|
||||
return defaultTableName
|
||||
}
|
||||
|
||||
var modelStructsMap sync.Map
|
||||
|
||||
// ModelStruct model definition
|
||||
type ModelStruct struct {
|
||||
PrimaryFields []*StructField
|
||||
StructFields []*StructField
|
||||
ModelType reflect.Type
|
||||
|
||||
defaultTableName string
|
||||
l sync.Mutex
|
||||
}
|
||||
|
||||
// TableName returns model's table name
|
||||
func (s *ModelStruct) TableName(db *DB) string {
|
||||
s.l.Lock()
|
||||
defer s.l.Unlock()
|
||||
|
||||
if s.defaultTableName == "" && db != nil && s.ModelType != nil {
|
||||
// Set default table name
|
||||
if tabler, ok := reflect.New(s.ModelType).Interface().(tabler); ok {
|
||||
s.defaultTableName = tabler.TableName()
|
||||
} else {
|
||||
tableName := ToTableName(s.ModelType.Name())
|
||||
if db == nil || !db.parent.singularTable {
|
||||
tableName = inflection.Plural(tableName)
|
||||
}
|
||||
s.defaultTableName = tableName
|
||||
}
|
||||
}
|
||||
|
||||
return DefaultTableNameHandler(db, s.defaultTableName)
|
||||
}
|
||||
|
||||
// StructField model field's struct definition
|
||||
type StructField struct {
|
||||
DBName string
|
||||
Name string
|
||||
Names []string
|
||||
IsPrimaryKey bool
|
||||
IsNormal bool
|
||||
IsIgnored bool
|
||||
IsScanner bool
|
||||
HasDefaultValue bool
|
||||
Tag reflect.StructTag
|
||||
TagSettings map[string]string
|
||||
Struct reflect.StructField
|
||||
IsForeignKey bool
|
||||
Relationship *Relationship
|
||||
|
||||
tagSettingsLock sync.RWMutex
|
||||
}
|
||||
|
||||
// TagSettingsSet Sets a tag in the tag settings map
|
||||
func (s *StructField) TagSettingsSet(key, val string) {
|
||||
s.tagSettingsLock.Lock()
|
||||
defer s.tagSettingsLock.Unlock()
|
||||
s.TagSettings[key] = val
|
||||
}
|
||||
|
||||
// TagSettingsGet returns a tag from the tag settings
|
||||
func (s *StructField) TagSettingsGet(key string) (string, bool) {
|
||||
s.tagSettingsLock.RLock()
|
||||
defer s.tagSettingsLock.RUnlock()
|
||||
val, ok := s.TagSettings[key]
|
||||
return val, ok
|
||||
}
|
||||
|
||||
// TagSettingsDelete deletes a tag
|
||||
func (s *StructField) TagSettingsDelete(key string) {
|
||||
s.tagSettingsLock.Lock()
|
||||
defer s.tagSettingsLock.Unlock()
|
||||
delete(s.TagSettings, key)
|
||||
}
|
||||
|
||||
func (structField *StructField) clone() *StructField {
|
||||
clone := &StructField{
|
||||
DBName: structField.DBName,
|
||||
Name: structField.Name,
|
||||
Names: structField.Names,
|
||||
IsPrimaryKey: structField.IsPrimaryKey,
|
||||
IsNormal: structField.IsNormal,
|
||||
IsIgnored: structField.IsIgnored,
|
||||
IsScanner: structField.IsScanner,
|
||||
HasDefaultValue: structField.HasDefaultValue,
|
||||
Tag: structField.Tag,
|
||||
TagSettings: map[string]string{},
|
||||
Struct: structField.Struct,
|
||||
IsForeignKey: structField.IsForeignKey,
|
||||
}
|
||||
|
||||
if structField.Relationship != nil {
|
||||
relationship := *structField.Relationship
|
||||
clone.Relationship = &relationship
|
||||
}
|
||||
|
||||
// copy the struct field tagSettings, they should be read-locked while they are copied
|
||||
structField.tagSettingsLock.Lock()
|
||||
defer structField.tagSettingsLock.Unlock()
|
||||
for key, value := range structField.TagSettings {
|
||||
clone.TagSettings[key] = value
|
||||
}
|
||||
|
||||
return clone
|
||||
}
|
||||
|
||||
// Relationship describes the relationship between models
|
||||
type Relationship struct {
|
||||
Kind string
|
||||
PolymorphicType string
|
||||
PolymorphicDBName string
|
||||
PolymorphicValue string
|
||||
ForeignFieldNames []string
|
||||
ForeignDBNames []string
|
||||
AssociationForeignFieldNames []string
|
||||
AssociationForeignDBNames []string
|
||||
JoinTableHandler JoinTableHandlerInterface
|
||||
}
|
||||
|
||||
func getForeignField(column string, fields []*StructField) *StructField {
|
||||
for _, field := range fields {
|
||||
if field.Name == column || field.DBName == column || field.DBName == ToColumnName(column) {
|
||||
return field
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetModelStruct get value's model struct, relationships based on struct and tag definition
|
||||
func (scope *Scope) GetModelStruct() *ModelStruct {
|
||||
var modelStruct ModelStruct
|
||||
// Scope value can't be nil
|
||||
if scope.Value == nil {
|
||||
return &modelStruct
|
||||
}
|
||||
|
||||
reflectType := reflect.ValueOf(scope.Value).Type()
|
||||
for reflectType.Kind() == reflect.Slice || reflectType.Kind() == reflect.Ptr {
|
||||
reflectType = reflectType.Elem()
|
||||
}
|
||||
|
||||
// Scope value needs to be a struct
|
||||
if reflectType.Kind() != reflect.Struct {
|
||||
return &modelStruct
|
||||
}
|
||||
|
||||
// Get Cached model struct
|
||||
if value, ok := modelStructsMap.Load(reflectType); ok && value != nil {
|
||||
return value.(*ModelStruct)
|
||||
}
|
||||
|
||||
modelStruct.ModelType = reflectType
|
||||
|
||||
// Get all fields
|
||||
for i := 0; i < reflectType.NumField(); i++ {
|
||||
if fieldStruct := reflectType.Field(i); ast.IsExported(fieldStruct.Name) {
|
||||
field := &StructField{
|
||||
Struct: fieldStruct,
|
||||
Name: fieldStruct.Name,
|
||||
Names: []string{fieldStruct.Name},
|
||||
Tag: fieldStruct.Tag,
|
||||
TagSettings: parseTagSetting(fieldStruct.Tag),
|
||||
}
|
||||
|
||||
// is ignored field
|
||||
if _, ok := field.TagSettingsGet("-"); ok {
|
||||
field.IsIgnored = true
|
||||
} else {
|
||||
if _, ok := field.TagSettingsGet("PRIMARY_KEY"); ok {
|
||||
field.IsPrimaryKey = true
|
||||
modelStruct.PrimaryFields = append(modelStruct.PrimaryFields, field)
|
||||
}
|
||||
|
||||
if _, ok := field.TagSettingsGet("DEFAULT"); ok {
|
||||
field.HasDefaultValue = true
|
||||
}
|
||||
|
||||
if _, ok := field.TagSettingsGet("AUTO_INCREMENT"); ok && !field.IsPrimaryKey {
|
||||
field.HasDefaultValue = true
|
||||
}
|
||||
|
||||
indirectType := fieldStruct.Type
|
||||
for indirectType.Kind() == reflect.Ptr {
|
||||
indirectType = indirectType.Elem()
|
||||
}
|
||||
|
||||
fieldValue := reflect.New(indirectType).Interface()
|
||||
if _, isScanner := fieldValue.(sql.Scanner); isScanner {
|
||||
// is scanner
|
||||
field.IsScanner, field.IsNormal = true, true
|
||||
if indirectType.Kind() == reflect.Struct {
|
||||
for i := 0; i < indirectType.NumField(); i++ {
|
||||
for key, value := range parseTagSetting(indirectType.Field(i).Tag) {
|
||||
if _, ok := field.TagSettingsGet(key); !ok {
|
||||
field.TagSettingsSet(key, value)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
} else if _, isTime := fieldValue.(*time.Time); isTime {
|
||||
// is time
|
||||
field.IsNormal = true
|
||||
} else if _, ok := field.TagSettingsGet("EMBEDDED"); ok || fieldStruct.Anonymous {
|
||||
// is embedded struct
|
||||
for _, subField := range scope.New(fieldValue).GetModelStruct().StructFields {
|
||||
subField = subField.clone()
|
||||
subField.Names = append([]string{fieldStruct.Name}, subField.Names...)
|
||||
if prefix, ok := field.TagSettingsGet("EMBEDDED_PREFIX"); ok {
|
||||
subField.DBName = prefix + subField.DBName
|
||||
}
|
||||
|
||||
if subField.IsPrimaryKey {
|
||||
if _, ok := subField.TagSettingsGet("PRIMARY_KEY"); ok {
|
||||
modelStruct.PrimaryFields = append(modelStruct.PrimaryFields, subField)
|
||||
} else {
|
||||
subField.IsPrimaryKey = false
|
||||
}
|
||||
}
|
||||
|
||||
if subField.Relationship != nil && subField.Relationship.JoinTableHandler != nil {
|
||||
if joinTableHandler, ok := subField.Relationship.JoinTableHandler.(*JoinTableHandler); ok {
|
||||
newJoinTableHandler := &JoinTableHandler{}
|
||||
newJoinTableHandler.Setup(subField.Relationship, joinTableHandler.TableName, reflectType, joinTableHandler.Destination.ModelType)
|
||||
subField.Relationship.JoinTableHandler = newJoinTableHandler
|
||||
}
|
||||
}
|
||||
|
||||
modelStruct.StructFields = append(modelStruct.StructFields, subField)
|
||||
}
|
||||
continue
|
||||
} else {
|
||||
// build relationships
|
||||
switch indirectType.Kind() {
|
||||
case reflect.Slice:
|
||||
defer func(field *StructField) {
|
||||
var (
|
||||
relationship = &Relationship{}
|
||||
toScope = scope.New(reflect.New(field.Struct.Type).Interface())
|
||||
foreignKeys []string
|
||||
associationForeignKeys []string
|
||||
elemType = field.Struct.Type
|
||||
)
|
||||
|
||||
if foreignKey, _ := field.TagSettingsGet("FOREIGNKEY"); foreignKey != "" {
|
||||
foreignKeys = strings.Split(foreignKey, ",")
|
||||
}
|
||||
|
||||
if foreignKey, _ := field.TagSettingsGet("ASSOCIATION_FOREIGNKEY"); foreignKey != "" {
|
||||
associationForeignKeys = strings.Split(foreignKey, ",")
|
||||
} else if foreignKey, _ := field.TagSettingsGet("ASSOCIATIONFOREIGNKEY"); foreignKey != "" {
|
||||
associationForeignKeys = strings.Split(foreignKey, ",")
|
||||
}
|
||||
|
||||
for elemType.Kind() == reflect.Slice || elemType.Kind() == reflect.Ptr {
|
||||
elemType = elemType.Elem()
|
||||
}
|
||||
|
||||
if elemType.Kind() == reflect.Struct {
|
||||
if many2many, _ := field.TagSettingsGet("MANY2MANY"); many2many != "" {
|
||||
relationship.Kind = "many_to_many"
|
||||
|
||||
{ // Foreign Keys for Source
|
||||
joinTableDBNames := []string{}
|
||||
|
||||
if foreignKey, _ := field.TagSettingsGet("JOINTABLE_FOREIGNKEY"); foreignKey != "" {
|
||||
joinTableDBNames = strings.Split(foreignKey, ",")
|
||||
}
|
||||
|
||||
// if no foreign keys defined with tag
|
||||
if len(foreignKeys) == 0 {
|
||||
for _, field := range modelStruct.PrimaryFields {
|
||||
foreignKeys = append(foreignKeys, field.DBName)
|
||||
}
|
||||
}
|
||||
|
||||
for idx, foreignKey := range foreignKeys {
|
||||
if foreignField := getForeignField(foreignKey, modelStruct.StructFields); foreignField != nil {
|
||||
// source foreign keys (db names)
|
||||
relationship.ForeignFieldNames = append(relationship.ForeignFieldNames, foreignField.DBName)
|
||||
|
||||
// setup join table foreign keys for source
|
||||
if len(joinTableDBNames) > idx {
|
||||
// if defined join table's foreign key
|
||||
relationship.ForeignDBNames = append(relationship.ForeignDBNames, joinTableDBNames[idx])
|
||||
} else {
|
||||
defaultJointableForeignKey := ToColumnName(reflectType.Name()) + "_" + foreignField.DBName
|
||||
relationship.ForeignDBNames = append(relationship.ForeignDBNames, defaultJointableForeignKey)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{ // Foreign Keys for Association (Destination)
|
||||
associationJoinTableDBNames := []string{}
|
||||
|
||||
if foreignKey, _ := field.TagSettingsGet("ASSOCIATION_JOINTABLE_FOREIGNKEY"); foreignKey != "" {
|
||||
associationJoinTableDBNames = strings.Split(foreignKey, ",")
|
||||
}
|
||||
|
||||
// if no association foreign keys defined with tag
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, field := range toScope.PrimaryFields() {
|
||||
associationForeignKeys = append(associationForeignKeys, field.DBName)
|
||||
}
|
||||
}
|
||||
|
||||
for idx, name := range associationForeignKeys {
|
||||
if field, ok := toScope.FieldByName(name); ok {
|
||||
// association foreign keys (db names)
|
||||
relationship.AssociationForeignFieldNames = append(relationship.AssociationForeignFieldNames, field.DBName)
|
||||
|
||||
// setup join table foreign keys for association
|
||||
if len(associationJoinTableDBNames) > idx {
|
||||
relationship.AssociationForeignDBNames = append(relationship.AssociationForeignDBNames, associationJoinTableDBNames[idx])
|
||||
} else {
|
||||
// join table foreign keys for association
|
||||
joinTableDBName := ToColumnName(elemType.Name()) + "_" + field.DBName
|
||||
relationship.AssociationForeignDBNames = append(relationship.AssociationForeignDBNames, joinTableDBName)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
joinTableHandler := JoinTableHandler{}
|
||||
joinTableHandler.Setup(relationship, ToTableName(many2many), reflectType, elemType)
|
||||
relationship.JoinTableHandler = &joinTableHandler
|
||||
field.Relationship = relationship
|
||||
} else {
|
||||
// User has many comments, associationType is User, comment use UserID as foreign key
|
||||
var associationType = reflectType.Name()
|
||||
var toFields = toScope.GetStructFields()
|
||||
relationship.Kind = "has_many"
|
||||
|
||||
if polymorphic, _ := field.TagSettingsGet("POLYMORPHIC"); polymorphic != "" {
|
||||
// Dog has many toys, tag polymorphic is Owner, then associationType is Owner
|
||||
// Toy use OwnerID, OwnerType ('dogs') as foreign key
|
||||
if polymorphicType := getForeignField(polymorphic+"Type", toFields); polymorphicType != nil {
|
||||
associationType = polymorphic
|
||||
relationship.PolymorphicType = polymorphicType.Name
|
||||
relationship.PolymorphicDBName = polymorphicType.DBName
|
||||
// if Dog has multiple set of toys set name of the set (instead of default 'dogs')
|
||||
if value, ok := field.TagSettingsGet("POLYMORPHIC_VALUE"); ok {
|
||||
relationship.PolymorphicValue = value
|
||||
} else {
|
||||
relationship.PolymorphicValue = scope.TableName()
|
||||
}
|
||||
polymorphicType.IsForeignKey = true
|
||||
}
|
||||
}
|
||||
|
||||
// if no foreign keys defined with tag
|
||||
if len(foreignKeys) == 0 {
|
||||
// if no association foreign keys defined with tag
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, field := range modelStruct.PrimaryFields {
|
||||
foreignKeys = append(foreignKeys, associationType+field.Name)
|
||||
associationForeignKeys = append(associationForeignKeys, field.Name)
|
||||
}
|
||||
} else {
|
||||
// generate foreign keys from defined association foreign keys
|
||||
for _, scopeFieldName := range associationForeignKeys {
|
||||
if foreignField := getForeignField(scopeFieldName, modelStruct.StructFields); foreignField != nil {
|
||||
foreignKeys = append(foreignKeys, associationType+foreignField.Name)
|
||||
associationForeignKeys = append(associationForeignKeys, foreignField.Name)
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// generate association foreign keys from foreign keys
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, foreignKey := range foreignKeys {
|
||||
if strings.HasPrefix(foreignKey, associationType) {
|
||||
associationForeignKey := strings.TrimPrefix(foreignKey, associationType)
|
||||
if foreignField := getForeignField(associationForeignKey, modelStruct.StructFields); foreignField != nil {
|
||||
associationForeignKeys = append(associationForeignKeys, associationForeignKey)
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(associationForeignKeys) == 0 && len(foreignKeys) == 1 {
|
||||
associationForeignKeys = []string{scope.PrimaryKey()}
|
||||
}
|
||||
} else if len(foreignKeys) != len(associationForeignKeys) {
|
||||
scope.Err(errors.New("invalid foreign keys, should have same length"))
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
for idx, foreignKey := range foreignKeys {
|
||||
if foreignField := getForeignField(foreignKey, toFields); foreignField != nil {
|
||||
if associationField := getForeignField(associationForeignKeys[idx], modelStruct.StructFields); associationField != nil {
|
||||
// source foreign keys
|
||||
foreignField.IsForeignKey = true
|
||||
relationship.AssociationForeignFieldNames = append(relationship.AssociationForeignFieldNames, associationField.Name)
|
||||
relationship.AssociationForeignDBNames = append(relationship.AssociationForeignDBNames, associationField.DBName)
|
||||
|
||||
// association foreign keys
|
||||
relationship.ForeignFieldNames = append(relationship.ForeignFieldNames, foreignField.Name)
|
||||
relationship.ForeignDBNames = append(relationship.ForeignDBNames, foreignField.DBName)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(relationship.ForeignFieldNames) != 0 {
|
||||
field.Relationship = relationship
|
||||
}
|
||||
}
|
||||
} else {
|
||||
field.IsNormal = true
|
||||
}
|
||||
}(field)
|
||||
case reflect.Struct:
|
||||
defer func(field *StructField) {
|
||||
var (
|
||||
// user has one profile, associationType is User, profile use UserID as foreign key
|
||||
// user belongs to profile, associationType is Profile, user use ProfileID as foreign key
|
||||
associationType = reflectType.Name()
|
||||
relationship = &Relationship{}
|
||||
toScope = scope.New(reflect.New(field.Struct.Type).Interface())
|
||||
toFields = toScope.GetStructFields()
|
||||
tagForeignKeys []string
|
||||
tagAssociationForeignKeys []string
|
||||
)
|
||||
|
||||
if foreignKey, _ := field.TagSettingsGet("FOREIGNKEY"); foreignKey != "" {
|
||||
tagForeignKeys = strings.Split(foreignKey, ",")
|
||||
}
|
||||
|
||||
if foreignKey, _ := field.TagSettingsGet("ASSOCIATION_FOREIGNKEY"); foreignKey != "" {
|
||||
tagAssociationForeignKeys = strings.Split(foreignKey, ",")
|
||||
} else if foreignKey, _ := field.TagSettingsGet("ASSOCIATIONFOREIGNKEY"); foreignKey != "" {
|
||||
tagAssociationForeignKeys = strings.Split(foreignKey, ",")
|
||||
}
|
||||
|
||||
if polymorphic, _ := field.TagSettingsGet("POLYMORPHIC"); polymorphic != "" {
|
||||
// Cat has one toy, tag polymorphic is Owner, then associationType is Owner
|
||||
// Toy use OwnerID, OwnerType ('cats') as foreign key
|
||||
if polymorphicType := getForeignField(polymorphic+"Type", toFields); polymorphicType != nil {
|
||||
associationType = polymorphic
|
||||
relationship.PolymorphicType = polymorphicType.Name
|
||||
relationship.PolymorphicDBName = polymorphicType.DBName
|
||||
// if Cat has several different types of toys set name for each (instead of default 'cats')
|
||||
if value, ok := field.TagSettingsGet("POLYMORPHIC_VALUE"); ok {
|
||||
relationship.PolymorphicValue = value
|
||||
} else {
|
||||
relationship.PolymorphicValue = scope.TableName()
|
||||
}
|
||||
polymorphicType.IsForeignKey = true
|
||||
}
|
||||
}
|
||||
|
||||
// Has One
|
||||
{
|
||||
var foreignKeys = tagForeignKeys
|
||||
var associationForeignKeys = tagAssociationForeignKeys
|
||||
// if no foreign keys defined with tag
|
||||
if len(foreignKeys) == 0 {
|
||||
// if no association foreign keys defined with tag
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, primaryField := range modelStruct.PrimaryFields {
|
||||
foreignKeys = append(foreignKeys, associationType+primaryField.Name)
|
||||
associationForeignKeys = append(associationForeignKeys, primaryField.Name)
|
||||
}
|
||||
} else {
|
||||
// generate foreign keys from association foreign keys
|
||||
for _, associationForeignKey := range tagAssociationForeignKeys {
|
||||
if foreignField := getForeignField(associationForeignKey, modelStruct.StructFields); foreignField != nil {
|
||||
foreignKeys = append(foreignKeys, associationType+foreignField.Name)
|
||||
associationForeignKeys = append(associationForeignKeys, foreignField.Name)
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// generate association foreign keys from foreign keys
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, foreignKey := range foreignKeys {
|
||||
if strings.HasPrefix(foreignKey, associationType) {
|
||||
associationForeignKey := strings.TrimPrefix(foreignKey, associationType)
|
||||
if foreignField := getForeignField(associationForeignKey, modelStruct.StructFields); foreignField != nil {
|
||||
associationForeignKeys = append(associationForeignKeys, associationForeignKey)
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(associationForeignKeys) == 0 && len(foreignKeys) == 1 {
|
||||
associationForeignKeys = []string{scope.PrimaryKey()}
|
||||
}
|
||||
} else if len(foreignKeys) != len(associationForeignKeys) {
|
||||
scope.Err(errors.New("invalid foreign keys, should have same length"))
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
for idx, foreignKey := range foreignKeys {
|
||||
if foreignField := getForeignField(foreignKey, toFields); foreignField != nil {
|
||||
if scopeField := getForeignField(associationForeignKeys[idx], modelStruct.StructFields); scopeField != nil {
|
||||
foreignField.IsForeignKey = true
|
||||
// source foreign keys
|
||||
relationship.AssociationForeignFieldNames = append(relationship.AssociationForeignFieldNames, scopeField.Name)
|
||||
relationship.AssociationForeignDBNames = append(relationship.AssociationForeignDBNames, scopeField.DBName)
|
||||
|
||||
// association foreign keys
|
||||
relationship.ForeignFieldNames = append(relationship.ForeignFieldNames, foreignField.Name)
|
||||
relationship.ForeignDBNames = append(relationship.ForeignDBNames, foreignField.DBName)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(relationship.ForeignFieldNames) != 0 {
|
||||
relationship.Kind = "has_one"
|
||||
field.Relationship = relationship
|
||||
} else {
|
||||
var foreignKeys = tagForeignKeys
|
||||
var associationForeignKeys = tagAssociationForeignKeys
|
||||
|
||||
if len(foreignKeys) == 0 {
|
||||
// generate foreign keys & association foreign keys
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, primaryField := range toScope.PrimaryFields() {
|
||||
foreignKeys = append(foreignKeys, field.Name+primaryField.Name)
|
||||
associationForeignKeys = append(associationForeignKeys, primaryField.Name)
|
||||
}
|
||||
} else {
|
||||
// generate foreign keys with association foreign keys
|
||||
for _, associationForeignKey := range associationForeignKeys {
|
||||
if foreignField := getForeignField(associationForeignKey, toFields); foreignField != nil {
|
||||
foreignKeys = append(foreignKeys, field.Name+foreignField.Name)
|
||||
associationForeignKeys = append(associationForeignKeys, foreignField.Name)
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// generate foreign keys & association foreign keys
|
||||
if len(associationForeignKeys) == 0 {
|
||||
for _, foreignKey := range foreignKeys {
|
||||
if strings.HasPrefix(foreignKey, field.Name) {
|
||||
associationForeignKey := strings.TrimPrefix(foreignKey, field.Name)
|
||||
if foreignField := getForeignField(associationForeignKey, toFields); foreignField != nil {
|
||||
associationForeignKeys = append(associationForeignKeys, associationForeignKey)
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(associationForeignKeys) == 0 && len(foreignKeys) == 1 {
|
||||
associationForeignKeys = []string{toScope.PrimaryKey()}
|
||||
}
|
||||
} else if len(foreignKeys) != len(associationForeignKeys) {
|
||||
scope.Err(errors.New("invalid foreign keys, should have same length"))
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
for idx, foreignKey := range foreignKeys {
|
||||
if foreignField := getForeignField(foreignKey, modelStruct.StructFields); foreignField != nil {
|
||||
if associationField := getForeignField(associationForeignKeys[idx], toFields); associationField != nil {
|
||||
foreignField.IsForeignKey = true
|
||||
|
||||
// association foreign keys
|
||||
relationship.AssociationForeignFieldNames = append(relationship.AssociationForeignFieldNames, associationField.Name)
|
||||
relationship.AssociationForeignDBNames = append(relationship.AssociationForeignDBNames, associationField.DBName)
|
||||
|
||||
// source foreign keys
|
||||
relationship.ForeignFieldNames = append(relationship.ForeignFieldNames, foreignField.Name)
|
||||
relationship.ForeignDBNames = append(relationship.ForeignDBNames, foreignField.DBName)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(relationship.ForeignFieldNames) != 0 {
|
||||
relationship.Kind = "belongs_to"
|
||||
field.Relationship = relationship
|
||||
}
|
||||
}
|
||||
}(field)
|
||||
default:
|
||||
field.IsNormal = true
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Even if it is ignored, it is still possible to decode a db value into the field
|
||||
if value, ok := field.TagSettingsGet("COLUMN"); ok {
|
||||
field.DBName = value
|
||||
} else {
|
||||
field.DBName = ToColumnName(fieldStruct.Name)
|
||||
}
|
||||
|
||||
modelStruct.StructFields = append(modelStruct.StructFields, field)
|
||||
}
|
||||
}
|
||||
|
||||
if len(modelStruct.PrimaryFields) == 0 {
|
||||
if field := getForeignField("id", modelStruct.StructFields); field != nil {
|
||||
field.IsPrimaryKey = true
|
||||
modelStruct.PrimaryFields = append(modelStruct.PrimaryFields, field)
|
||||
}
|
||||
}
|
||||
|
||||
modelStructsMap.Store(reflectType, &modelStruct)
|
||||
|
||||
return &modelStruct
|
||||
}
|
||||
|
||||
// GetStructFields get model's field structs
|
||||
func (scope *Scope) GetStructFields() (fields []*StructField) {
|
||||
return scope.GetModelStruct().StructFields
|
||||
}
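// parseTagSetting (below) splits the `sql` and `gorm` struct tags into a map with
// upper-cased keys; illustrative expectation (not part of the vendored source):
//     tag := reflect.StructTag(`gorm:"column:user_name;size:255;not null"`)
//     parseTagSetting(tag)
//     // => map[COLUMN:user_name SIZE:255 NOT NULL:NOT NULL]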
|
||||
|
||||
func parseTagSetting(tags reflect.StructTag) map[string]string {
|
||||
setting := map[string]string{}
|
||||
for _, str := range []string{tags.Get("sql"), tags.Get("gorm")} {
|
||||
tags := strings.Split(str, ";")
|
||||
for _, value := range tags {
|
||||
v := strings.Split(value, ":")
|
||||
k := strings.TrimSpace(strings.ToUpper(v[0]))
|
||||
if len(v) >= 2 {
|
||||
setting[k] = strings.Join(v[1:], ":")
|
||||
} else {
|
||||
setting[k] = k
|
||||
}
|
||||
}
|
||||
}
|
||||
return setting
|
||||
}
|
||||
124
vendor/github.com/jinzhu/gorm/naming.go
generated
vendored
Normal file
124
vendor/github.com/jinzhu/gorm/naming.go
generated
vendored
Normal file
@@ -0,0 +1,124 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Namer is a function type which is given a string and returns a string
|
||||
type Namer func(string) string
|
||||
|
||||
// NamingStrategy represents naming strategies
|
||||
type NamingStrategy struct {
|
||||
DB Namer
|
||||
Table Namer
|
||||
Column Namer
|
||||
}
|
||||
|
||||
// TheNamingStrategy is initialized with the default naming strategy (defaultNamer)
|
||||
var TheNamingStrategy = &NamingStrategy{
|
||||
DB: defaultNamer,
|
||||
Table: defaultNamer,
|
||||
Column: defaultNamer,
|
||||
}
|
||||
|
||||
// AddNamingStrategy sets the naming strategy
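// Illustrative usage (not part of the vendored source): override only the table namer,
// the remaining namers fall back to defaultNamer.
//     gorm.AddNamingStrategy(&gorm.NamingStrategy{
//         Table: func(name string) string { return "prefix_" + name },
//     })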
|
||||
func AddNamingStrategy(ns *NamingStrategy) {
|
||||
if ns.DB == nil {
|
||||
ns.DB = defaultNamer
|
||||
}
|
||||
if ns.Table == nil {
|
||||
ns.Table = defaultNamer
|
||||
}
|
||||
if ns.Column == nil {
|
||||
ns.Column = defaultNamer
|
||||
}
|
||||
TheNamingStrategy = ns
|
||||
}
|
||||
|
||||
// DBName alters the given name by DB
|
||||
func (ns *NamingStrategy) DBName(name string) string {
|
||||
return ns.DB(name)
|
||||
}
|
||||
|
||||
// TableName alters the given name by Table
|
||||
func (ns *NamingStrategy) TableName(name string) string {
|
||||
return ns.Table(name)
|
||||
}
|
||||
|
||||
// ColumnName alters the given name by Column
|
||||
func (ns *NamingStrategy) ColumnName(name string) string {
|
||||
return ns.Column(name)
|
||||
}
|
||||
|
||||
// ToDBName convert string to db name
|
||||
func ToDBName(name string) string {
|
||||
return TheNamingStrategy.DBName(name)
|
||||
}
|
||||
|
||||
// ToTableName convert string to table name
|
||||
func ToTableName(name string) string {
|
||||
return TheNamingStrategy.TableName(name)
|
||||
}
|
||||
|
||||
// ToColumnName convert string to column name
|
||||
func ToColumnName(name string) string {
|
||||
return TheNamingStrategy.ColumnName(name)
|
||||
}
|
||||
|
||||
var smap = newSafeMap()
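// defaultNamer (below) converts Go-style names to snake_case, normalising common
// initialisms first and caching results in smap; illustrative expectations (not part
// of the vendored source):
//     defaultNamer("CreatedAt")      // "created_at"
//     defaultNamer("HTTPStatusCode") // "http_status_code"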
|
||||
|
||||
func defaultNamer(name string) string {
|
||||
const (
|
||||
lower = false
|
||||
upper = true
|
||||
)
|
||||
|
||||
if v := smap.Get(name); v != "" {
|
||||
return v
|
||||
}
|
||||
|
||||
if name == "" {
|
||||
return ""
|
||||
}
|
||||
|
||||
var (
|
||||
value = commonInitialismsReplacer.Replace(name)
|
||||
buf = bytes.NewBufferString("")
|
||||
lastCase, currCase, nextCase, nextNumber bool
|
||||
)
|
||||
|
||||
for i, v := range value[:len(value)-1] {
|
||||
nextCase = bool(value[i+1] >= 'A' && value[i+1] <= 'Z')
|
||||
nextNumber = bool(value[i+1] >= '0' && value[i+1] <= '9')
|
||||
|
||||
if i > 0 {
|
||||
if currCase == upper {
|
||||
if lastCase == upper && (nextCase == upper || nextNumber == upper) {
|
||||
buf.WriteRune(v)
|
||||
} else {
|
||||
if value[i-1] != '_' && value[i+1] != '_' {
|
||||
buf.WriteRune('_')
|
||||
}
|
||||
buf.WriteRune(v)
|
||||
}
|
||||
} else {
|
||||
buf.WriteRune(v)
|
||||
if i == len(value)-2 && (nextCase == upper && nextNumber == lower) {
|
||||
buf.WriteRune('_')
|
||||
}
|
||||
}
|
||||
} else {
|
||||
currCase = upper
|
||||
buf.WriteRune(v)
|
||||
}
|
||||
lastCase = currCase
|
||||
currCase = nextCase
|
||||
}
|
||||
|
||||
buf.WriteByte(value[len(value)-1])
|
||||
|
||||
s := strings.ToLower(buf.String())
|
||||
smap.Set(name, s)
|
||||
return s
|
||||
}
|
||||
1397
vendor/github.com/jinzhu/gorm/scope.go
generated
vendored
Normal file
1397
vendor/github.com/jinzhu/gorm/scope.go
generated
vendored
Normal file
File diff suppressed because it is too large
153
vendor/github.com/jinzhu/gorm/search.go
generated
vendored
Normal file
153
vendor/github.com/jinzhu/gorm/search.go
generated
vendored
Normal file
@@ -0,0 +1,153 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
type search struct {
|
||||
db *DB
|
||||
whereConditions []map[string]interface{}
|
||||
orConditions []map[string]interface{}
|
||||
notConditions []map[string]interface{}
|
||||
havingConditions []map[string]interface{}
|
||||
joinConditions []map[string]interface{}
|
||||
initAttrs []interface{}
|
||||
assignAttrs []interface{}
|
||||
selects map[string]interface{}
|
||||
omits []string
|
||||
orders []interface{}
|
||||
preload []searchPreload
|
||||
offset interface{}
|
||||
limit interface{}
|
||||
group string
|
||||
tableName string
|
||||
raw bool
|
||||
Unscoped bool
|
||||
ignoreOrderQuery bool
|
||||
}
|
||||
|
||||
type searchPreload struct {
|
||||
schema string
|
||||
conditions []interface{}
|
||||
}
|
||||
|
||||
func (s *search) clone() *search {
|
||||
clone := *s
|
||||
return &clone
|
||||
}
|
||||
|
||||
func (s *search) Where(query interface{}, values ...interface{}) *search {
|
||||
s.whereConditions = append(s.whereConditions, map[string]interface{}{"query": query, "args": values})
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Not(query interface{}, values ...interface{}) *search {
|
||||
s.notConditions = append(s.notConditions, map[string]interface{}{"query": query, "args": values})
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Or(query interface{}, values ...interface{}) *search {
|
||||
s.orConditions = append(s.orConditions, map[string]interface{}{"query": query, "args": values})
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Attrs(attrs ...interface{}) *search {
|
||||
s.initAttrs = append(s.initAttrs, toSearchableMap(attrs...))
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Assign(attrs ...interface{}) *search {
|
||||
s.assignAttrs = append(s.assignAttrs, toSearchableMap(attrs...))
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Order(value interface{}, reorder ...bool) *search {
|
||||
if len(reorder) > 0 && reorder[0] {
|
||||
s.orders = []interface{}{}
|
||||
}
|
||||
|
||||
if value != nil && value != "" {
|
||||
s.orders = append(s.orders, value)
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Select(query interface{}, args ...interface{}) *search {
|
||||
s.selects = map[string]interface{}{"query": query, "args": args}
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Omit(columns ...string) *search {
|
||||
s.omits = columns
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Limit(limit interface{}) *search {
|
||||
s.limit = limit
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Offset(offset interface{}) *search {
|
||||
s.offset = offset
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Group(query string) *search {
|
||||
s.group = s.getInterfaceAsSQL(query)
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Having(query interface{}, values ...interface{}) *search {
|
||||
if val, ok := query.(*expr); ok {
|
||||
s.havingConditions = append(s.havingConditions, map[string]interface{}{"query": val.expr, "args": val.args})
|
||||
} else {
|
||||
s.havingConditions = append(s.havingConditions, map[string]interface{}{"query": query, "args": values})
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Joins(query string, values ...interface{}) *search {
|
||||
s.joinConditions = append(s.joinConditions, map[string]interface{}{"query": query, "args": values})
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Preload(schema string, values ...interface{}) *search {
|
||||
var preloads []searchPreload
|
||||
for _, preload := range s.preload {
|
||||
if preload.schema != schema {
|
||||
preloads = append(preloads, preload)
|
||||
}
|
||||
}
|
||||
preloads = append(preloads, searchPreload{schema, values})
|
||||
s.preload = preloads
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Raw(b bool) *search {
|
||||
s.raw = b
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) unscoped() *search {
|
||||
s.Unscoped = true
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) Table(name string) *search {
|
||||
s.tableName = name
|
||||
return s
|
||||
}
|
||||
|
||||
func (s *search) getInterfaceAsSQL(value interface{}) (str string) {
|
||||
switch value.(type) {
|
||||
case string, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64:
|
||||
str = fmt.Sprintf("%v", value)
|
||||
default:
|
||||
s.db.AddError(ErrInvalidSQL)
|
||||
}
|
||||
|
||||
if str == "-1" {
|
||||
return ""
|
||||
}
|
||||
return
|
||||
}
|
||||
5
vendor/github.com/jinzhu/gorm/test_all.sh
generated
vendored
Normal file
5
vendor/github.com/jinzhu/gorm/test_all.sh
generated
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
dialects=("postgres" "mysql" "mssql" "sqlite")

for dialect in "${dialects[@]}" ; do
  DEBUG=false GORM_DIALECT=${dialect} go test
done
226
vendor/github.com/jinzhu/gorm/utils.go
generated
vendored
Normal file
226
vendor/github.com/jinzhu/gorm/utils.go
generated
vendored
Normal file
@@ -0,0 +1,226 @@
|
||||
package gorm
|
||||
|
||||
import (
|
||||
"database/sql/driver"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
// NowFunc returns current time, this function is exported in order to be able
|
||||
// to give the flexibility to the developer to customize it according to their
|
||||
// needs, e.g:
|
||||
// gorm.NowFunc = func() time.Time {
|
||||
// return time.Now().UTC()
|
||||
// }
|
||||
var NowFunc = func() time.Time {
|
||||
return time.Now()
|
||||
}
|
||||
|
||||
// Copied from golint
|
||||
var commonInitialisms = []string{"API", "ASCII", "CPU", "CSS", "DNS", "EOF", "GUID", "HTML", "HTTP", "HTTPS", "ID", "IP", "JSON", "LHS", "QPS", "RAM", "RHS", "RPC", "SLA", "SMTP", "SSH", "TLS", "TTL", "UID", "UI", "UUID", "URI", "URL", "UTF8", "VM", "XML", "XSRF", "XSS"}
|
||||
var commonInitialismsReplacer *strings.Replacer
|
||||
|
||||
var goSrcRegexp = regexp.MustCompile(`jinzhu/gorm(@.*)?/.*.go`)
|
||||
var goTestRegexp = regexp.MustCompile(`jinzhu/gorm(@.*)?/.*test.go`)
|
||||
|
||||
func init() {
|
||||
var commonInitialismsForReplacer []string
|
||||
for _, initialism := range commonInitialisms {
|
||||
commonInitialismsForReplacer = append(commonInitialismsForReplacer, initialism, strings.Title(strings.ToLower(initialism)))
|
||||
}
|
||||
commonInitialismsReplacer = strings.NewReplacer(commonInitialismsForReplacer...)
|
||||
}
|
||||
|
||||
type safeMap struct {
|
||||
m map[string]string
|
||||
l *sync.RWMutex
|
||||
}
|
||||
|
||||
func (s *safeMap) Set(key string, value string) {
|
||||
s.l.Lock()
|
||||
defer s.l.Unlock()
|
||||
s.m[key] = value
|
||||
}
|
||||
|
||||
func (s *safeMap) Get(key string) string {
|
||||
s.l.RLock()
|
||||
defer s.l.RUnlock()
|
||||
return s.m[key]
|
||||
}
|
||||
|
||||
func newSafeMap() *safeMap {
|
||||
return &safeMap{l: new(sync.RWMutex), m: make(map[string]string)}
|
||||
}
|
||||
|
||||
// SQL expression
|
||||
type expr struct {
|
||||
expr string
|
||||
args []interface{}
|
||||
}
|
||||
|
||||
// Expr generate raw SQL expression, for example:
|
||||
// DB.Model(&product).Update("price", gorm.Expr("price * ? + ?", 2, 100))
|
||||
func Expr(expression string, args ...interface{}) *expr {
|
||||
return &expr{expr: expression, args: args}
|
||||
}
|
||||
|
||||
func indirect(reflectValue reflect.Value) reflect.Value {
|
||||
for reflectValue.Kind() == reflect.Ptr {
|
||||
reflectValue = reflectValue.Elem()
|
||||
}
|
||||
return reflectValue
|
||||
}
|
||||
|
||||
func toQueryMarks(primaryValues [][]interface{}) string {
|
||||
var results []string
|
||||
|
||||
for _, primaryValue := range primaryValues {
|
||||
var marks []string
|
||||
for range primaryValue {
|
||||
marks = append(marks, "?")
|
||||
}
|
||||
|
||||
if len(marks) > 1 {
|
||||
results = append(results, fmt.Sprintf("(%v)", strings.Join(marks, ",")))
|
||||
} else {
|
||||
results = append(results, strings.Join(marks, ""))
|
||||
}
|
||||
}
|
||||
return strings.Join(results, ",")
|
||||
}
|
||||
|
||||
func toQueryCondition(scope *Scope, columns []string) string {
|
||||
var newColumns []string
|
||||
for _, column := range columns {
|
||||
newColumns = append(newColumns, scope.Quote(column))
|
||||
}
|
||||
|
||||
if len(columns) > 1 {
|
||||
return fmt.Sprintf("(%v)", strings.Join(newColumns, ","))
|
||||
}
|
||||
return strings.Join(newColumns, ",")
|
||||
}
|
||||
|
||||
func toQueryValues(values [][]interface{}) (results []interface{}) {
|
||||
for _, value := range values {
|
||||
for _, v := range value {
|
||||
results = append(results, v)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func fileWithLineNum() string {
|
||||
for i := 2; i < 15; i++ {
|
||||
_, file, line, ok := runtime.Caller(i)
|
||||
if ok && (!goSrcRegexp.MatchString(file) || goTestRegexp.MatchString(file)) {
|
||||
return fmt.Sprintf("%v:%v", file, line)
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func isBlank(value reflect.Value) bool {
|
||||
switch value.Kind() {
|
||||
case reflect.String:
|
||||
return value.Len() == 0
|
||||
case reflect.Bool:
|
||||
return !value.Bool()
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return value.Int() == 0
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
return value.Uint() == 0
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return value.Float() == 0
|
||||
case reflect.Interface, reflect.Ptr:
|
||||
return value.IsNil()
|
||||
}
|
||||
|
||||
return reflect.DeepEqual(value.Interface(), reflect.Zero(value.Type()).Interface())
|
||||
}
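// Illustrative sketch (not part of gorm): isBlank reports whether a value
// holds its type's zero value.
//
//	isBlank(reflect.ValueOf(""))     // true
//	isBlank(reflect.ValueOf(0))      // true
//	isBlank(reflect.ValueOf("gorm")) // false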
|
||||
|
||||
func toSearchableMap(attrs ...interface{}) (result interface{}) {
|
||||
if len(attrs) > 1 {
|
||||
if str, ok := attrs[0].(string); ok {
|
||||
result = map[string]interface{}{str: attrs[1]}
|
||||
}
|
||||
} else if len(attrs) == 1 {
|
||||
if attr, ok := attrs[0].(map[string]interface{}); ok {
|
||||
result = attr
|
||||
}
|
||||
|
||||
if attr, ok := attrs[0].(interface{}); ok {
|
||||
result = attr
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func equalAsString(a interface{}, b interface{}) bool {
|
||||
return toString(a) == toString(b)
|
||||
}
|
||||
|
||||
func toString(str interface{}) string {
|
||||
if values, ok := str.([]interface{}); ok {
|
||||
var results []string
|
||||
for _, value := range values {
|
||||
results = append(results, toString(value))
|
||||
}
|
||||
return strings.Join(results, "_")
|
||||
} else if bytes, ok := str.([]byte); ok {
|
||||
return string(bytes)
|
||||
} else if reflectValue := reflect.Indirect(reflect.ValueOf(str)); reflectValue.IsValid() {
|
||||
return fmt.Sprintf("%v", reflectValue.Interface())
|
||||
}
|
||||
return ""
|
||||
}
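// Illustrative sketch (not part of gorm): toString flattens a value into a
// string, joining slices with underscores.
//
//	toString([]interface{}{1, "a"}) // "1_a"
//	toString([]byte("abc"))         // "abc"
//	toString(42)                    // "42"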
|
||||
|
||||
func makeSlice(elemType reflect.Type) interface{} {
|
||||
if elemType.Kind() == reflect.Slice {
|
||||
elemType = elemType.Elem()
|
||||
}
|
||||
sliceType := reflect.SliceOf(elemType)
|
||||
slice := reflect.New(sliceType)
|
||||
slice.Elem().Set(reflect.MakeSlice(sliceType, 0, 0))
|
||||
return slice.Interface()
|
||||
}
|
||||
|
||||
func strInSlice(a string, list []string) bool {
|
||||
for _, b := range list {
|
||||
if b == a {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// getValueFromFields returns the values of the given fields
|
||||
func getValueFromFields(value reflect.Value, fieldNames []string) (results []interface{}) {
|
||||
// If value is a nil pointer, Indirect returns a zero Value!
|
||||
// Therefore we need to check for a zero value,
|
||||
// as FieldByName could panic
|
||||
if indirectValue := reflect.Indirect(value); indirectValue.IsValid() {
|
||||
for _, fieldName := range fieldNames {
|
||||
if fieldValue := reflect.Indirect(indirectValue.FieldByName(fieldName)); fieldValue.IsValid() {
|
||||
result := fieldValue.Interface()
|
||||
if r, ok := result.(driver.Valuer); ok {
|
||||
result, _ = r.Value()
|
||||
}
|
||||
results = append(results, result)
|
||||
}
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func addExtraSpaceIfExist(str string) string {
|
||||
if str != "" {
|
||||
return " " + str
|
||||
}
|
||||
return ""
|
||||
}
|
||||
148
vendor/github.com/jinzhu/gorm/wercker.yml
generated
vendored
Normal file
148
vendor/github.com/jinzhu/gorm/wercker.yml
generated
vendored
Normal file
@@ -0,0 +1,148 @@
|
||||
# use the default golang container from Docker Hub
|
||||
box: golang
|
||||
|
||||
services:
|
||||
- name: mariadb
|
||||
id: mariadb:latest
|
||||
env:
|
||||
MYSQL_DATABASE: gorm
|
||||
MYSQL_USER: gorm
|
||||
MYSQL_PASSWORD: gorm
|
||||
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
|
||||
- name: mysql57
|
||||
id: mysql:5.7
|
||||
env:
|
||||
MYSQL_DATABASE: gorm
|
||||
MYSQL_USER: gorm
|
||||
MYSQL_PASSWORD: gorm
|
||||
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
|
||||
- name: mysql56
|
||||
id: mysql:5.6
|
||||
env:
|
||||
MYSQL_DATABASE: gorm
|
||||
MYSQL_USER: gorm
|
||||
MYSQL_PASSWORD: gorm
|
||||
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
|
||||
- name: mysql55
|
||||
id: mysql:5.5
|
||||
env:
|
||||
MYSQL_DATABASE: gorm
|
||||
MYSQL_USER: gorm
|
||||
MYSQL_PASSWORD: gorm
|
||||
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
|
||||
- name: postgres
|
||||
id: postgres:latest
|
||||
env:
|
||||
POSTGRES_USER: gorm
|
||||
POSTGRES_PASSWORD: gorm
|
||||
POSTGRES_DB: gorm
|
||||
- name: postgres96
|
||||
id: postgres:9.6
|
||||
env:
|
||||
POSTGRES_USER: gorm
|
||||
POSTGRES_PASSWORD: gorm
|
||||
POSTGRES_DB: gorm
|
||||
- name: postgres95
|
||||
id: postgres:9.5
|
||||
env:
|
||||
POSTGRES_USER: gorm
|
||||
POSTGRES_PASSWORD: gorm
|
||||
POSTGRES_DB: gorm
|
||||
- name: postgres94
|
||||
id: postgres:9.4
|
||||
env:
|
||||
POSTGRES_USER: gorm
|
||||
POSTGRES_PASSWORD: gorm
|
||||
POSTGRES_DB: gorm
|
||||
- name: postgres93
|
||||
id: postgres:9.3
|
||||
env:
|
||||
POSTGRES_USER: gorm
|
||||
POSTGRES_PASSWORD: gorm
|
||||
POSTGRES_DB: gorm
|
||||
- name: mssql
|
||||
id: mcmoe/mssqldocker:latest
|
||||
env:
|
||||
ACCEPT_EULA: Y
|
||||
SA_PASSWORD: LoremIpsum86
|
||||
MSSQL_DB: gorm
|
||||
MSSQL_USER: gorm
|
||||
MSSQL_PASSWORD: LoremIpsum86
|
||||
|
||||
# The steps that will be executed in the build pipeline
|
||||
build:
|
||||
# The steps that will be executed on build
|
||||
steps:
|
||||
# Sets the go workspace and places your package
|
||||
# at the right place in the workspace tree
|
||||
- setup-go-workspace
|
||||
|
||||
# Gets the dependencies
|
||||
- script:
|
||||
name: go get
|
||||
code: |
|
||||
cd $WERCKER_SOURCE_DIR
|
||||
go version
|
||||
go get -t ./...
|
||||
|
||||
# Build the project
|
||||
- script:
|
||||
name: go build
|
||||
code: |
|
||||
go build ./...
|
||||
|
||||
# Test the project
|
||||
- script:
|
||||
name: test sqlite
|
||||
code: |
|
||||
go test ./...
|
||||
|
||||
- script:
|
||||
name: test mariadb
|
||||
code: |
|
||||
GORM_DIALECT=mysql GORM_DSN="gorm:gorm@tcp(mariadb:3306)/gorm?charset=utf8&parseTime=True" go test ./...
|
||||
|
||||
- script:
|
||||
name: test mysql5.7
|
||||
code: |
|
||||
GORM_DIALECT=mysql GORM_DSN="gorm:gorm@tcp(mysql57:3306)/gorm?charset=utf8&parseTime=True" go test ./...
|
||||
|
||||
- script:
|
||||
name: test mysql5.6
|
||||
code: |
|
||||
GORM_DIALECT=mysql GORM_DSN="gorm:gorm@tcp(mysql56:3306)/gorm?charset=utf8&parseTime=True" go test ./...
|
||||
|
||||
- script:
|
||||
name: test mysql5.5
|
||||
code: |
|
||||
GORM_DIALECT=mysql GORM_DSN="gorm:gorm@tcp(mysql55:3306)/gorm?charset=utf8&parseTime=True" go test ./...
|
||||
|
||||
- script:
|
||||
name: test postgres
|
||||
code: |
|
||||
GORM_DIALECT=postgres GORM_DSN="host=postgres user=gorm password=gorm DB.name=gorm port=5432 sslmode=disable" go test ./...
|
||||
|
||||
- script:
|
||||
name: test postgres96
|
||||
code: |
|
||||
GORM_DIALECT=postgres GORM_DSN="host=postgres96 user=gorm password=gorm DB.name=gorm port=5432 sslmode=disable" go test ./...
|
||||
|
||||
- script:
|
||||
name: test postgres95
|
||||
code: |
|
||||
GORM_DIALECT=postgres GORM_DSN="host=postgres95 user=gorm password=gorm DB.name=gorm port=5432 sslmode=disable" go test ./...
|
||||
|
||||
- script:
|
||||
name: test postgres94
|
||||
code: |
|
||||
GORM_DIALECT=postgres GORM_DSN="host=postgres94 user=gorm password=gorm DB.name=gorm port=5432 sslmode=disable" go test ./...
|
||||
|
||||
- script:
|
||||
name: test postgres93
|
||||
code: |
|
||||
GORM_DIALECT=postgres GORM_DSN="host=postgres93 user=gorm password=gorm DB.name=gorm port=5432 sslmode=disable" go test ./...
|
||||
|
||||
- script:
|
||||
name: test mssql
|
||||
code: |
|
||||
GORM_DIALECT=mssql GORM_DSN="sqlserver://gorm:LoremIpsum86@mssql:1433?database=gorm" go test ./...
|
||||
21
vendor/github.com/jinzhu/inflection/LICENSE
generated
vendored
Normal file
21
vendor/github.com/jinzhu/inflection/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2015 - Jinzhu
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
55
vendor/github.com/jinzhu/inflection/README.md
generated
vendored
Normal file
55
vendor/github.com/jinzhu/inflection/README.md
generated
vendored
Normal file
@@ -0,0 +1,55 @@
|
||||
# Inflection
|
||||
|
||||
Inflection pluralizes and singularizes English nouns
|
||||
|
||||
[](https://app.wercker.com/project/byKey/f8c7432b097d1f4ce636879670be0930)
|
||||
|
||||
## Basic Usage
|
||||
|
||||
```go
|
||||
inflection.Plural("person") => "people"
|
||||
inflection.Plural("Person") => "People"
|
||||
inflection.Plural("PERSON") => "PEOPLE"
|
||||
inflection.Plural("bus") => "buses"
|
||||
inflection.Plural("BUS") => "BUSES"
|
||||
inflection.Plural("Bus") => "Buses"
|
||||
|
||||
inflection.Singular("people") => "person"
|
||||
inflection.Singular("People") => "Person"
|
||||
inflection.Singular("PEOPLE") => "PERSON"
|
||||
inflection.Singular("buses") => "bus"
|
||||
inflection.Singular("BUSES") => "BUS"
|
||||
inflection.Singular("Buses") => "Bus"
|
||||
|
||||
inflection.Plural("FancyPerson") => "FancyPeople"
|
||||
inflection.Singular("FancyPeople") => "FancyPerson"
|
||||
```
|
||||
|
||||
## Register Rules
|
||||
|
||||
Standard rules are from Rails's ActiveSupport (https://github.com/rails/rails/blob/master/activesupport/lib/active_support/inflections.rb)
|
||||
|
||||
If you want to register more rules, follow these examples:
|
||||
|
||||
```
|
||||
inflection.AddUncountable("fish")
|
||||
inflection.AddIrregular("person", "people")
|
||||
inflection.AddPlural("(bu)s$", "${1}ses") # "bus" => "buses" / "BUS" => "BUSES" / "Bus" => "Buses"
|
||||
inflection.AddSingular("(bus)(es)?$", "${1}") # "buses" => "bus" / "Buses" => "Bus" / "BUSES" => "BUS"
|
||||
```
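For a complete, runnable sketch of registering rules (the import path is the package's canonical one; the specific rules below are only examples):

```go
package main

import (
	"fmt"

	"github.com/jinzhu/inflection"
)

func main() {
	// Register custom rules before calling Plural/Singular.
	inflection.AddUncountable("fish")
	inflection.AddIrregular("person", "people")

	fmt.Println(inflection.Plural("person"))  // people
	fmt.Println(inflection.Plural("fish"))    // fish
	fmt.Println(inflection.Singular("buses")) // bus
}
```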
|
||||
|
||||
## Contributing
|
||||
|
||||
You can help make the project better; check out [http://gorm.io/contribute.html](http://gorm.io/contribute.html) for things you can do.
|
||||
|
||||
## Author
|
||||
|
||||
**jinzhu**
|
||||
|
||||
* <http://github.com/jinzhu>
|
||||
* <wosmvp@gmail.com>
|
||||
* <http://twitter.com/zhangjinzhu>
|
||||
|
||||
## License
|
||||
|
||||
Released under the [MIT License](http://www.opensource.org/licenses/MIT).
|
||||
273
vendor/github.com/jinzhu/inflection/inflections.go
generated
vendored
Normal file
273
vendor/github.com/jinzhu/inflection/inflections.go
generated
vendored
Normal file
@@ -0,0 +1,273 @@
|
||||
/*
|
||||
Package inflection pluralizes and singularizes English nouns.
|
||||
|
||||
inflection.Plural("person") => "people"
|
||||
inflection.Plural("Person") => "People"
|
||||
inflection.Plural("PERSON") => "PEOPLE"
|
||||
|
||||
inflection.Singular("people") => "person"
|
||||
inflection.Singular("People") => "Person"
|
||||
inflection.Singular("PEOPLE") => "PERSON"
|
||||
|
||||
inflection.Plural("FancyPerson") => "FancydPeople"
|
||||
inflection.Singular("FancyPeople") => "FancydPerson"
|
||||
|
||||
Standard rules are from Rails's ActiveSupport (https://github.com/rails/rails/blob/master/activesupport/lib/active_support/inflections.rb)
|
||||
|
||||
If you want to register more rules, follow these examples:
|
||||
|
||||
inflection.AddUncountable("fish")
|
||||
inflection.AddIrregular("person", "people")
|
||||
inflection.AddPlural("(bu)s$", "${1}ses") # "bus" => "buses" / "BUS" => "BUSES" / "Bus" => "Buses"
|
||||
inflection.AddSingular("(bus)(es)?$", "${1}") # "buses" => "bus" / "Buses" => "Bus" / "BUSES" => "BUS"
|
||||
*/
|
||||
package inflection
|
||||
|
||||
import (
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type inflection struct {
|
||||
regexp *regexp.Regexp
|
||||
replace string
|
||||
}
|
||||
|
||||
// Regular is a regexp find replace inflection
|
||||
type Regular struct {
|
||||
find string
|
||||
replace string
|
||||
}
|
||||
|
||||
// Irregular is a hard replace inflection,
|
||||
// containing both singular and plural forms
|
||||
type Irregular struct {
|
||||
singular string
|
||||
plural string
|
||||
}
|
||||
|
||||
// RegularSlice is a slice of Regular inflections
|
||||
type RegularSlice []Regular
|
||||
|
||||
// IrregularSlice is a slice of Irregular inflections
|
||||
type IrregularSlice []Irregular
|
||||
|
||||
var pluralInflections = RegularSlice{
|
||||
{"([a-z])$", "${1}s"},
|
||||
{"s$", "s"},
|
||||
{"^(ax|test)is$", "${1}es"},
|
||||
{"(octop|vir)us$", "${1}i"},
|
||||
{"(octop|vir)i$", "${1}i"},
|
||||
{"(alias|status)$", "${1}es"},
|
||||
{"(bu)s$", "${1}ses"},
|
||||
{"(buffal|tomat)o$", "${1}oes"},
|
||||
{"([ti])um$", "${1}a"},
|
||||
{"([ti])a$", "${1}a"},
|
||||
{"sis$", "ses"},
|
||||
{"(?:([^f])fe|([lr])f)$", "${1}${2}ves"},
|
||||
{"(hive)$", "${1}s"},
|
||||
{"([^aeiouy]|qu)y$", "${1}ies"},
|
||||
{"(x|ch|ss|sh)$", "${1}es"},
|
||||
{"(matr|vert|ind)(?:ix|ex)$", "${1}ices"},
|
||||
{"^(m|l)ouse$", "${1}ice"},
|
||||
{"^(m|l)ice$", "${1}ice"},
|
||||
{"^(ox)$", "${1}en"},
|
||||
{"^(oxen)$", "${1}"},
|
||||
{"(quiz)$", "${1}zes"},
|
||||
}
|
||||
|
||||
var singularInflections = RegularSlice{
|
||||
{"s$", ""},
|
||||
{"(ss)$", "${1}"},
|
||||
{"(n)ews$", "${1}ews"},
|
||||
{"([ti])a$", "${1}um"},
|
||||
{"((a)naly|(b)a|(d)iagno|(p)arenthe|(p)rogno|(s)ynop|(t)he)(sis|ses)$", "${1}sis"},
|
||||
{"(^analy)(sis|ses)$", "${1}sis"},
|
||||
{"([^f])ves$", "${1}fe"},
|
||||
{"(hive)s$", "${1}"},
|
||||
{"(tive)s$", "${1}"},
|
||||
{"([lr])ves$", "${1}f"},
|
||||
{"([^aeiouy]|qu)ies$", "${1}y"},
|
||||
{"(s)eries$", "${1}eries"},
|
||||
{"(m)ovies$", "${1}ovie"},
|
||||
{"(c)ookies$", "${1}ookie"},
|
||||
{"(x|ch|ss|sh)es$", "${1}"},
|
||||
{"^(m|l)ice$", "${1}ouse"},
|
||||
{"(bus)(es)?$", "${1}"},
|
||||
{"(o)es$", "${1}"},
|
||||
{"(shoe)s$", "${1}"},
|
||||
{"(cris|test)(is|es)$", "${1}is"},
|
||||
{"^(a)x[ie]s$", "${1}xis"},
|
||||
{"(octop|vir)(us|i)$", "${1}us"},
|
||||
{"(alias|status)(es)?$", "${1}"},
|
||||
{"^(ox)en", "${1}"},
|
||||
{"(vert|ind)ices$", "${1}ex"},
|
||||
{"(matr)ices$", "${1}ix"},
|
||||
{"(quiz)zes$", "${1}"},
|
||||
{"(database)s$", "${1}"},
|
||||
}
|
||||
|
||||
var irregularInflections = IrregularSlice{
|
||||
{"person", "people"},
|
||||
{"man", "men"},
|
||||
{"child", "children"},
|
||||
{"sex", "sexes"},
|
||||
{"move", "moves"},
|
||||
{"mombie", "mombies"},
|
||||
}
|
||||
|
||||
var uncountableInflections = []string{"equipment", "information", "rice", "money", "species", "series", "fish", "sheep", "jeans", "police"}
|
||||
|
||||
var compiledPluralMaps []inflection
|
||||
var compiledSingularMaps []inflection
|
||||
|
||||
func compile() {
|
||||
compiledPluralMaps = []inflection{}
|
||||
compiledSingularMaps = []inflection{}
|
||||
for _, uncountable := range uncountableInflections {
|
||||
inf := inflection{
|
||||
regexp: regexp.MustCompile("^(?i)(" + uncountable + ")$"),
|
||||
replace: "${1}",
|
||||
}
|
||||
compiledPluralMaps = append(compiledPluralMaps, inf)
|
||||
compiledSingularMaps = append(compiledSingularMaps, inf)
|
||||
}
|
||||
|
||||
for _, value := range irregularInflections {
|
||||
infs := []inflection{
|
||||
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.singular) + "$"), replace: strings.ToUpper(value.plural)},
|
||||
inflection{regexp: regexp.MustCompile(strings.Title(value.singular) + "$"), replace: strings.Title(value.plural)},
|
||||
inflection{regexp: regexp.MustCompile(value.singular + "$"), replace: value.plural},
|
||||
}
|
||||
compiledPluralMaps = append(compiledPluralMaps, infs...)
|
||||
}
|
||||
|
||||
for _, value := range irregularInflections {
|
||||
infs := []inflection{
|
||||
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.plural) + "$"), replace: strings.ToUpper(value.singular)},
|
||||
inflection{regexp: regexp.MustCompile(strings.Title(value.plural) + "$"), replace: strings.Title(value.singular)},
|
||||
inflection{regexp: regexp.MustCompile(value.plural + "$"), replace: value.singular},
|
||||
}
|
||||
compiledSingularMaps = append(compiledSingularMaps, infs...)
|
||||
}
|
||||
|
||||
for i := len(pluralInflections) - 1; i >= 0; i-- {
|
||||
value := pluralInflections[i]
|
||||
infs := []inflection{
|
||||
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.find)), replace: strings.ToUpper(value.replace)},
|
||||
inflection{regexp: regexp.MustCompile(value.find), replace: value.replace},
|
||||
inflection{regexp: regexp.MustCompile("(?i)" + value.find), replace: value.replace},
|
||||
}
|
||||
compiledPluralMaps = append(compiledPluralMaps, infs...)
|
||||
}
|
||||
|
||||
for i := len(singularInflections) - 1; i >= 0; i-- {
|
||||
value := singularInflections[i]
|
||||
infs := []inflection{
|
||||
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.find)), replace: strings.ToUpper(value.replace)},
|
||||
inflection{regexp: regexp.MustCompile(value.find), replace: value.replace},
|
||||
inflection{regexp: regexp.MustCompile("(?i)" + value.find), replace: value.replace},
|
||||
}
|
||||
compiledSingularMaps = append(compiledSingularMaps, infs...)
|
||||
}
|
||||
}
|
||||
|
||||
func init() {
|
||||
compile()
|
||||
}
|
||||
|
||||
// AddPlural adds a plural inflection
|
||||
func AddPlural(find, replace string) {
|
||||
pluralInflections = append(pluralInflections, Regular{find, replace})
|
||||
compile()
|
||||
}
|
||||
|
||||
// AddSingular adds a singular inflection
|
||||
func AddSingular(find, replace string) {
|
||||
singularInflections = append(singularInflections, Regular{find, replace})
|
||||
compile()
|
||||
}
|
||||
|
||||
// AddIrregular adds an irregular inflection
|
||||
func AddIrregular(singular, plural string) {
|
||||
irregularInflections = append(irregularInflections, Irregular{singular, plural})
|
||||
compile()
|
||||
}
|
||||
|
||||
// AddUncountable adds an uncountable inflection
|
||||
func AddUncountable(values ...string) {
|
||||
uncountableInflections = append(uncountableInflections, values...)
|
||||
compile()
|
||||
}
|
||||
|
||||
// GetPlural retrieves the plural inflection values
|
||||
func GetPlural() RegularSlice {
|
||||
plurals := make(RegularSlice, len(pluralInflections))
|
||||
copy(plurals, pluralInflections)
|
||||
return plurals
|
||||
}
|
||||
|
||||
// GetSingular retrieves the singular inflection values
|
||||
func GetSingular() RegularSlice {
|
||||
singulars := make(RegularSlice, len(singularInflections))
|
||||
copy(singulars, singularInflections)
|
||||
return singulars
|
||||
}
|
||||
|
||||
// GetIrregular retrieves the irregular inflection values
|
||||
func GetIrregular() IrregularSlice {
|
||||
irregular := make(IrregularSlice, len(irregularInflections))
|
||||
copy(irregular, irregularInflections)
|
||||
return irregular
|
||||
}
|
||||
|
||||
// GetUncountable retrieves the uncountable inflection values
|
||||
func GetUncountable() []string {
|
||||
uncountables := make([]string, len(uncountableInflections))
|
||||
copy(uncountables, uncountableInflections)
|
||||
return uncountables
|
||||
}
|
||||
|
||||
// SetPlural sets the plural inflections slice
|
||||
func SetPlural(inflections RegularSlice) {
|
||||
pluralInflections = inflections
|
||||
compile()
|
||||
}
|
||||
|
||||
// SetSingular sets the singular inflections slice
|
||||
func SetSingular(inflections RegularSlice) {
|
||||
singularInflections = inflections
|
||||
compile()
|
||||
}
|
||||
|
||||
// SetIrregular sets the irregular inflections slice
|
||||
func SetIrregular(inflections IrregularSlice) {
|
||||
irregularInflections = inflections
|
||||
compile()
|
||||
}
|
||||
|
||||
// SetUncountable sets the uncountable inflections slice
|
||||
func SetUncountable(inflections []string) {
|
||||
uncountableInflections = inflections
|
||||
compile()
|
||||
}
|
||||
|
||||
// Plural converts a word to its plural form
|
||||
func Plural(str string) string {
|
||||
for _, inflection := range compiledPluralMaps {
|
||||
if inflection.regexp.MatchString(str) {
|
||||
return inflection.regexp.ReplaceAllString(str, inflection.replace)
|
||||
}
|
||||
}
|
||||
return str
|
||||
}
|
||||
|
||||
// Singular converts a word to its singular form
|
||||
func Singular(str string) string {
|
||||
for _, inflection := range compiledSingularMaps {
|
||||
if inflection.regexp.MatchString(str) {
|
||||
return inflection.regexp.ReplaceAllString(str, inflection.replace)
|
||||
}
|
||||
}
|
||||
return str
|
||||
}
|
||||
23
vendor/github.com/jinzhu/inflection/wercker.yml
generated
vendored
Normal file
23
vendor/github.com/jinzhu/inflection/wercker.yml
generated
vendored
Normal file
@@ -0,0 +1,23 @@
|
||||
box: golang
|
||||
|
||||
build:
|
||||
steps:
|
||||
- setup-go-workspace
|
||||
|
||||
# Gets the dependencies
|
||||
- script:
|
||||
name: go get
|
||||
code: |
|
||||
go get
|
||||
|
||||
# Build the project
|
||||
- script:
|
||||
name: go build
|
||||
code: |
|
||||
go build ./...
|
||||
|
||||
# Test the project
|
||||
- script:
|
||||
name: go test
|
||||
code: |
|
||||
go test ./...
|
||||
14
vendor/github.com/karrick/godirwalk/.gitignore
generated
vendored
Normal file
14
vendor/github.com/karrick/godirwalk/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,14 @@
|
||||
# Binaries for programs and plugins
|
||||
*.exe
|
||||
*.dll
|
||||
*.so
|
||||
*.dylib
|
||||
|
||||
# Test binary, built with `go test -c`
|
||||
*.test
|
||||
|
||||
# Output of the go coverage tool, specifically when used with LiteIDE
|
||||
*.out
|
||||
|
||||
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
|
||||
.glide/
|
||||
25
vendor/github.com/karrick/godirwalk/LICENSE
generated
vendored
Normal file
25
vendor/github.com/karrick/godirwalk/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,25 @@
|
||||
BSD 2-Clause License
|
||||
|
||||
Copyright (c) 2017, Karrick McDermott
|
||||
All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
|
||||
* Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
|
||||
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
|
||||
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
208
vendor/github.com/karrick/godirwalk/README.md
generated
vendored
Normal file
208
vendor/github.com/karrick/godirwalk/README.md
generated
vendored
Normal file
@@ -0,0 +1,208 @@
|
||||
# godirwalk
|
||||
|
||||
`godirwalk` is a library for traversing a directory tree on a file
|
||||
system.
|
||||
|
||||
In short, why do I use this library?
|
||||
|
||||
1. It's faster than `filepath.Walk`.
|
||||
1. It's more correct on Windows than `filepath.Walk`.
|
||||
1. It's easier to use than `filepath.Walk`.
|
||||
1. It's more flexible than `filepath.Walk`.
|
||||
|
||||
## Usage Example
|
||||
|
||||
Additional examples are provided in the `examples/` subdirectory.
|
||||
|
||||
This library will normalize the provided top level directory name
|
||||
based on the os-specific path separator by calling `filepath.Clean` on
|
||||
its first argument. However it always provides the pathname created by
|
||||
using the correct os-specific path separator when invoking the
|
||||
provided callback function.
|
||||
|
||||
```Go
|
||||
dirname := "some/directory/root"
|
||||
err := godirwalk.Walk(dirname, &godirwalk.Options{
|
||||
Callback: func(osPathname string, de *godirwalk.Dirent) error {
|
||||
fmt.Printf("%s %s\n", de.ModeType(), osPathname)
|
||||
return nil
|
||||
},
|
||||
Unsorted: true, // (optional) set true for faster yet non-deterministic enumeration (see godoc)
|
||||
})
|
||||
```
|
||||
|
||||
This library not only provides functions for traversing a file system
|
||||
directory tree, but also for obtaining a list of immediate descendants
|
||||
of a particular directory, typically much more quickly than using
|
||||
`os.ReadDir` or `os.ReadDirnames`.
|
||||
|
||||
Documentation is available via
|
||||
[](https://godoc.org/github.com/karrick/godirwalk).
|
||||
|
||||
## Description
|
||||
|
||||
Here's why I use `godirwalk` in preference to `filepath.Walk`,
|
||||
`os.ReadDir`, and `os.ReadDirnames`.
|
||||
|
||||
### It's faster than `filepath.Walk`
|
||||
|
||||
When compared against `filepath.Walk` in benchmarks, it has been
|
||||
observed to run between five and ten times the speed on darwin, at
|
||||
speeds comparable to that of the unix `find` utility; about twice
|
||||
the speed on linux; and about four times the speed on Windows.
|
||||
|
||||
How does it obtain this performance boost? It does less work to give
|
||||
you nearly the same output. This library calls the same `syscall`
|
||||
functions to do the work, but it makes fewer calls, does not throw
|
||||
away information that it might need, and creates less memory churn
|
||||
along the way by reusing the same scratch buffer rather than
|
||||
reallocating a new buffer every time it reads data from the operating
|
||||
system.
|
||||
|
||||
While traversing a file system directory tree, `filepath.Walk` obtains
|
||||
the list of immediate descendants of a directory, and throws away the
|
||||
file system node type information provided by the operating system
|
||||
that comes with the node's name. Then, immediately prior to invoking
|
||||
the callback function, `filepath.Walk` invokes `os.Stat` for each
|
||||
node, and passes the returned `os.FileInfo` information to the
|
||||
callback.
|
||||
|
||||
While the `os.FileInfo` information provided by `os.Stat` is extremely
|
||||
helpful--and even includes the `os.FileMode` data--providing it
|
||||
requires an additional system call for each node.
|
||||
|
||||
Because most callbacks only care about what the node type is, this
|
||||
library does not throw the type information away, but rather provides
|
||||
that information to the callback function in the form of a
|
||||
`os.FileMode` value. Note that the provided `os.FileMode` value that
|
||||
this library provides only has the node type information, and does not
|
||||
have the permission bits, sticky bits, or other information from the
|
||||
file's mode. If the callback does care about a particular node's
|
||||
entire `os.FileInfo` data structure, the callback can easily invoke
|
||||
`os.Stat` when needed, and only when needed.
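For instance, a callback that only needs full `os.FileInfo` for regular files can make that extra call just for those nodes. This is a minimal sketch using the same `Walk` API shown above; the directory name is a placeholder and the snippet is assumed to run inside a function with `fmt` and `os` imported:

```Go
err := godirwalk.Walk("some/directory/root", &godirwalk.Options{
	Callback: func(osPathname string, de *godirwalk.Dirent) error {
		if !de.IsRegular() {
			return nil // the mode type alone is enough for non-regular nodes
		}
		fi, err := os.Stat(osPathname) // extra syscall only where it is needed
		if err != nil {
			return err
		}
		fmt.Printf("%d %s\n", fi.Size(), osPathname)
		return nil
	},
})
```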
|
||||
|
||||
#### Benchmarks
|
||||
|
||||
##### macOS
|
||||
|
||||
```Bash
|
||||
go test -bench=.
|
||||
goos: darwin
|
||||
goarch: amd64
|
||||
pkg: github.com/karrick/godirwalk
|
||||
BenchmarkFilepathWalk-8 1 3001274570 ns/op
|
||||
BenchmarkGoDirWalk-8 3 465573172 ns/op
|
||||
BenchmarkFlameGraphFilepathWalk-8 1 6957916936 ns/op
|
||||
BenchmarkFlameGraphGoDirWalk-8 1 4210582571 ns/op
|
||||
PASS
|
||||
ok github.com/karrick/godirwalk 16.822s
|
||||
```
|
||||
|
||||
##### Linux
|
||||
|
||||
```Bash
|
||||
go test -bench=.
|
||||
goos: linux
|
||||
goarch: amd64
|
||||
pkg: github.com/karrick/godirwalk
|
||||
BenchmarkFilepathWalk-12 1 1609189170 ns/op
|
||||
BenchmarkGoDirWalk-12 5 211336628 ns/op
|
||||
BenchmarkFlameGraphFilepathWalk-12 1 3968119932 ns/op
|
||||
BenchmarkFlameGraphGoDirWalk-12 1 2139598998 ns/op
|
||||
PASS
|
||||
ok github.com/karrick/godirwalk 9.007s
|
||||
```
|
||||
|
||||
### It's more correct on Windows than `filepath.Walk`
|
||||
|
||||
I did not previously care about this either, but humor me. We all love
|
||||
how we can write once and run everywhere. It is essential for the
|
||||
language's adoption, growth, and success, that the software we create
|
||||
can run unmodified on all architectures and operating systems
|
||||
supported by Go.
|
||||
|
||||
When the traversed file system has a logical loop caused by symbolic
|
||||
links to directories, on unix `filepath.Walk` ignores symbolic links
|
||||
and traverses the entire directory tree without error. On Windows
|
||||
however, `filepath.Walk` will continue following directory symbolic
|
||||
links, even though it is not supposed to, eventually causing
|
||||
`filepath.Walk` to terminate early and return an error when the
|
||||
pathname gets too long from concatenating endless loops of symbolic
|
||||
links onto the pathname. This error comes from Windows, passes through
|
||||
`filepath.Walk`, and to the upstream client running `filepath.Walk`.
|
||||
|
||||
The takeaway is that behavior is different based on which platform
|
||||
`filepath.Walk` is running. While this is clearly not intentional,
|
||||
until it is fixed in the standard library, it presents a compatibility
|
||||
problem.
|
||||
|
||||
This library correctly identifies symbolic links that point to
|
||||
directories and will only follow them when `FollowSymbolicLinks` is
|
||||
set to true. Behavior on Windows and other operating systems is
|
||||
identical.
|
||||
|
||||
### It's easier to use than `filepath.Walk`
|
||||
|
||||
Since this library does not invoke `os.Stat` on every file system node
|
||||
it encounters, there is no possible error event for the callback
|
||||
function to filter on. The third argument in the `filepath.WalkFunc`
|
||||
function signature to pass the error from `os.Stat` to the callback
|
||||
function is no longer necessary, and has thus been eliminated from the signature of
|
||||
the callback function in this library.
|
||||
|
||||
Also, `filepath.Walk` invokes the callback function with a solidus
|
||||
delimited pathname regardless of the os-specific path separator. This
|
||||
library invokes the callback function with the os-specific pathname
|
||||
separator, obviating a call to `filepath.Clean` in the callback
|
||||
function for each node prior to actually using the provided pathname.
|
||||
|
||||
In other words, even on Windows, `filepath.Walk` will invoke the
|
||||
callback with `some/path/to/foo.txt`, requiring well written clients
|
||||
to perform pathname normalization for every file prior to working with
|
||||
the specified file. In truth, many clients developed on unix and not
|
||||
tested on Windows neglect this subtlety, which results in software
|
||||
bugs when running on Windows. This library would invoke the callback
|
||||
function with `some\path\to\foo.txt` for the same file when running on
|
||||
Windows, eliminating the need to normalize the pathname by the client,
|
||||
and lessening the likelihood that a client will work on unix but not on
|
||||
Windows.
|
||||
|
||||
### It's more flexible than `filepath.Walk`
|
||||
|
||||
#### Configurable Handling of Symbolic Links
|
||||
|
||||
The default behavior of this library is to ignore symbolic links to
|
||||
directories when walking a directory tree, just like `filepath.Walk`
|
||||
does. However, it does invoke the callback function with each node it
|
||||
finds, including symbolic links. If a particular use case exists to
|
||||
follow symbolic links when traversing a directory tree, this library
|
||||
can be invoked in manner to do so, by setting the
|
||||
`FollowSymbolicLinks` parameter to true.
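A hedged sketch of that option, using the same assumed `Walk` call as the usage example above:

```Go
err := godirwalk.Walk(dirname, &godirwalk.Options{
	FollowSymbolicLinks: true, // descend into directories reached via symlinks
	Callback: func(osPathname string, de *godirwalk.Dirent) error {
		fmt.Println(osPathname)
		return nil
	},
})
```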
|
||||
|
||||
#### Configurable Sorting of Directory Children
|
||||
|
||||
The default behavior of this library is to always sort the immediate
|
||||
descendants of a directory prior to visiting each node, just like
|
||||
`filepath.Walk` does. This is usually the desired behavior. However,
|
||||
this comes at the cost of sorting the names when a
|
||||
directory node has many entries. If a particular use case exists that
|
||||
does not require sorting the directory's immediate descendants prior
|
||||
to visiting its nodes, this library will skip the sorting step when
|
||||
the `Unsorted` parameter is set to true.
|
||||
|
||||
#### Configurable Post Children Callback
|
||||
|
||||
This library provides upstream code with the ability to specify a
|
||||
callback to be invoked for each directory after its children are
|
||||
processed. This has been used to recursively delete empty directories
|
||||
after traversing the file system in a more efficient manner. See the
|
||||
`examples/clean-empties` directory for an example of this usage.
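A minimal sketch of that idea, assuming only the `Options` fields and `ReadDirnames` function defined in this library (error handling kept deliberately simple):

```Go
err := godirwalk.Walk(dirname, &godirwalk.Options{
	Callback: func(string, *godirwalk.Dirent) error { return nil },
	PostChildrenCallback: func(osPathname string, de *godirwalk.Dirent) error {
		// Children have already been visited; remove the directory if it is now empty.
		names, err := godirwalk.ReadDirnames(osPathname, nil)
		if err != nil {
			return err
		}
		if len(names) == 0 {
			return os.Remove(osPathname)
		}
		return nil
	},
})
```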
|
||||
|
||||
#### Configurable Error Callback
|
||||
|
||||
This library provides upstream code with the ability to specify a
|
||||
callback to be invoked for errors that the operating system returns,
|
||||
allowing the upstream code to determine the next course of action to
|
||||
take, whether to halt walking the hierarchy, as it would do were no
|
||||
error callback provided, or skip the node that caused the error. See
|
||||
the `examples/walk-fast` directory for an example of this usage.
|
||||
72
vendor/github.com/karrick/godirwalk/dirent.go
generated
vendored
Normal file
72
vendor/github.com/karrick/godirwalk/dirent.go
generated
vendored
Normal file
@@ -0,0 +1,72 @@
|
||||
package godirwalk
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
// Dirent stores the name and file system mode type of discovered file system
|
||||
// entries.
|
||||
type Dirent struct {
|
||||
name string
|
||||
modeType os.FileMode
|
||||
}
|
||||
|
||||
// NewDirent returns a newly initialized Dirent structure, or an error. This
|
||||
// function does not follow symbolic links.
|
||||
//
|
||||
// This function is rarely used, as Dirent structures are provided by other
|
||||
// functions in this library that read and walk directories.
|
||||
func NewDirent(osPathname string) (*Dirent, error) {
|
||||
fi, err := os.Lstat(osPathname)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &Dirent{
|
||||
name: filepath.Base(osPathname),
|
||||
modeType: fi.Mode() & os.ModeType,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Name returns the basename of the file system entry.
|
||||
func (de Dirent) Name() string { return de.name }
|
||||
|
||||
// ModeType returns the mode bits that specify the file system node type. We
|
||||
// could make our own enum-like data type for encoding the file type, but Go's
|
||||
// runtime already gives us architecture independent file modes, as discussed in
|
||||
// `os/types.go`:
|
||||
//
|
||||
// Go's runtime FileMode type has same definition on all systems, so that
|
||||
// information about files can be moved from one system to another portably.
|
||||
func (de Dirent) ModeType() os.FileMode { return de.modeType }
|
||||
|
||||
// IsDir returns true if and only if the Dirent represents a file system
|
||||
// directory. Note that on some operating systems, more than one file mode bit
|
||||
// may be set for a node. For instance, on Windows, a symbolic link that points
|
||||
// to a directory will have both the directory and the symbolic link bits set.
|
||||
func (de Dirent) IsDir() bool { return de.modeType&os.ModeDir != 0 }
|
||||
|
||||
// IsRegular returns true if and only if the Dirent represents a regular
|
||||
// file. That is, it ensures that no mode type bits are set.
|
||||
func (de Dirent) IsRegular() bool { return de.modeType&os.ModeType == 0 }
|
||||
|
||||
// IsSymlink returns true if and only if the Dirent represents a file system
|
||||
// symbolic link. Note that on some operating systems, more than one file mode
|
||||
// bit may be set for a node. For instance, on Windows, a symbolic link that
|
||||
// points to a directory will have both the directory and the symbolic link bits
|
||||
// set.
|
||||
func (de Dirent) IsSymlink() bool { return de.modeType&os.ModeSymlink != 0 }
|
||||
|
||||
// Dirents represents a slice of Dirent pointers, which are sortable by
|
||||
// name. This type satisfies the `sort.Interface` interface.
|
||||
type Dirents []*Dirent
|
||||
|
||||
// Len returns the count of Dirent structures in the slice.
|
||||
func (l Dirents) Len() int { return len(l) }
|
||||
|
||||
// Less returns true if and only if the Name of the element specified by the
|
||||
// first index is lexicographically less than that of the second index.
|
||||
func (l Dirents) Less(i, j int) bool { return l[i].name < l[j].name }
|
||||
|
||||
// Swap exchanges the two Dirent entries specified by the two provided indexes.
|
||||
func (l Dirents) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
|
||||
34
vendor/github.com/karrick/godirwalk/doc.go
generated
vendored
Normal file
34
vendor/github.com/karrick/godirwalk/doc.go
generated
vendored
Normal file
@@ -0,0 +1,34 @@
|
||||
/*
|
||||
Package godirwalk provides functions to read and traverse directory trees.
|
||||
|
||||
In short, why do I use this library?
|
||||
|
||||
* It's faster than `filepath.Walk`.
|
||||
|
||||
* It's more correct on Windows than `filepath.Walk`.
|
||||
|
||||
* It's easier to use than `filepath.Walk`.
|
||||
|
||||
* It's more flexible than `filepath.Walk`.
|
||||
|
||||
USAGE
|
||||
|
||||
This library will normalize the provided top level directory name based on the
|
||||
os-specific path separator by calling `filepath.Clean` on its first
|
||||
argument. However it always provides the pathname created by using the correct
|
||||
os-specific path separator when invoking the provided callback function.
|
||||
|
||||
dirname := "some/directory/root"
|
||||
err := godirwalk.Walk(dirname, &godirwalk.Options{
|
||||
Callback: func(osPathname string, de *godirwalk.Dirent) error {
|
||||
fmt.Printf("%s %s\n", de.ModeType(), osPathname)
|
||||
return nil
|
||||
},
|
||||
})
|
||||
|
||||
This library not only provides functions for traversing a file system directory
|
||||
tree, but also for obtaining a list of immediate descendants of a particular
|
||||
directory, typically much more quickly than using `os.ReadDir` or
|
||||
`os.ReadDirnames`.
|
||||
*/
|
||||
package godirwalk
|
||||
1
vendor/github.com/karrick/godirwalk/go.mod
generated
vendored
Normal file
1
vendor/github.com/karrick/godirwalk/go.mod
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
module github.com/karrick/godirwalk
|
||||
0
vendor/github.com/karrick/godirwalk/go.sum
generated
vendored
Normal file
0
vendor/github.com/karrick/godirwalk/go.sum
generated
vendored
Normal file
47
vendor/github.com/karrick/godirwalk/readdir.go
generated
vendored
Normal file
47
vendor/github.com/karrick/godirwalk/readdir.go
generated
vendored
Normal file
@@ -0,0 +1,47 @@
|
||||
package godirwalk
|
||||
|
||||
// ReadDirents returns a sortable slice of pointers to Dirent structures, each
|
||||
// representing the file system name and mode type for one of the immediate
|
||||
// descendant of the specified directory. If the specified directory is a
|
||||
// symbolic link, it will be resolved.
|
||||
//
|
||||
// If an optional scratch buffer is provided that is at least one page of
|
||||
// memory, it will be used when reading directory entries from the file system.
|
||||
//
|
||||
// children, err := godirwalk.ReadDirents(osDirname, nil)
|
||||
// if err != nil {
|
||||
// return nil, errors.Wrap(err, "cannot get list of directory children")
|
||||
// }
|
||||
// sort.Sort(children)
|
||||
// for _, child := range children {
|
||||
// fmt.Printf("%s %s\n", child.ModeType, child.Name)
|
||||
// }
|
||||
func ReadDirents(osDirname string, scratchBuffer []byte) (Dirents, error) {
|
||||
return readdirents(osDirname, scratchBuffer)
|
||||
}
|
||||
|
||||
// ReadDirnames returns a slice of strings, representing the immediate
|
||||
// descendants of the specified directory. If the specified directory is a
|
||||
// symbolic link, it will be resolved.
|
||||
//
|
||||
// If an optional scratch buffer is provided that is at least one page of
|
||||
// memory, it will be used when reading directory entries from the file system.
|
||||
//
|
||||
// Note that this function, depending on operating system, may or may not invoke
|
||||
// the ReadDirents function, in order to prepare the list of immediate
|
||||
// descendants. Therefore, if your program needs both the names and the file
|
||||
// system mode types of descendants, it will always be faster to invoke
|
||||
// ReadDirents directly, rather than calling this function, then looping over
|
||||
// the results and calling os.Stat for each child.
|
||||
//
|
||||
// children, err := godirwalk.ReadDirnames(osDirname, nil)
|
||||
// if err != nil {
|
||||
// return nil, errors.Wrap(err, "cannot get list of directory children")
|
||||
// }
|
||||
// sort.Strings(children)
|
||||
// for _, child := range children {
|
||||
// fmt.Printf("%s\n", child)
|
||||
// }
|
||||
func ReadDirnames(osDirname string, scratchBuffer []byte) ([]string, error) {
|
||||
return readdirnames(osDirname, scratchBuffer)
|
||||
}
|
||||
107
vendor/github.com/karrick/godirwalk/readdir_unix.go
generated
vendored
Normal file
107
vendor/github.com/karrick/godirwalk/readdir_unix.go
generated
vendored
Normal file
@@ -0,0 +1,107 @@
|
||||
// +build darwin freebsd linux netbsd openbsd
|
||||
|
||||
package godirwalk
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
func readdirents(osDirname string, scratchBuffer []byte) (Dirents, error) {
|
||||
dh, err := os.Open(osDirname)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var entries Dirents
|
||||
|
||||
fd := int(dh.Fd())
|
||||
|
||||
if len(scratchBuffer) < MinimumScratchBufferSize {
|
||||
scratchBuffer = make([]byte, DefaultScratchBufferSize)
|
||||
}
|
||||
|
||||
var de *syscall.Dirent
|
||||
|
||||
for {
|
||||
n, err := syscall.ReadDirent(fd, scratchBuffer)
|
||||
if err != nil {
|
||||
_ = dh.Close() // ignore potential error returned by Close
|
||||
return nil, err
|
||||
}
|
||||
if n <= 0 {
|
||||
break // end of directory reached
|
||||
}
|
||||
// Loop over the bytes returned by reading the directory entries.
|
||||
buf := scratchBuffer[:n]
|
||||
for len(buf) > 0 {
|
||||
de = (*syscall.Dirent)(unsafe.Pointer(&buf[0])) // point entry to first syscall.Dirent in buffer
|
||||
buf = buf[de.Reclen:] // advance buffer
|
||||
|
||||
if inoFromDirent(de) == 0 {
|
||||
continue // this item has been deleted, but not yet removed from directory
|
||||
}
|
||||
|
||||
nameSlice := nameFromDirent(de)
|
||||
namlen := len(nameSlice)
|
||||
if (namlen == 0) || (namlen == 1 && nameSlice[0] == '.') || (namlen == 2 && nameSlice[0] == '.' && nameSlice[1] == '.') {
|
||||
continue // skip unimportant entries
|
||||
}
|
||||
osChildname := string(nameSlice)
|
||||
|
||||
// Convert the syscall constant, which is in the purview of the OS, to a
|
||||
// constant defined by Go, assumed by this project to be stable.
|
||||
var mode os.FileMode
|
||||
switch de.Type {
|
||||
case syscall.DT_REG:
|
||||
// regular file
|
||||
case syscall.DT_DIR:
|
||||
mode = os.ModeDir
|
||||
case syscall.DT_LNK:
|
||||
mode = os.ModeSymlink
|
||||
case syscall.DT_CHR:
|
||||
mode = os.ModeDevice | os.ModeCharDevice
|
||||
case syscall.DT_BLK:
|
||||
mode = os.ModeDevice
|
||||
case syscall.DT_FIFO:
|
||||
mode = os.ModeNamedPipe
|
||||
case syscall.DT_SOCK:
|
||||
mode = os.ModeSocket
|
||||
default:
|
||||
// If syscall returned unknown type (e.g., DT_UNKNOWN, DT_WHT),
|
||||
// then resolve actual mode by getting stat.
|
||||
fi, err := os.Lstat(filepath.Join(osDirname, osChildname))
|
||||
if err != nil {
|
||||
_ = dh.Close() // ignore potential error returned by Close
|
||||
return nil, err
|
||||
}
|
||||
// We only care about the bits that identify the type of a file
|
||||
// system node, and can ignore append, exclusive, temporary,
|
||||
// setuid, setgid, permission bits, and sticky bits, which are
|
||||
// coincident to the bits that declare type of the file system
|
||||
// node.
|
||||
mode = fi.Mode() & os.ModeType
|
||||
}
|
||||
|
||||
entries = append(entries, &Dirent{name: osChildname, modeType: mode})
|
||||
}
|
||||
}
|
||||
if err = dh.Close(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return entries, nil
|
||||
}
|
||||
|
||||
func readdirnames(osDirname string, scratchBuffer []byte) ([]string, error) {
|
||||
des, err := readdirents(osDirname, scratchBuffer)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
names := make([]string, len(des))
|
||||
for i, v := range des {
|
||||
names[i] = v.name
|
||||
}
|
||||
return names, nil
|
||||
}
|
||||
52
vendor/github.com/karrick/godirwalk/readdir_windows.go
generated
vendored
Normal file
52
vendor/github.com/karrick/godirwalk/readdir_windows.go
generated
vendored
Normal file
@@ -0,0 +1,52 @@
|
||||
package godirwalk
|
||||
|
||||
import (
|
||||
"os"
|
||||
)
|
||||
|
||||
// The functions in this file are mere wrappers of what is already provided by
|
||||
// standard library, in order to provide the same API as this library provides.
|
||||
//
|
||||
// The scratch buffer argument is ignored by this architecture.
|
||||
//
|
||||
// Please send PR or link to article if you know of a more performant way of
|
||||
// enumerating directory contents and mode types on Windows.
|
||||
|
||||
func readdirents(osDirname string, _ []byte) (Dirents, error) {
|
||||
dh, err := os.Open(osDirname)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fileinfos, err := dh.Readdir(0)
|
||||
if er := dh.Close(); err == nil {
|
||||
err = er
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
entries := make(Dirents, len(fileinfos))
|
||||
for i, info := range fileinfos {
|
||||
entries[i] = &Dirent{name: info.Name(), modeType: info.Mode() & os.ModeType}
|
||||
}
|
||||
|
||||
return entries, nil
|
||||
}
|
||||
|
||||
func readdirnames(osDirname string, _ []byte) ([]string, error) {
|
||||
dh, err := os.Open(osDirname)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
entries, err := dh.Readdirnames(0)
|
||||
if er := dh.Close(); err == nil {
|
||||
err = er
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return entries, nil
|
||||
}
|
||||
365
vendor/github.com/karrick/godirwalk/walk.go
generated
vendored
Normal file
365
vendor/github.com/karrick/godirwalk/walk.go
generated
vendored
Normal file
@@ -0,0 +1,365 @@
|
||||
package godirwalk
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
)
|
||||
|
||||
// DefaultScratchBufferSize specifies the size of the scratch buffer that will
|
||||
// be allocated by Walk, ReadDirents, or ReadDirnames when a scratch buffer is
|
||||
// not provided or the scratch buffer that is provided is smaller than
|
||||
// MinimumScratchBufferSize bytes. This may seem like a large value; however,
|
||||
// when a program intends to enumerate large directories, having a larger
|
||||
// scratch buffer results in fewer operating system calls.
|
||||
const DefaultScratchBufferSize = 64 * 1024
|
||||
|
||||
// MinimumScratchBufferSize specifies the minimum size of the scratch buffer
|
||||
// that Walk, ReadDirents, and ReadDirnames will use when reading file entries
|
||||
// from the operating system. It is initialized to the result from calling
|
||||
// `os.Getpagesize()` during program startup.
|
||||
var MinimumScratchBufferSize int
|
||||
|
||||
func init() {
|
||||
MinimumScratchBufferSize = os.Getpagesize()
|
||||
}
|
||||
|
||||
// Options provide parameters for how the Walk function operates.
|
||||
type Options struct {
|
||||
// ErrorCallback specifies a function to be invoked in the case of an error
|
||||
// that could potentially be ignored while walking a file system
|
||||
// hierarchy. When set to nil or left as its zero-value, any error condition
|
||||
// causes Walk to immediately return the error describing what took
|
||||
// place. When non-nil, this user supplied function is invoked with the OS
|
||||
// pathname of the file system object that caused the error along with the
|
||||
// error that took place. The return value of the supplied ErrorCallback
|
||||
// function determines whether the error will cause Walk to halt immediately
|
||||
// as it would were no ErrorCallback value provided, or skip this file
|
||||
// system node yet continue on with the remaining nodes in the file system
|
||||
// hierarchy.
|
||||
//
|
||||
// ErrorCallback is invoked both for errors that are returned by the
|
||||
// runtime, and for errors returned by other user supplied callback
|
||||
// functions.
|
||||
ErrorCallback func(string, error) ErrorAction
|
||||
|
||||
// FollowSymbolicLinks specifies whether Walk will follow symbolic links
|
||||
// that refer to directories. When set to false or left as its zero-value,
|
||||
// Walk will still invoke the callback function with symbolic link nodes,
|
||||
// but if the symbolic link refers to a directory, it will not recurse on
|
||||
// that directory. When set to true, Walk will recurse on symbolic links
|
||||
// that refer to a directory.
|
||||
FollowSymbolicLinks bool
|
||||
|
||||
// Unsorted controls whether or not Walk will sort the immediate descendants
|
||||
// of a directory by their relative names prior to visiting each of those
|
||||
// entries.
|
||||
//
|
||||
// When set to false or left at its zero-value, Walk will get the list of
|
||||
// immediate descendants of a particular directory, sort that list by
|
||||
// lexical order of their names, and then visit each node in the list in
|
||||
// sorted order. This will cause Walk to always traverse the same directory
|
||||
// tree in the same order, however may be inefficient for directories with
|
||||
// many immediate descendants.
|
||||
//
|
||||
// When set to true, Walk skips sorting the list of immediate descendants
|
||||
// for a directory, and simply visits each node in the order the operating
|
||||
system enumerated them. This will be faster, but has the side effect
|
||||
// that the traversal order may be different from one invocation to the
|
||||
// next.
|
||||
Unsorted bool
|
||||
|
||||
// Callback is a required function that Walk will invoke for every file
|
||||
// system node it encounters.
|
||||
Callback WalkFunc
|
||||
|
||||
// PostChildrenCallback is an option function that Walk will invoke for
|
||||
// every file system directory it encounters after its children have been
|
||||
// processed.
|
||||
PostChildrenCallback WalkFunc
|
||||
|
||||
// ScratchBuffer is an optional byte slice to use as a scratch buffer for
|
||||
// Walk to use when reading directory entries, to reduce amount of garbage
|
||||
// generation. Not all architectures take advantage of the scratch
|
||||
// buffer. If omitted or the provided buffer has fewer bytes than
|
||||
// MinimumScratchBufferSize, then a buffer with DefaultScratchBufferSize
|
||||
// bytes will be created and used once per Walk invocation.
|
||||
ScratchBuffer []byte
|
||||
}
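// Illustrative sketch (not part of godirwalk): reuse one scratch buffer across
// repeated walks to reduce garbage; cb stands in for any WalkFunc.
//
//	buf := make([]byte, godirwalk.DefaultScratchBufferSize)
//	err := godirwalk.Walk(dirname, &godirwalk.Options{
//		Callback:      cb,
//		ScratchBuffer: buf,
//	})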
|
||||
|
||||
// ErrorAction defines a set of actions the Walk function could take based on
|
||||
// the occurrence of an error while walking the file system. See the
|
||||
// documentation for the ErrorCallback field of the Options structure for more
|
||||
// information.
|
||||
type ErrorAction int
|
||||
|
||||
const (
|
||||
// Halt is the ErrorAction return value when the upstream code wants to halt
|
||||
// the walk process when a runtime error takes place. It matches the default
|
||||
// action the Walk function would take were no ErrorCallback provided.
|
||||
Halt ErrorAction = iota
|
||||
|
||||
// SkipNode is the ErrorAction return value when the upstream code wants to
|
||||
// ignore the runtime error for the current file system node, skip
|
||||
// processing of the node that caused the error, and continue walking the
|
||||
// file system hierarchy with the remaining nodes.
|
||||
SkipNode
|
||||
)
|
||||
|
||||
// WalkFunc is the type of the function called for each file system node visited
|
||||
// by Walk. The pathname argument will contain the argument to Walk as a prefix;
|
||||
// that is, if Walk is called with "dir", which is a directory containing the
|
||||
// file "a", the provided WalkFunc will be invoked with the argument "dir/a",
|
||||
// using the correct os.PathSeparator for the Go Operating System architecture,
|
||||
// GOOS. The directory entry argument is a pointer to a Dirent for the node,
|
||||
// providing access to both the basename and the mode type of the file system
|
||||
// node.
|
||||
//
|
||||
// If an error is returned by the Callback or PostChildrenCallback functions,
|
||||
// and no ErrorCallback function is provided, processing stops. If an
|
||||
// ErrorCallback function is provided, then it is invoked with the OS pathname
|
||||
// of the node that caused the error along with the error. The return
|
||||
// value of the ErrorCallback function determines whether to halt processing, or
|
||||
// skip this node and continue processing remaining file system nodes.
|
||||
//
|
||||
// The exception is when the function returns the special value
|
||||
// filepath.SkipDir. If the function returns filepath.SkipDir when invoked on a
|
||||
// directory, Walk skips the directory's contents entirely. If the function
|
||||
// returns filepath.SkipDir when invoked on a non-directory file system node,
|
||||
// Walk skips the remaining files in the containing directory. Note that any
|
||||
// supplied ErrorCallback function is not invoked with filepath.SkipDir when the
|
||||
// Callback or PostChildrenCallback functions return that special value.
|
||||
type WalkFunc func(osPathname string, directoryEntry *Dirent) error
|
||||
|
||||
// Walk walks the file tree rooted at the specified directory, calling the
|
||||
// specified callback function for each file system node in the tree, including
|
||||
// root, symbolic links, and other node types. The nodes are walked in lexical
|
||||
// order, which makes the output deterministic but means that for very large
|
||||
// directories this function can be inefficient.
|
||||
//
|
||||
// This function is often much faster than filepath.Walk because it does not
|
||||
// invoke os.Stat for every node it encounters, but rather obtains the file
|
||||
// system node type when it reads the parent directory.
|
||||
//
|
||||
// If a runtime error occurs, either from the operating system or from the
|
||||
// upstream Callback or PostChildrenCallback functions, processing typically
|
||||
// halts. However, when an ErrorCallback function is provided in the provided
|
||||
// Options structure, that function is invoked with the error along with the OS
|
||||
// pathname of the file system node that caused the error. The ErrorCallback
|
||||
// function's return value determines the action that Walk will then take.
|
||||
//
|
||||
// func main() {
|
||||
// dirname := "."
|
||||
// if len(os.Args) > 1 {
|
||||
// dirname = os.Args[1]
|
||||
// }
|
||||
// err := godirwalk.Walk(dirname, &godirwalk.Options{
|
||||
// Callback: func(osPathname string, de *godirwalk.Dirent) error {
|
||||
// fmt.Printf("%s %s\n", de.ModeType(), osPathname)
|
||||
// return nil
|
||||
// },
|
||||
// ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
|
||||
// // Your program may want to log the error somehow.
|
||||
// fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
|
||||
//
|
||||
// // For the purposes of this example, a simple SkipNode will suffice,
|
||||
// // although in reality perhaps additional logic might be called for.
|
||||
// return godirwalk.SkipNode
|
||||
// },
|
||||
// })
|
||||
// if err != nil {
|
||||
// fmt.Fprintf(os.Stderr, "%s\n", err)
|
||||
// os.Exit(1)
|
||||
// }
|
||||
// }
|
||||
func Walk(pathname string, options *Options) error {
|
||||
pathname = filepath.Clean(pathname)
|
||||
|
||||
var fi os.FileInfo
|
||||
var err error
|
||||
|
||||
if options.FollowSymbolicLinks {
|
||||
fi, err = os.Stat(pathname)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
fi, err = os.Lstat(pathname)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
mode := fi.Mode()
|
||||
if mode&os.ModeDir == 0 {
|
||||
return fmt.Errorf("cannot Walk non-directory: %s", pathname)
|
||||
}
|
||||
|
||||
dirent := &Dirent{
|
||||
name: filepath.Base(pathname),
|
||||
modeType: mode & os.ModeType,
|
||||
}
|
||||
|
||||
// If ErrorCallback is nil, set to a default value that halts the walk
|
||||
// process on all operating system errors. This is done to allow error
|
||||
// handling to be more succinct in the walk code.
|
||||
if options.ErrorCallback == nil {
|
||||
options.ErrorCallback = defaultErrorCallback
|
||||
}
|
||||
|
||||
if len(options.ScratchBuffer) < MinimumScratchBufferSize {
|
||||
options.ScratchBuffer = make([]byte, DefaultScratchBufferSize)
|
||||
}
|
||||
|
||||
err = walk(pathname, dirent, options)
|
||||
if err == filepath.SkipDir {
|
||||
return nil // silence SkipDir for top level
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// defaultErrorCallback always returns Halt because if the upstream code did not
|
||||
// provide an ErrorCallback function, walking the file system hierarchy ought to
|
||||
// halt upon any operating system error.
|
||||
func defaultErrorCallback(_ string, _ error) ErrorAction { return Halt }
|
||||
|
||||
// walk recursively traverses the file system node specified by pathname and the
|
||||
// Dirent.
|
||||
func walk(osPathname string, dirent *Dirent, options *Options) error {
|
||||
err := options.Callback(osPathname, dirent)
|
||||
if err != nil {
|
||||
if err == filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
err = errCallback(err.Error()) // wrap potential errors returned by callback
|
||||
if action := options.ErrorCallback(osPathname, err); action == SkipNode {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// On some platforms, an entry can have more than one mode type bit set.
|
||||
// For instance, it could have both the symlink bit and the directory bit
|
||||
// set indicating it's a symlink to a directory.
|
||||
if dirent.IsSymlink() {
|
||||
if !options.FollowSymbolicLinks {
|
||||
return nil
|
||||
}
|
||||
// Only need to Stat entry if platform did not already have os.ModeDir
|
||||
// set, such as would be the case for Unix-like operating systems. (This
|
||||
// guard eliminates extra os.Stat check on Windows.)
|
||||
if !dirent.IsDir() {
|
||||
referent, err := os.Readlink(osPathname)
|
||||
if err != nil {
|
||||
if action := options.ErrorCallback(osPathname, err); action == SkipNode {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
var osp string
|
||||
if filepath.IsAbs(referent) {
|
||||
osp = referent
|
||||
} else {
|
||||
osp = filepath.Join(filepath.Dir(osPathname), referent)
|
||||
}
|
||||
|
||||
fi, err := os.Stat(osp)
|
||||
if err != nil {
|
||||
if action := options.ErrorCallback(osp, err); action == SkipNode {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
dirent.modeType = fi.Mode() & os.ModeType
|
||||
}
|
||||
}
|
||||
|
||||
if !dirent.IsDir() {
|
||||
return nil
|
||||
}
|
||||
|
||||
// If we get here, the specified pathname refers to a directory.
|
||||
deChildren, err := ReadDirents(osPathname, options.ScratchBuffer)
|
||||
if err != nil {
|
||||
if action := options.ErrorCallback(osPathname, err); action == SkipNode {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if !options.Unsorted {
|
||||
sort.Sort(deChildren) // sort children entries unless upstream says to leave unsorted
|
||||
}
|
||||
|
||||
for _, deChild := range deChildren {
|
||||
osChildname := filepath.Join(osPathname, deChild.name)
|
||||
err = walk(osChildname, deChild, options)
|
||||
if err != nil {
|
||||
if err != filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
// If we received filepath.SkipDir on a directory, stop processing that
|
||||
// directory, but continue with its siblings. If we received it on a
|
||||
// non-directory, stop processing remaining siblings.
|
||||
if deChild.IsSymlink() {
|
||||
// Only need to Stat entry if platform did not already have
|
||||
// os.ModeDir set, such as would be the case for Unix-like
|
||||
// operating systems. (This guard eliminates extra os.Stat check
|
||||
// on Windows.)
|
||||
if !deChild.IsDir() {
|
||||
// Resolve symbolic link referent to determine whether node
|
||||
// is directory or not.
|
||||
referent, err := os.Readlink(osChildname)
|
||||
if err != nil {
|
||||
if action := options.ErrorCallback(osChildname, err); action == SkipNode {
|
||||
continue // with next child
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
var osp string
|
||||
if filepath.IsAbs(referent) {
|
||||
osp = referent
|
||||
} else {
|
||||
osp = filepath.Join(osPathname, referent)
|
||||
}
|
||||
|
||||
fi, err := os.Stat(osp)
|
||||
if err != nil {
|
||||
if action := options.ErrorCallback(osp, err); action == SkipNode {
|
||||
continue // with next child
|
||||
}
|
||||
return err
|
||||
}
|
||||
deChild.modeType = fi.Mode() & os.ModeType
|
||||
}
|
||||
}
|
||||
if !deChild.IsDir() {
|
||||
// If not directory, return immediately, thus skipping remainder
|
||||
// of siblings.
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if options.PostChildrenCallback == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
err = options.PostChildrenCallback(osPathname, dirent)
|
||||
if err == nil || err == filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
|
||||
err = errCallback(err.Error()) // wrap potential errors returned by callback
|
||||
if action := options.ErrorCallback(osPathname, err); action == SkipNode {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
type errCallback string
|
||||
|
||||
func (e errCallback) Error() string { return string(e) }
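
Pulling together the Options fields and the filepath.SkipDir behavior documented above, a minimal usage sketch might look like the following. The ".git" directory name, the buffer size, and the choice to skip on error are illustrative only, not prescribed by the library.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/karrick/godirwalk"
)

func main() {
	err := godirwalk.Walk(".", &godirwalk.Options{
		// Reuse one scratch buffer across ReadDirents calls.
		ScratchBuffer: make([]byte, godirwalk.DefaultScratchBufferSize),
		// Skip the lexical sort performed in walk() for speed.
		Unsorted: true,
		Callback: func(osPathname string, de *godirwalk.Dirent) error {
			// Returning filepath.SkipDir for a directory makes Walk
			// skip that directory's contents entirely.
			if de.IsDir() && filepath.Base(osPathname) == ".git" {
				return filepath.SkipDir
			}
			fmt.Println(osPathname)
			return nil
		},
		ErrorCallback: func(osPathname string, err error) godirwalk.ErrorAction {
			// Skip the offending node and keep walking.
			fmt.Fprintf(os.Stderr, "skipping %s: %s\n", osPathname, err)
			return godirwalk.SkipNode
		},
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```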
|
||||
9
vendor/github.com/karrick/godirwalk/withFileno.go
generated
vendored
Normal file
@@ -0,0 +1,9 @@
|
||||
// +build dragonfly freebsd openbsd netbsd
|
||||
|
||||
package godirwalk
|
||||
|
||||
import "syscall"
|
||||
|
||||
func inoFromDirent(de *syscall.Dirent) uint64 {
|
||||
return uint64(de.Fileno)
|
||||
}
|
||||
9
vendor/github.com/karrick/godirwalk/withIno.go
generated
vendored
Normal file
@@ -0,0 +1,9 @@
|
||||
// +build darwin linux
|
||||
|
||||
package godirwalk
|
||||
|
||||
import "syscall"
|
||||
|
||||
func inoFromDirent(de *syscall.Dirent) uint64 {
|
||||
return de.Ino
|
||||
}
|
||||
29
vendor/github.com/karrick/godirwalk/withNamlen.go
generated
vendored
Normal file
@@ -0,0 +1,29 @@
|
||||
// +build darwin dragonfly freebsd netbsd openbsd
|
||||
|
||||
package godirwalk
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
func nameFromDirent(de *syscall.Dirent) []byte {
|
||||
// Because this GOOS' syscall.Dirent provides a Namlen field that says how
|
||||
// long the name is, this function does not need to search for the NULL
|
||||
// byte.
|
||||
ml := int(de.Namlen)
|
||||
|
||||
// Convert syscall.Dirent.Name, which is array of int8, to []byte, by
|
||||
// overwriting Cap, Len, and Data slice header fields to values from
|
||||
// syscall.Dirent fields. Setting the Cap, Len, and Data field values for
|
||||
// the slice header modifies what the slice header points to, and in this
|
||||
// case, the name buffer.
|
||||
var name []byte
|
||||
sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
|
||||
sh.Cap = ml
|
||||
sh.Len = ml
|
||||
sh.Data = uintptr(unsafe.Pointer(&de.Name[0]))
|
||||
|
||||
return name
|
||||
}
|
||||
36
vendor/github.com/karrick/godirwalk/withoutNamlen.go
generated
vendored
Normal file
@@ -0,0 +1,36 @@
|
||||
// +build nacl linux solaris
|
||||
|
||||
package godirwalk
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
func nameFromDirent(de *syscall.Dirent) []byte {
|
||||
// Because this GOOS' syscall.Dirent does not provide a field that specifies
|
||||
// the name length, this function must first calculate the max possible name
|
||||
// length, and then search for the NULL byte.
|
||||
ml := int(uint64(de.Reclen) - uint64(unsafe.Offsetof(syscall.Dirent{}.Name)))
|
||||
|
||||
// Convert syscall.Dirent.Name, which is array of int8, to []byte, by
|
||||
// overwriting Cap, Len, and Data slice header fields to values from
|
||||
// syscall.Dirent fields. Setting the Cap, Len, and Data field values for
|
||||
// the slice header modifies what the slice header points to, and in this
|
||||
// case, the name buffer.
|
||||
var name []byte
|
||||
sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
|
||||
sh.Cap = ml
|
||||
sh.Len = ml
|
||||
sh.Data = uintptr(unsafe.Pointer(&de.Name[0]))
|
||||
|
||||
if index := bytes.IndexByte(name, 0); index >= 0 {
|
||||
// Found NULL byte; set slice's cap and len accordingly.
|
||||
sh.Cap = index
|
||||
sh.Len = index
|
||||
}
|
||||
|
||||
return name
|
||||
}
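
The same slice-header trick used by both nameFromDirent variants can be demonstrated in isolation. The sketch below stands on its own (the buffer and its contents are illustrative) and mirrors the NUL-byte trimming performed in the withoutNamlen.go variant:

```go
package main

import (
	"bytes"
	"fmt"
	"reflect"
	"unsafe"
)

func main() {
	// A NUL-terminated int8 buffer, standing in for syscall.Dirent.Name.
	var buf [16]int8
	for i, c := range []byte("hello") {
		buf[i] = int8(c)
	}

	// Point a []byte at the buffer without copying, exactly as above.
	var name []byte
	sh := (*reflect.SliceHeader)(unsafe.Pointer(&name))
	sh.Cap = len(buf)
	sh.Len = len(buf)
	sh.Data = uintptr(unsafe.Pointer(&buf[0]))

	// Trim at the first NUL byte, mirroring the withoutNamlen.go variant.
	if i := bytes.IndexByte(name, 0); i >= 0 {
		name = name[:i]
	}
	fmt.Printf("%q\n", name) // "hello"
}
```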
|
||||
25
vendor/github.com/labstack/echo/.editorconfig
generated
vendored
Normal file
@@ -0,0 +1,25 @@
|
||||
# EditorConfig coding styles definitions. For more information about the
|
||||
# properties used in this file, please see the EditorConfig documentation:
|
||||
# http://editorconfig.org/
|
||||
|
||||
# indicate this is the root of the project
|
||||
root = true
|
||||
|
||||
[*]
|
||||
charset = utf-8
|
||||
|
||||
end_of_line = LF
|
||||
insert_final_newline = true
|
||||
trim_trailing_whitespace = true
|
||||
|
||||
indent_style = space
|
||||
indent_size = 2
|
||||
|
||||
[Makefile]
|
||||
indent_style = tab
|
||||
|
||||
[*.md]
|
||||
trim_trailing_whitespace = false
|
||||
|
||||
[*.go]
|
||||
indent_style = tab
|
||||
20
vendor/github.com/labstack/echo/.gitattributes
generated
vendored
Normal file
@@ -0,0 +1,20 @@
|
||||
# Automatically normalize line endings for all text-based files
|
||||
# http://git-scm.com/docs/gitattributes#_end_of_line_conversion
|
||||
* text=auto
|
||||
|
||||
# For the following file types, normalize line endings to LF on checkin and
|
||||
# prevent conversion to CRLF when they are checked out (this is required in
|
||||
# order to prevent newline related issues)
|
||||
.* text eol=lf
|
||||
*.go text eol=lf
|
||||
*.yml text eol=lf
|
||||
*.html text eol=lf
|
||||
*.css text eol=lf
|
||||
*.js text eol=lf
|
||||
*.json text eol=lf
|
||||
LICENSE text eol=lf
|
||||
|
||||
# Exclude `website` and `cookbook` from GitHub's language statistics
|
||||
# https://github.com/github/linguist#using-gitattributes
|
||||
cookbook/* linguist-documentation
|
||||
website/* linguist-documentation
|
||||
7
vendor/github.com/labstack/echo/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,7 @@
|
||||
.DS_Store
|
||||
coverage.txt
|
||||
_test
|
||||
vendor
|
||||
.idea
|
||||
*.iml
|
||||
*.out
|
||||
16
vendor/github.com/labstack/echo/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,16 @@
|
||||
language: go
|
||||
go:
|
||||
- 1.11.x
|
||||
- tip
|
||||
env:
|
||||
- GO111MODULE=on
|
||||
install:
|
||||
- go get -v golang.org/x/lint/golint
|
||||
script:
|
||||
- golint -set_exit_status ./...
|
||||
- go test -race -coverprofile=coverage.txt -covermode=atomic ./...
|
||||
after_success:
|
||||
- bash <(curl -s https://codecov.io/bash)
|
||||
matrix:
|
||||
allow_failures:
|
||||
- go: tip
|
||||
21
vendor/github.com/labstack/echo/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,21 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2017 LabStack
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
3
vendor/github.com/labstack/echo/Makefile
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
tag:
|
||||
@git tag `grep -P '^\tversion = ' echo.go|cut -f2 -d'"'`
|
||||
@git tag|grep -v ^v
|
||||
99
vendor/github.com/labstack/echo/README.md
generated
vendored
Normal file
@@ -0,0 +1,99 @@
|
||||
<a href="https://echo.labstack.com"><img height="80" src="https://cdn.labstack.com/images/echo-logo.svg"></a>
|
||||
|
||||
[](https://sourcegraph.com/github.com/labstack/echo?badge)
|
||||
[](http://godoc.org/github.com/labstack/echo)
|
||||
[](https://goreportcard.com/report/github.com/labstack/echo)
|
||||
[](https://travis-ci.org/labstack/echo)
|
||||
[](https://codecov.io/gh/labstack/echo)
|
||||
[](https://gitter.im/labstack/echo)
|
||||
[](https://forum.labstack.com)
|
||||
[](https://twitter.com/labstack)
|
||||
[](https://raw.githubusercontent.com/labstack/echo/master/LICENSE)
|
||||
|
||||
## Feature Overview
|
||||
|
||||
- Optimized HTTP router which smartly prioritizes routes
|
||||
- Build robust and scalable RESTful APIs
|
||||
- Group APIs
|
||||
- Extensible middleware framework
|
||||
- Define middleware at root, group or route level
|
||||
- Data binding for JSON, XML and form payload
|
||||
- Handy functions to send a variety of HTTP responses
|
||||
- Centralized HTTP error handling
|
||||
- Template rendering with any template engine
|
||||
- Define your format for the logger
|
||||
- Highly customizable
|
||||
- Automatic TLS via Let’s Encrypt
|
||||
- HTTP/2 support
|
||||
|
||||
## Benchmarks
|
||||
|
||||
Date: 2018/03/15<br>
|
||||
Source: https://github.com/vishr/web-framework-benchmark<br>
|
||||
Lower is better!
|
||||
|
||||
<img src="https://i.imgur.com/I32VdMJ.png">
|
||||
|
||||
## [Guide](https://echo.labstack.com/guide)
|
||||
|
||||
### Example
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/labstack/echo"
|
||||
"github.com/labstack/echo/middleware"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Echo instance
|
||||
e := echo.New()
|
||||
|
||||
// Middleware
|
||||
e.Use(middleware.Logger())
|
||||
e.Use(middleware.Recover())
|
||||
|
||||
// Routes
|
||||
e.GET("/", hello)
|
||||
|
||||
// Start server
|
||||
e.Logger.Fatal(e.Start(":1323"))
|
||||
}
|
||||
|
||||
// Handler
|
||||
func hello(c echo.Context) error {
|
||||
return c.String(http.StatusOK, "Hello, World!")
|
||||
}
|
||||
```
|
||||
|
||||
## Help
|
||||
|
||||
- [Forum](https://forum.labstack.com)
|
||||
- [Chat](https://gitter.im/labstack/echo)
|
||||
|
||||
## Contribute
|
||||
|
||||
**Use issues for everything**
|
||||
|
||||
- For a small change, just send a PR.
|
||||
- For bigger changes, open an issue for discussion before sending a PR.
|
||||
- PR should have:
|
||||
- Test case
|
||||
- Documentation
|
||||
- Example (If it makes sense)
|
||||
- You can also contribute by:
|
||||
- Reporting issues
|
||||
- Suggesting new features or enhancements
|
||||
  - Improving/fixing documentation
|
||||
|
||||
## Credits
|
||||
- [Vishal Rana](https://github.com/vishr) - Author
|
||||
- [Nitin Rana](https://github.com/nr17) - Consultant
|
||||
- [Contributors](https://github.com/labstack/echo/graphs/contributors)
|
||||
|
||||
## License
|
||||
|
||||
[MIT](https://github.com/labstack/echo/blob/master/LICENSE)
|
||||
277
vendor/github.com/labstack/echo/bind.go
generated
vendored
Normal file
@@ -0,0 +1,277 @@
|
||||
package echo
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"encoding/xml"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type (
|
||||
// Binder is the interface that wraps the Bind method.
|
||||
Binder interface {
|
||||
Bind(i interface{}, c Context) error
|
||||
}
|
||||
|
||||
// DefaultBinder is the default implementation of the Binder interface.
|
||||
DefaultBinder struct{}
|
||||
|
||||
// BindUnmarshaler is the interface used to wrap the UnmarshalParam method.
|
||||
BindUnmarshaler interface {
|
||||
// UnmarshalParam decodes and assigns a value from a form or query param.
|
||||
UnmarshalParam(param string) error
|
||||
}
|
||||
)
|
||||
|
||||
// Bind implements the `Binder#Bind` function.
|
||||
func (b *DefaultBinder) Bind(i interface{}, c Context) (err error) {
|
||||
req := c.Request()
|
||||
if req.ContentLength == 0 {
|
||||
if req.Method == http.MethodGet || req.Method == http.MethodDelete {
|
||||
if err = b.bindData(i, c.QueryParams(), "query"); err != nil {
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error()).SetInternal(err)
|
||||
}
|
||||
return
|
||||
}
|
||||
return NewHTTPError(http.StatusBadRequest, "Request body can't be empty")
|
||||
}
|
||||
ctype := req.Header.Get(HeaderContentType)
|
||||
switch {
|
||||
case strings.HasPrefix(ctype, MIMEApplicationJSON):
|
||||
if err = json.NewDecoder(req.Body).Decode(i); err != nil {
|
||||
if ute, ok := err.(*json.UnmarshalTypeError); ok {
|
||||
return NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Unmarshal type error: expected=%v, got=%v, field=%v, offset=%v", ute.Type, ute.Value, ute.Field, ute.Offset)).SetInternal(err)
|
||||
} else if se, ok := err.(*json.SyntaxError); ok {
|
||||
return NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Syntax error: offset=%v, error=%v", se.Offset, se.Error())).SetInternal(err)
|
||||
} else {
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error()).SetInternal(err)
|
||||
}
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error())
|
||||
}
|
||||
case strings.HasPrefix(ctype, MIMEApplicationXML), strings.HasPrefix(ctype, MIMETextXML):
|
||||
if err = xml.NewDecoder(req.Body).Decode(i); err != nil {
|
||||
if ute, ok := err.(*xml.UnsupportedTypeError); ok {
|
||||
return NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Unsupported type error: type=%v, error=%v", ute.Type, ute.Error())).SetInternal(err)
|
||||
} else if se, ok := err.(*xml.SyntaxError); ok {
|
||||
return NewHTTPError(http.StatusBadRequest, fmt.Sprintf("Syntax error: line=%v, error=%v", se.Line, se.Error())).SetInternal(err)
|
||||
} else {
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error()).SetInternal(err)
|
||||
}
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error())
|
||||
}
|
||||
case strings.HasPrefix(ctype, MIMEApplicationForm), strings.HasPrefix(ctype, MIMEMultipartForm):
|
||||
params, err := c.FormParams()
|
||||
if err != nil {
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error()).SetInternal(err)
|
||||
}
|
||||
if err = b.bindData(i, params, "form"); err != nil {
|
||||
return NewHTTPError(http.StatusBadRequest, err.Error()).SetInternal(err)
|
||||
}
|
||||
default:
|
||||
return ErrUnsupportedMediaType
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (b *DefaultBinder) bindData(ptr interface{}, data map[string][]string, tag string) error {
|
||||
typ := reflect.TypeOf(ptr).Elem()
|
||||
val := reflect.ValueOf(ptr).Elem()
|
||||
|
||||
if typ.Kind() != reflect.Struct {
|
||||
return errors.New("binding element must be a struct")
|
||||
}
|
||||
|
||||
for i := 0; i < typ.NumField(); i++ {
|
||||
typeField := typ.Field(i)
|
||||
structField := val.Field(i)
|
||||
if !structField.CanSet() {
|
||||
continue
|
||||
}
|
||||
structFieldKind := structField.Kind()
|
||||
inputFieldName := typeField.Tag.Get(tag)
|
||||
|
||||
if inputFieldName == "" {
|
||||
inputFieldName = typeField.Name
|
||||
// If the tag is empty, we inspect whether the field is a struct.
|
||||
if _, ok := bindUnmarshaler(structField); !ok && structFieldKind == reflect.Struct {
|
||||
if err := b.bindData(structField.Addr().Interface(), data, tag); err != nil {
|
||||
return err
|
||||
}
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
inputValue, exists := data[inputFieldName]
|
||||
if !exists {
|
||||
// Go json.Unmarshal supports case-insensitive binding. However, the
|
||||
// URL params are bound case-sensitively, which is inconsistent. To
|
||||
// fix this, we must check all of the map values in a
|
||||
// case-insensitive search.
|
||||
inputFieldName = strings.ToLower(inputFieldName)
|
||||
for k, v := range data {
|
||||
if strings.ToLower(k) == inputFieldName {
|
||||
inputValue = v
|
||||
exists = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if !exists {
|
||||
continue
|
||||
}
|
||||
|
||||
// Call this first, in case we're dealing with an alias to an array type
|
||||
if ok, err := unmarshalField(typeField.Type.Kind(), inputValue[0], structField); ok {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
numElems := len(inputValue)
|
||||
if structFieldKind == reflect.Slice && numElems > 0 {
|
||||
sliceOf := structField.Type().Elem().Kind()
|
||||
slice := reflect.MakeSlice(structField.Type(), numElems, numElems)
|
||||
for j := 0; j < numElems; j++ {
|
||||
if err := setWithProperType(sliceOf, inputValue[j], slice.Index(j)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
val.Field(i).Set(slice)
|
||||
} else if err := setWithProperType(typeField.Type.Kind(), inputValue[0], structField); err != nil {
|
||||
return err
|
||||
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func setWithProperType(valueKind reflect.Kind, val string, structField reflect.Value) error {
|
||||
// But also call it here, in case we're dealing with an array of BindUnmarshalers
|
||||
if ok, err := unmarshalField(valueKind, val, structField); ok {
|
||||
return err
|
||||
}
|
||||
|
||||
switch valueKind {
|
||||
case reflect.Ptr:
|
||||
return setWithProperType(structField.Elem().Kind(), val, structField.Elem())
|
||||
case reflect.Int:
|
||||
return setIntField(val, 0, structField)
|
||||
case reflect.Int8:
|
||||
return setIntField(val, 8, structField)
|
||||
case reflect.Int16:
|
||||
return setIntField(val, 16, structField)
|
||||
case reflect.Int32:
|
||||
return setIntField(val, 32, structField)
|
||||
case reflect.Int64:
|
||||
return setIntField(val, 64, structField)
|
||||
case reflect.Uint:
|
||||
return setUintField(val, 0, structField)
|
||||
case reflect.Uint8:
|
||||
return setUintField(val, 8, structField)
|
||||
case reflect.Uint16:
|
||||
return setUintField(val, 16, structField)
|
||||
case reflect.Uint32:
|
||||
return setUintField(val, 32, structField)
|
||||
case reflect.Uint64:
|
||||
return setUintField(val, 64, structField)
|
||||
case reflect.Bool:
|
||||
return setBoolField(val, structField)
|
||||
case reflect.Float32:
|
||||
return setFloatField(val, 32, structField)
|
||||
case reflect.Float64:
|
||||
return setFloatField(val, 64, structField)
|
||||
case reflect.String:
|
||||
structField.SetString(val)
|
||||
default:
|
||||
return errors.New("unknown type")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func unmarshalField(valueKind reflect.Kind, val string, field reflect.Value) (bool, error) {
|
||||
switch valueKind {
|
||||
case reflect.Ptr:
|
||||
return unmarshalFieldPtr(val, field)
|
||||
default:
|
||||
return unmarshalFieldNonPtr(val, field)
|
||||
}
|
||||
}
|
||||
|
||||
// bindUnmarshaler attempts to unmarshal a reflect.Value into a BindUnmarshaler
|
||||
func bindUnmarshaler(field reflect.Value) (BindUnmarshaler, bool) {
|
||||
ptr := reflect.New(field.Type())
|
||||
if ptr.CanInterface() {
|
||||
iface := ptr.Interface()
|
||||
if unmarshaler, ok := iface.(BindUnmarshaler); ok {
|
||||
return unmarshaler, ok
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func unmarshalFieldNonPtr(value string, field reflect.Value) (bool, error) {
|
||||
if unmarshaler, ok := bindUnmarshaler(field); ok {
|
||||
err := unmarshaler.UnmarshalParam(value)
|
||||
field.Set(reflect.ValueOf(unmarshaler).Elem())
|
||||
return true, err
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
func unmarshalFieldPtr(value string, field reflect.Value) (bool, error) {
|
||||
if field.IsNil() {
|
||||
// Initialize the nil pointer with a new value of its element type
|
||||
field.Set(reflect.New(field.Type().Elem()))
|
||||
}
|
||||
return unmarshalFieldNonPtr(value, field.Elem())
|
||||
}
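
For reference, a custom type typically satisfies the BindUnmarshaler hook above along these lines; the Date type and its layout string are illustrative, not part of the package:

```go
package example

import "time"

// Date decodes "YYYY-MM-DD" form or query values during binding.
type Date struct {
	time.Time
}

// UnmarshalParam implements the BindUnmarshaler interface shown above.
func (d *Date) UnmarshalParam(param string) error {
	t, err := time.Parse("2006-01-02", param)
	if err != nil {
		return err
	}
	d.Time = t
	return nil
}
```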
|
||||
|
||||
func setIntField(value string, bitSize int, field reflect.Value) error {
|
||||
if value == "" {
|
||||
value = "0"
|
||||
}
|
||||
intVal, err := strconv.ParseInt(value, 10, bitSize)
|
||||
if err == nil {
|
||||
field.SetInt(intVal)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func setUintField(value string, bitSize int, field reflect.Value) error {
|
||||
if value == "" {
|
||||
value = "0"
|
||||
}
|
||||
uintVal, err := strconv.ParseUint(value, 10, bitSize)
|
||||
if err == nil {
|
||||
field.SetUint(uintVal)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func setBoolField(value string, field reflect.Value) error {
|
||||
if value == "" {
|
||||
value = "false"
|
||||
}
|
||||
boolVal, err := strconv.ParseBool(value)
|
||||
if err == nil {
|
||||
field.SetBool(boolVal)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func setFloatField(value string, bitSize int, field reflect.Value) error {
|
||||
if value == "" {
|
||||
value = "0.0"
|
||||
}
|
||||
floatVal, err := strconv.ParseFloat(value, bitSize)
|
||||
if err == nil {
|
||||
field.SetFloat(floatVal)
|
||||
}
|
||||
return err
|
||||
}
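
Tying bindData to the query-parameter fallback in Bind, a handler sketch such as the following (SearchRequest and its tags are hypothetical) binds ?q= and ?limit= on a GET request with an empty body:

```go
package example

import (
	"net/http"

	"github.com/labstack/echo"
)

// SearchRequest is populated from ?q=...&limit=... via the "query" tag.
type SearchRequest struct {
	Query string `query:"q"`
	Limit int    `query:"limit"`
}

func search(c echo.Context) error {
	var req SearchRequest
	// For a GET request with an empty body, DefaultBinder.Bind falls
	// back to binding the query parameters with the "query" tag.
	if err := c.Bind(&req); err != nil {
		return err
	}
	return c.JSON(http.StatusOK, req)
}
```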
|
||||
600
vendor/github.com/labstack/echo/context.go
generated
vendored
Normal file
@@ -0,0 +1,600 @@
|
||||
package echo
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"io"
|
||||
"mime/multipart"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type (
|
||||
// Context represents the context of the current HTTP request. It holds request and
|
||||
// response objects, path, path parameters, data and registered handler.
|
||||
Context interface {
|
||||
// Request returns `*http.Request`.
|
||||
Request() *http.Request
|
||||
|
||||
// SetRequest sets `*http.Request`.
|
||||
SetRequest(r *http.Request)
|
||||
|
||||
// Response returns `*Response`.
|
||||
Response() *Response
|
||||
|
||||
// IsTLS returns true if HTTP connection is TLS otherwise false.
|
||||
IsTLS() bool
|
||||
|
||||
// IsWebSocket returns true if HTTP connection is WebSocket otherwise false.
|
||||
IsWebSocket() bool
|
||||
|
||||
// Scheme returns the HTTP protocol scheme, `http` or `https`.
|
||||
Scheme() string
|
||||
|
||||
// RealIP returns the client's network address based on `X-Forwarded-For`
|
||||
// or `X-Real-IP` request header.
|
||||
RealIP() string
|
||||
|
||||
// Path returns the registered path for the handler.
|
||||
Path() string
|
||||
|
||||
// SetPath sets the registered path for the handler.
|
||||
SetPath(p string)
|
||||
|
||||
// Param returns path parameter by name.
|
||||
Param(name string) string
|
||||
|
||||
// ParamNames returns path parameter names.
|
||||
ParamNames() []string
|
||||
|
||||
// SetParamNames sets path parameter names.
|
||||
SetParamNames(names ...string)
|
||||
|
||||
// ParamValues returns path parameter values.
|
||||
ParamValues() []string
|
||||
|
||||
// SetParamValues sets path parameter values.
|
||||
SetParamValues(values ...string)
|
||||
|
||||
// QueryParam returns the query param for the provided name.
|
||||
QueryParam(name string) string
|
||||
|
||||
// QueryParams returns the query parameters as `url.Values`.
|
||||
QueryParams() url.Values
|
||||
|
||||
// QueryString returns the URL query string.
|
||||
QueryString() string
|
||||
|
||||
// FormValue returns the form field value for the provided name.
|
||||
FormValue(name string) string
|
||||
|
||||
// FormParams returns the form parameters as `url.Values`.
|
||||
FormParams() (url.Values, error)
|
||||
|
||||
// FormFile returns the multipart form file for the provided name.
|
||||
FormFile(name string) (*multipart.FileHeader, error)
|
||||
|
||||
// MultipartForm returns the multipart form.
|
||||
MultipartForm() (*multipart.Form, error)
|
||||
|
||||
// Cookie returns the named cookie provided in the request.
|
||||
Cookie(name string) (*http.Cookie, error)
|
||||
|
||||
// SetCookie adds a `Set-Cookie` header in HTTP response.
|
||||
SetCookie(cookie *http.Cookie)
|
||||
|
||||
// Cookies returns the HTTP cookies sent with the request.
|
||||
Cookies() []*http.Cookie
|
||||
|
||||
// Get retrieves data from the context.
|
||||
Get(key string) interface{}
|
||||
|
||||
// Set saves data in the context.
|
||||
Set(key string, val interface{})
|
||||
|
||||
// Bind binds the request body into provided type `i`. The default binder
|
||||
// does it based on Content-Type header.
|
||||
Bind(i interface{}) error
|
||||
|
||||
// Validate validates provided `i`. It is usually called after `Context#Bind()`.
|
||||
// Validator must be registered using `Echo#Validator`.
|
||||
Validate(i interface{}) error
|
||||
|
||||
// Render renders a template with data and sends a text/html response with status
|
||||
// code. Renderer must be registered using `Echo.Renderer`.
|
||||
Render(code int, name string, data interface{}) error
|
||||
|
||||
// HTML sends an HTTP response with status code.
|
||||
HTML(code int, html string) error
|
||||
|
||||
// HTMLBlob sends an HTTP blob response with status code.
|
||||
HTMLBlob(code int, b []byte) error
|
||||
|
||||
// String sends a string response with status code.
|
||||
String(code int, s string) error
|
||||
|
||||
// JSON sends a JSON response with status code.
|
||||
JSON(code int, i interface{}) error
|
||||
|
||||
// JSONPretty sends a pretty-print JSON with status code.
|
||||
JSONPretty(code int, i interface{}, indent string) error
|
||||
|
||||
// JSONBlob sends a JSON blob response with status code.
|
||||
JSONBlob(code int, b []byte) error
|
||||
|
||||
// JSONP sends a JSONP response with status code. It uses `callback` to construct
|
||||
// the JSONP payload.
|
||||
JSONP(code int, callback string, i interface{}) error
|
||||
|
||||
// JSONPBlob sends a JSONP blob response with status code. It uses `callback`
|
||||
// to construct the JSONP payload.
|
||||
JSONPBlob(code int, callback string, b []byte) error
|
||||
|
||||
// XML sends an XML response with status code.
|
||||
XML(code int, i interface{}) error
|
||||
|
||||
// XMLPretty sends a pretty-print XML with status code.
|
||||
XMLPretty(code int, i interface{}, indent string) error
|
||||
|
||||
// XMLBlob sends an XML blob response with status code.
|
||||
XMLBlob(code int, b []byte) error
|
||||
|
||||
// Blob sends a blob response with status code and content type.
|
||||
Blob(code int, contentType string, b []byte) error
|
||||
|
||||
// Stream sends a streaming response with status code and content type.
|
||||
Stream(code int, contentType string, r io.Reader) error
|
||||
|
||||
// File sends a response with the content of the file.
|
||||
File(file string) error
|
||||
|
||||
// Attachment sends a response as attachment, prompting client to save the
|
||||
// file.
|
||||
Attachment(file string, name string) error
|
||||
|
||||
// Inline sends a response as inline, opening the file in the browser.
|
||||
Inline(file string, name string) error
|
||||
|
||||
// NoContent sends a response with no body and a status code.
|
||||
NoContent(code int) error
|
||||
|
||||
// Redirect redirects the request to a provided URL with status code.
|
||||
Redirect(code int, url string) error
|
||||
|
||||
// Error invokes the registered HTTP error handler. Generally used by middleware.
|
||||
Error(err error)
|
||||
|
||||
// Handler returns the matched handler by router.
|
||||
Handler() HandlerFunc
|
||||
|
||||
// SetHandler sets the matched handler by router.
|
||||
SetHandler(h HandlerFunc)
|
||||
|
||||
// Logger returns the `Logger` instance.
|
||||
Logger() Logger
|
||||
|
||||
// Echo returns the `Echo` instance.
|
||||
Echo() *Echo
|
||||
|
||||
// Reset resets the context after request completes. It must be called along
|
||||
// with `Echo#AcquireContext()` and `Echo#ReleaseContext()`.
|
||||
// See `Echo#ServeHTTP()`
|
||||
Reset(r *http.Request, w http.ResponseWriter)
|
||||
}
|
||||
|
||||
context struct {
|
||||
request *http.Request
|
||||
response *Response
|
||||
path string
|
||||
pnames []string
|
||||
pvalues []string
|
||||
query url.Values
|
||||
handler HandlerFunc
|
||||
store Map
|
||||
echo *Echo
|
||||
}
|
||||
)
|
||||
|
||||
const (
|
||||
defaultMemory = 32 << 20 // 32 MB
|
||||
indexPage = "index.html"
|
||||
defaultIndent = " "
|
||||
)
|
||||
|
||||
func (c *context) writeContentType(value string) {
|
||||
header := c.Response().Header()
|
||||
if header.Get(HeaderContentType) == "" {
|
||||
header.Set(HeaderContentType, value)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *context) Request() *http.Request {
|
||||
return c.request
|
||||
}
|
||||
|
||||
func (c *context) SetRequest(r *http.Request) {
|
||||
c.request = r
|
||||
}
|
||||
|
||||
func (c *context) Response() *Response {
|
||||
return c.response
|
||||
}
|
||||
|
||||
func (c *context) IsTLS() bool {
|
||||
return c.request.TLS != nil
|
||||
}
|
||||
|
||||
func (c *context) IsWebSocket() bool {
|
||||
upgrade := c.request.Header.Get(HeaderUpgrade)
|
||||
return upgrade == "websocket" || upgrade == "Websocket"
|
||||
}
|
||||
|
||||
func (c *context) Scheme() string {
|
||||
// Can't use `r.Request.URL.Scheme`
|
||||
// See: https://groups.google.com/forum/#!topic/golang-nuts/pMUkBlQBDF0
|
||||
if c.IsTLS() {
|
||||
return "https"
|
||||
}
|
||||
if scheme := c.request.Header.Get(HeaderXForwardedProto); scheme != "" {
|
||||
return scheme
|
||||
}
|
||||
if scheme := c.request.Header.Get(HeaderXForwardedProtocol); scheme != "" {
|
||||
return scheme
|
||||
}
|
||||
if ssl := c.request.Header.Get(HeaderXForwardedSsl); ssl == "on" {
|
||||
return "https"
|
||||
}
|
||||
if scheme := c.request.Header.Get(HeaderXUrlScheme); scheme != "" {
|
||||
return scheme
|
||||
}
|
||||
return "http"
|
||||
}
|
||||
|
||||
func (c *context) RealIP() string {
|
||||
if ip := c.request.Header.Get(HeaderXForwardedFor); ip != "" {
|
||||
return strings.Split(ip, ", ")[0]
|
||||
}
|
||||
if ip := c.request.Header.Get(HeaderXRealIP); ip != "" {
|
||||
return ip
|
||||
}
|
||||
ra, _, _ := net.SplitHostPort(c.request.RemoteAddr)
|
||||
return ra
|
||||
}
|
||||
|
||||
func (c *context) Path() string {
|
||||
return c.path
|
||||
}
|
||||
|
||||
func (c *context) SetPath(p string) {
|
||||
c.path = p
|
||||
}
|
||||
|
||||
func (c *context) Param(name string) string {
|
||||
for i, n := range c.pnames {
|
||||
if i < len(c.pvalues) {
|
||||
if n == name {
|
||||
return c.pvalues[i]
|
||||
}
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (c *context) ParamNames() []string {
|
||||
return c.pnames
|
||||
}
|
||||
|
||||
func (c *context) SetParamNames(names ...string) {
|
||||
c.pnames = names
|
||||
}
|
||||
|
||||
func (c *context) ParamValues() []string {
|
||||
return c.pvalues[:len(c.pnames)]
|
||||
}
|
||||
|
||||
func (c *context) SetParamValues(values ...string) {
|
||||
c.pvalues = values
|
||||
}
|
||||
|
||||
func (c *context) QueryParam(name string) string {
|
||||
if c.query == nil {
|
||||
c.query = c.request.URL.Query()
|
||||
}
|
||||
return c.query.Get(name)
|
||||
}
|
||||
|
||||
func (c *context) QueryParams() url.Values {
|
||||
if c.query == nil {
|
||||
c.query = c.request.URL.Query()
|
||||
}
|
||||
return c.query
|
||||
}
|
||||
|
||||
func (c *context) QueryString() string {
|
||||
return c.request.URL.RawQuery
|
||||
}
|
||||
|
||||
func (c *context) FormValue(name string) string {
|
||||
return c.request.FormValue(name)
|
||||
}
|
||||
|
||||
func (c *context) FormParams() (url.Values, error) {
|
||||
if strings.HasPrefix(c.request.Header.Get(HeaderContentType), MIMEMultipartForm) {
|
||||
if err := c.request.ParseMultipartForm(defaultMemory); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
if err := c.request.ParseForm(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return c.request.Form, nil
|
||||
}
|
||||
|
||||
func (c *context) FormFile(name string) (*multipart.FileHeader, error) {
|
||||
_, fh, err := c.request.FormFile(name)
|
||||
return fh, err
|
||||
}
|
||||
|
||||
func (c *context) MultipartForm() (*multipart.Form, error) {
|
||||
err := c.request.ParseMultipartForm(defaultMemory)
|
||||
return c.request.MultipartForm, err
|
||||
}
|
||||
|
||||
func (c *context) Cookie(name string) (*http.Cookie, error) {
|
||||
return c.request.Cookie(name)
|
||||
}
|
||||
|
||||
func (c *context) SetCookie(cookie *http.Cookie) {
|
||||
http.SetCookie(c.Response(), cookie)
|
||||
}
|
||||
|
||||
func (c *context) Cookies() []*http.Cookie {
|
||||
return c.request.Cookies()
|
||||
}
|
||||
|
||||
func (c *context) Get(key string) interface{} {
|
||||
return c.store[key]
|
||||
}
|
||||
|
||||
func (c *context) Set(key string, val interface{}) {
|
||||
if c.store == nil {
|
||||
c.store = make(Map)
|
||||
}
|
||||
c.store[key] = val
|
||||
}
|
||||
|
||||
func (c *context) Bind(i interface{}) error {
|
||||
return c.echo.Binder.Bind(i, c)
|
||||
}
|
||||
|
||||
func (c *context) Validate(i interface{}) error {
|
||||
if c.echo.Validator == nil {
|
||||
return ErrValidatorNotRegistered
|
||||
}
|
||||
return c.echo.Validator.Validate(i)
|
||||
}
|
||||
|
||||
func (c *context) Render(code int, name string, data interface{}) (err error) {
|
||||
if c.echo.Renderer == nil {
|
||||
return ErrRendererNotRegistered
|
||||
}
|
||||
buf := new(bytes.Buffer)
|
||||
if err = c.echo.Renderer.Render(buf, name, data, c); err != nil {
|
||||
return
|
||||
}
|
||||
return c.HTMLBlob(code, buf.Bytes())
|
||||
}
|
||||
|
||||
func (c *context) HTML(code int, html string) (err error) {
|
||||
return c.HTMLBlob(code, []byte(html))
|
||||
}
|
||||
|
||||
func (c *context) HTMLBlob(code int, b []byte) (err error) {
|
||||
return c.Blob(code, MIMETextHTMLCharsetUTF8, b)
|
||||
}
|
||||
|
||||
func (c *context) String(code int, s string) (err error) {
|
||||
return c.Blob(code, MIMETextPlainCharsetUTF8, []byte(s))
|
||||
}
|
||||
|
||||
func (c *context) jsonPBlob(code int, callback string, i interface{}) (err error) {
|
||||
enc := json.NewEncoder(c.response)
|
||||
_, pretty := c.QueryParams()["pretty"]
|
||||
if c.echo.Debug || pretty {
|
||||
enc.SetIndent("", " ")
|
||||
}
|
||||
c.writeContentType(MIMEApplicationJavaScriptCharsetUTF8)
|
||||
c.response.WriteHeader(code)
|
||||
if _, err = c.response.Write([]byte(callback + "(")); err != nil {
|
||||
return
|
||||
}
|
||||
if err = enc.Encode(i); err != nil {
|
||||
return
|
||||
}
|
||||
if _, err = c.response.Write([]byte(");")); err != nil {
|
||||
return
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (c *context) json(code int, i interface{}, indent string) error {
|
||||
enc := json.NewEncoder(c.response)
|
||||
if indent != "" {
|
||||
enc.SetIndent("", indent)
|
||||
}
|
||||
c.writeContentType(MIMEApplicationJSONCharsetUTF8)
|
||||
c.response.WriteHeader(code)
|
||||
return enc.Encode(i)
|
||||
}
|
||||
|
||||
func (c *context) JSON(code int, i interface{}) (err error) {
|
||||
indent := ""
|
||||
if _, pretty := c.QueryParams()["pretty"]; c.echo.Debug || pretty {
|
||||
indent = defaultIndent
|
||||
}
|
||||
return c.json(code, i, indent)
|
||||
}
|
||||
|
||||
func (c *context) JSONPretty(code int, i interface{}, indent string) (err error) {
|
||||
return c.json(code, i, indent)
|
||||
}
|
||||
|
||||
func (c *context) JSONBlob(code int, b []byte) (err error) {
|
||||
return c.Blob(code, MIMEApplicationJSONCharsetUTF8, b)
|
||||
}
|
||||
|
||||
func (c *context) JSONP(code int, callback string, i interface{}) (err error) {
|
||||
return c.jsonPBlob(code, callback, i)
|
||||
}
|
||||
|
||||
func (c *context) JSONPBlob(code int, callback string, b []byte) (err error) {
|
||||
c.writeContentType(MIMEApplicationJavaScriptCharsetUTF8)
|
||||
c.response.WriteHeader(code)
|
||||
if _, err = c.response.Write([]byte(callback + "(")); err != nil {
|
||||
return
|
||||
}
|
||||
if _, err = c.response.Write(b); err != nil {
|
||||
return
|
||||
}
|
||||
_, err = c.response.Write([]byte(");"))
|
||||
return
|
||||
}
|
||||
|
||||
func (c *context) xml(code int, i interface{}, indent string) (err error) {
|
||||
c.writeContentType(MIMEApplicationXMLCharsetUTF8)
|
||||
c.response.WriteHeader(code)
|
||||
enc := xml.NewEncoder(c.response)
|
||||
if indent != "" {
|
||||
enc.Indent("", indent)
|
||||
}
|
||||
if _, err = c.response.Write([]byte(xml.Header)); err != nil {
|
||||
return
|
||||
}
|
||||
return enc.Encode(i)
|
||||
}
|
||||
|
||||
func (c *context) XML(code int, i interface{}) (err error) {
|
||||
indent := ""
|
||||
if _, pretty := c.QueryParams()["pretty"]; c.echo.Debug || pretty {
|
||||
indent = defaultIndent
|
||||
}
|
||||
return c.xml(code, i, indent)
|
||||
}
|
||||
|
||||
func (c *context) XMLPretty(code int, i interface{}, indent string) (err error) {
|
||||
return c.xml(code, i, indent)
|
||||
}
|
||||
|
||||
func (c *context) XMLBlob(code int, b []byte) (err error) {
|
||||
c.writeContentType(MIMEApplicationXMLCharsetUTF8)
|
||||
c.response.WriteHeader(code)
|
||||
if _, err = c.response.Write([]byte(xml.Header)); err != nil {
|
||||
return
|
||||
}
|
||||
_, err = c.response.Write(b)
|
||||
return
|
||||
}
|
||||
|
||||
func (c *context) Blob(code int, contentType string, b []byte) (err error) {
|
||||
c.writeContentType(contentType)
|
||||
c.response.WriteHeader(code)
|
||||
_, err = c.response.Write(b)
|
||||
return
|
||||
}
|
||||
|
||||
func (c *context) Stream(code int, contentType string, r io.Reader) (err error) {
|
||||
c.writeContentType(contentType)
|
||||
c.response.WriteHeader(code)
|
||||
_, err = io.Copy(c.response, r)
|
||||
return
|
||||
}
|
||||
|
||||
func (c *context) File(file string) (err error) {
|
||||
f, err := os.Open(file)
|
||||
if err != nil {
|
||||
return NotFoundHandler(c)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
fi, _ := f.Stat()
|
||||
if fi.IsDir() {
|
||||
file = filepath.Join(file, indexPage)
|
||||
f, err = os.Open(file)
|
||||
if err != nil {
|
||||
return NotFoundHandler(c)
|
||||
}
|
||||
defer f.Close()
|
||||
if fi, err = f.Stat(); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
http.ServeContent(c.Response(), c.Request(), fi.Name(), fi.ModTime(), f)
|
||||
return
|
||||
}
|
||||
|
||||
func (c *context) Attachment(file, name string) error {
|
||||
return c.contentDisposition(file, name, "attachment")
|
||||
}
|
||||
|
||||
func (c *context) Inline(file, name string) error {
|
||||
return c.contentDisposition(file, name, "inline")
|
||||
}
|
||||
|
||||
func (c *context) contentDisposition(file, name, dispositionType string) error {
|
||||
c.response.Header().Set(HeaderContentDisposition, fmt.Sprintf("%s; filename=%q", dispositionType, name))
|
||||
return c.File(file)
|
||||
}
|
||||
|
||||
func (c *context) NoContent(code int) error {
|
||||
c.response.WriteHeader(code)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *context) Redirect(code int, url string) error {
|
||||
if code < 300 || code > 308 {
|
||||
return ErrInvalidRedirectCode
|
||||
}
|
||||
c.response.Header().Set(HeaderLocation, url)
|
||||
c.response.WriteHeader(code)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *context) Error(err error) {
|
||||
c.echo.HTTPErrorHandler(err, c)
|
||||
}
|
||||
|
||||
func (c *context) Echo() *Echo {
|
||||
return c.echo
|
||||
}
|
||||
|
||||
func (c *context) Handler() HandlerFunc {
|
||||
return c.handler
|
||||
}
|
||||
|
||||
func (c *context) SetHandler(h HandlerFunc) {
|
||||
c.handler = h
|
||||
}
|
||||
|
||||
func (c *context) Logger() Logger {
|
||||
return c.echo.Logger
|
||||
}
|
||||
|
||||
func (c *context) Reset(r *http.Request, w http.ResponseWriter) {
|
||||
c.request = r
|
||||
c.response.reset(w)
|
||||
c.query = nil
|
||||
c.handler = NotFoundHandler
|
||||
c.store = nil
|
||||
c.path = ""
|
||||
c.pnames = nil
|
||||
// NOTE: Don't reset because it has to have length c.echo.maxParam at all times
|
||||
// c.pvalues = nil
|
||||
}
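
One behavior of the JSON helpers above that is easy to miss: output is indented automatically when the request carries a pretty query parameter or when Debug is enabled. A handler sketch (names illustrative):

```go
package example

import (
	"net/http"

	"github.com/labstack/echo"
)

func listUsers(c echo.Context) error {
	users := []string{"alice", "bob"}
	// c.JSON switches to indented output for "?pretty" or e.Debug,
	// as implemented in context.JSON above.
	return c.JSON(http.StatusOK, users)
}
```

Requesting GET /users?pretty then returns indented JSON with no change to the handler.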
|
||||
|
||||
782
vendor/github.com/labstack/echo/echo.go
generated
vendored
Normal file
@@ -0,0 +1,782 @@
|
||||
/*
|
||||
Package echo implements a high performance, minimalist Go web framework.
|
||||
|
||||
Example:
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/labstack/echo"
|
||||
"github.com/labstack/echo/middleware"
|
||||
)
|
||||
|
||||
// Handler
|
||||
func hello(c echo.Context) error {
|
||||
return c.String(http.StatusOK, "Hello, World!")
|
||||
}
|
||||
|
||||
func main() {
|
||||
// Echo instance
|
||||
e := echo.New()
|
||||
|
||||
// Middleware
|
||||
e.Use(middleware.Logger())
|
||||
e.Use(middleware.Recover())
|
||||
|
||||
// Routes
|
||||
e.GET("/", hello)
|
||||
|
||||
// Start server
|
||||
e.Logger.Fatal(e.Start(":1323"))
|
||||
}
|
||||
|
||||
Learn more at https://echo.labstack.com
|
||||
*/
|
||||
package echo
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
stdContext "context"
|
||||
"crypto/tls"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
stdLog "log"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/labstack/gommon/color"
|
||||
"github.com/labstack/gommon/log"
|
||||
"golang.org/x/crypto/acme/autocert"
|
||||
)
|
||||
|
||||
type (
|
||||
// Echo is the top-level framework instance.
|
||||
Echo struct {
|
||||
StdLogger *stdLog.Logger
|
||||
colorer *color.Color
|
||||
premiddleware []MiddlewareFunc
|
||||
middleware []MiddlewareFunc
|
||||
maxParam *int
|
||||
router *Router
|
||||
notFoundHandler HandlerFunc
|
||||
pool sync.Pool
|
||||
Server *http.Server
|
||||
TLSServer *http.Server
|
||||
Listener net.Listener
|
||||
TLSListener net.Listener
|
||||
AutoTLSManager autocert.Manager
|
||||
DisableHTTP2 bool
|
||||
Debug bool
|
||||
HideBanner bool
|
||||
HidePort bool
|
||||
HTTPErrorHandler HTTPErrorHandler
|
||||
Binder Binder
|
||||
Validator Validator
|
||||
Renderer Renderer
|
||||
Logger Logger
|
||||
}
|
||||
|
||||
// Route contains a handler and information for matching against requests.
|
||||
Route struct {
|
||||
Method string `json:"method"`
|
||||
Path string `json:"path"`
|
||||
Name string `json:"name"`
|
||||
}
|
||||
|
||||
// HTTPError represents an error that occurred while handling a request.
|
||||
HTTPError struct {
|
||||
Code int
|
||||
Message interface{}
|
||||
Internal error // Stores the error returned by an external dependency
|
||||
}
|
||||
|
||||
// MiddlewareFunc defines a function to process middleware.
|
||||
MiddlewareFunc func(HandlerFunc) HandlerFunc
|
||||
|
||||
// HandlerFunc defines a function to serve HTTP requests.
|
||||
HandlerFunc func(Context) error
|
||||
|
||||
// HTTPErrorHandler is a centralized HTTP error handler.
|
||||
HTTPErrorHandler func(error, Context)
|
||||
|
||||
// Validator is the interface that wraps the Validate function.
|
||||
Validator interface {
|
||||
Validate(i interface{}) error
|
||||
}
|
||||
|
||||
// Renderer is the interface that wraps the Render function.
|
||||
Renderer interface {
|
||||
Render(io.Writer, string, interface{}, Context) error
|
||||
}
|
||||
|
||||
// Map defines a generic map of type `map[string]interface{}`.
|
||||
Map map[string]interface{}
|
||||
|
||||
// i is the interface for Echo and Group.
|
||||
i interface {
|
||||
GET(string, HandlerFunc, ...MiddlewareFunc) *Route
|
||||
}
|
||||
)
|
||||
|
||||
// HTTP methods
|
||||
// NOTE: Deprecated, please use the stdlib constants directly instead.
|
||||
const (
|
||||
CONNECT = http.MethodConnect
|
||||
DELETE = http.MethodDelete
|
||||
GET = http.MethodGet
|
||||
HEAD = http.MethodHead
|
||||
OPTIONS = http.MethodOptions
|
||||
PATCH = http.MethodPatch
|
||||
POST = http.MethodPost
|
||||
// PROPFIND = "PROPFIND"
|
||||
PUT = http.MethodPut
|
||||
TRACE = http.MethodTrace
|
||||
)
|
||||
|
||||
// MIME types
|
||||
const (
|
||||
MIMEApplicationJSON = "application/json"
|
||||
MIMEApplicationJSONCharsetUTF8 = MIMEApplicationJSON + "; " + charsetUTF8
|
||||
MIMEApplicationJavaScript = "application/javascript"
|
||||
MIMEApplicationJavaScriptCharsetUTF8 = MIMEApplicationJavaScript + "; " + charsetUTF8
|
||||
MIMEApplicationXML = "application/xml"
|
||||
MIMEApplicationXMLCharsetUTF8 = MIMEApplicationXML + "; " + charsetUTF8
|
||||
MIMETextXML = "text/xml"
|
||||
MIMETextXMLCharsetUTF8 = MIMETextXML + "; " + charsetUTF8
|
||||
MIMEApplicationForm = "application/x-www-form-urlencoded"
|
||||
MIMEApplicationProtobuf = "application/protobuf"
|
||||
MIMEApplicationMsgpack = "application/msgpack"
|
||||
MIMETextHTML = "text/html"
|
||||
MIMETextHTMLCharsetUTF8 = MIMETextHTML + "; " + charsetUTF8
|
||||
MIMETextPlain = "text/plain"
|
||||
MIMETextPlainCharsetUTF8 = MIMETextPlain + "; " + charsetUTF8
|
||||
MIMEMultipartForm = "multipart/form-data"
|
||||
MIMEOctetStream = "application/octet-stream"
|
||||
)
|
||||
|
||||
const (
|
||||
charsetUTF8 = "charset=UTF-8"
|
||||
// PROPFIND Method can be used on collection and property resources.
|
||||
PROPFIND = "PROPFIND"
|
||||
)
|
||||
|
||||
// Headers
|
||||
const (
|
||||
HeaderAccept = "Accept"
|
||||
HeaderAcceptEncoding = "Accept-Encoding"
|
||||
HeaderAllow = "Allow"
|
||||
HeaderAuthorization = "Authorization"
|
||||
HeaderContentDisposition = "Content-Disposition"
|
||||
HeaderContentEncoding = "Content-Encoding"
|
||||
HeaderContentLength = "Content-Length"
|
||||
HeaderContentType = "Content-Type"
|
||||
HeaderCookie = "Cookie"
|
||||
HeaderSetCookie = "Set-Cookie"
|
||||
HeaderIfModifiedSince = "If-Modified-Since"
|
||||
HeaderLastModified = "Last-Modified"
|
||||
HeaderLocation = "Location"
|
||||
HeaderUpgrade = "Upgrade"
|
||||
HeaderVary = "Vary"
|
||||
HeaderWWWAuthenticate = "WWW-Authenticate"
|
||||
HeaderXForwardedFor = "X-Forwarded-For"
|
||||
HeaderXForwardedProto = "X-Forwarded-Proto"
|
||||
HeaderXForwardedProtocol = "X-Forwarded-Protocol"
|
||||
HeaderXForwardedSsl = "X-Forwarded-Ssl"
|
||||
HeaderXUrlScheme = "X-Url-Scheme"
|
||||
HeaderXHTTPMethodOverride = "X-HTTP-Method-Override"
|
||||
HeaderXRealIP = "X-Real-IP"
|
||||
HeaderXRequestID = "X-Request-ID"
|
||||
HeaderXRequestedWith = "X-Requested-With"
|
||||
HeaderServer = "Server"
|
||||
HeaderOrigin = "Origin"
|
||||
|
||||
// Access control
|
||||
HeaderAccessControlRequestMethod = "Access-Control-Request-Method"
|
||||
HeaderAccessControlRequestHeaders = "Access-Control-Request-Headers"
|
||||
HeaderAccessControlAllowOrigin = "Access-Control-Allow-Origin"
|
||||
HeaderAccessControlAllowMethods = "Access-Control-Allow-Methods"
|
||||
HeaderAccessControlAllowHeaders = "Access-Control-Allow-Headers"
|
||||
HeaderAccessControlAllowCredentials = "Access-Control-Allow-Credentials"
|
||||
HeaderAccessControlExposeHeaders = "Access-Control-Expose-Headers"
|
||||
HeaderAccessControlMaxAge = "Access-Control-Max-Age"
|
||||
|
||||
// Security
|
||||
HeaderStrictTransportSecurity = "Strict-Transport-Security"
|
||||
HeaderXContentTypeOptions = "X-Content-Type-Options"
|
||||
HeaderXXSSProtection = "X-XSS-Protection"
|
||||
HeaderXFrameOptions = "X-Frame-Options"
|
||||
HeaderContentSecurityPolicy = "Content-Security-Policy"
|
||||
HeaderXCSRFToken = "X-CSRF-Token"
|
||||
)
|
||||
|
||||
const (
|
||||
// Version of Echo
|
||||
Version = "3.3.10-dev"
|
||||
website = "https://echo.labstack.com"
|
||||
// http://patorjk.com/software/taag/#p=display&f=Small%20Slant&t=Echo
|
||||
banner = `
|
||||
____ __
|
||||
/ __/___/ / ___
|
||||
/ _// __/ _ \/ _ \
|
||||
/___/\__/_//_/\___/ %s
|
||||
High performance, minimalist Go web framework
|
||||
%s
|
||||
____________________________________O/_______
|
||||
O\
|
||||
`
|
||||
)
|
||||
|
||||
var (
|
||||
methods = [...]string{
|
||||
http.MethodConnect,
|
||||
http.MethodDelete,
|
||||
http.MethodGet,
|
||||
http.MethodHead,
|
||||
http.MethodOptions,
|
||||
http.MethodPatch,
|
||||
http.MethodPost,
|
||||
PROPFIND,
|
||||
http.MethodPut,
|
||||
http.MethodTrace,
|
||||
}
|
||||
)
|
||||
|
||||
// Errors
|
||||
var (
|
||||
ErrUnsupportedMediaType = NewHTTPError(http.StatusUnsupportedMediaType)
|
||||
ErrNotFound = NewHTTPError(http.StatusNotFound)
|
||||
ErrUnauthorized = NewHTTPError(http.StatusUnauthorized)
|
||||
ErrForbidden = NewHTTPError(http.StatusForbidden)
|
||||
ErrMethodNotAllowed = NewHTTPError(http.StatusMethodNotAllowed)
|
||||
ErrStatusRequestEntityTooLarge = NewHTTPError(http.StatusRequestEntityTooLarge)
|
||||
ErrTooManyRequests = NewHTTPError(http.StatusTooManyRequests)
|
||||
ErrBadRequest = NewHTTPError(http.StatusBadRequest)
|
||||
ErrBadGateway = NewHTTPError(http.StatusBadGateway)
|
||||
ErrInternalServerError = NewHTTPError(http.StatusInternalServerError)
|
||||
ErrRequestTimeout = NewHTTPError(http.StatusRequestTimeout)
|
||||
ErrServiceUnavailable = NewHTTPError(http.StatusServiceUnavailable)
|
||||
ErrValidatorNotRegistered = errors.New("validator not registered")
|
||||
ErrRendererNotRegistered = errors.New("renderer not registered")
|
||||
ErrInvalidRedirectCode = errors.New("invalid redirect status code")
|
||||
ErrCookieNotFound = errors.New("cookie not found")
|
||||
)
|
||||
|
||||
// Error handlers
|
||||
var (
|
||||
NotFoundHandler = func(c Context) error {
|
||||
return ErrNotFound
|
||||
}
|
||||
|
||||
MethodNotAllowedHandler = func(c Context) error {
|
||||
return ErrMethodNotAllowed
|
||||
}
|
||||
)
|
||||
|
||||
// New creates an instance of Echo.
|
||||
func New() (e *Echo) {
|
||||
e = &Echo{
|
||||
Server: new(http.Server),
|
||||
TLSServer: new(http.Server),
|
||||
AutoTLSManager: autocert.Manager{
|
||||
Prompt: autocert.AcceptTOS,
|
||||
},
|
||||
Logger: log.New("echo"),
|
||||
colorer: color.New(),
|
||||
maxParam: new(int),
|
||||
}
|
||||
e.Server.Handler = e
|
||||
e.TLSServer.Handler = e
|
||||
e.HTTPErrorHandler = e.DefaultHTTPErrorHandler
|
||||
e.Binder = &DefaultBinder{}
|
||||
e.Logger.SetLevel(log.ERROR)
|
||||
e.StdLogger = stdLog.New(e.Logger.Output(), e.Logger.Prefix()+": ", 0)
|
||||
e.pool.New = func() interface{} {
|
||||
return e.NewContext(nil, nil)
|
||||
}
|
||||
e.router = NewRouter(e)
|
||||
return
|
||||
}
|
||||
|
||||
// NewContext returns a Context instance.
func (e *Echo) NewContext(r *http.Request, w http.ResponseWriter) Context {
	return &context{
		request:  r,
		response: NewResponse(w, e),
		store:    make(Map),
		echo:     e,
		pvalues:  make([]string, *e.maxParam),
		handler:  NotFoundHandler,
	}
}

// Router returns the router.
func (e *Echo) Router() *Router {
	return e.router
}

// DefaultHTTPErrorHandler is the default HTTP error handler. It sends a JSON
// response with the status code.
func (e *Echo) DefaultHTTPErrorHandler(err error, c Context) {
	var (
		code = http.StatusInternalServerError
		msg  interface{}
	)

	if he, ok := err.(*HTTPError); ok {
		code = he.Code
		msg = he.Message
		if he.Internal != nil {
			err = fmt.Errorf("%v, %v", err, he.Internal)
		}
	} else if e.Debug {
		msg = err.Error()
	} else {
		msg = http.StatusText(code)
	}
	if _, ok := msg.(string); ok {
		msg = Map{"message": msg}
	}

	// Send response
	if !c.Response().Committed {
		if c.Request().Method == http.MethodHead { // Issue #608
			err = c.NoContent(code)
		} else {
			err = c.JSON(code, msg)
		}
		if err != nil {
			e.Logger.Error(err)
		}
	}
}

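// Usage sketch (not part of the vendored source): a custom error handler can be
// plugged in by assigning e.HTTPErrorHandler, for example to render an error page
// instead of the default JSON body. The "error.html" template name is hypothetical.
//
//	e.HTTPErrorHandler = func(err error, c echo.Context) {
//		code := http.StatusInternalServerError
//		if he, ok := err.(*echo.HTTPError); ok {
//			code = he.Code
//		}
//		if err := c.Render(code, "error.html", code); err != nil {
//			c.Logger().Error(err)
//		}
//	}
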
// Pre adds middleware to the chain which is run before the router.
func (e *Echo) Pre(middleware ...MiddlewareFunc) {
	e.premiddleware = append(e.premiddleware, middleware...)
}

// Use adds middleware to the chain which is run after the router.
func (e *Echo) Use(middleware ...MiddlewareFunc) {
	e.middleware = append(e.middleware, middleware...)
}

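// Usage sketch (not part of the vendored source): a hand-rolled MiddlewareFunc that
// sets a response header before calling the next handler; the header name is made up
// for illustration.
//
//	e.Use(func(next echo.HandlerFunc) echo.HandlerFunc {
//		return func(c echo.Context) error {
//			c.Response().Header().Set("X-Example", "1")
//			return next(c)
//		}
//	})
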
// CONNECT registers a new CONNECT route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) CONNECT(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodConnect, path, h, m...)
}

// DELETE registers a new DELETE route for a path with matching handler in the router
// with optional route-level middleware.
func (e *Echo) DELETE(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodDelete, path, h, m...)
}

// GET registers a new GET route for a path with matching handler in the router
// with optional route-level middleware.
func (e *Echo) GET(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodGet, path, h, m...)
}

// HEAD registers a new HEAD route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) HEAD(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodHead, path, h, m...)
}

// OPTIONS registers a new OPTIONS route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) OPTIONS(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodOptions, path, h, m...)
}

// PATCH registers a new PATCH route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) PATCH(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodPatch, path, h, m...)
}

// POST registers a new POST route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) POST(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodPost, path, h, m...)
}

// PUT registers a new PUT route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) PUT(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodPut, path, h, m...)
}

// TRACE registers a new TRACE route for a path with matching handler in the
// router with optional route-level middleware.
func (e *Echo) TRACE(path string, h HandlerFunc, m ...MiddlewareFunc) *Route {
	return e.Add(http.MethodTrace, path, h, m...)
}

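// Usage sketch (not part of the vendored source): registering routes with path
// parameters and route-level middleware. getUser, createUser, deleteUser and
// requireAuth are hypothetical handlers/middleware.
//
//	e.GET("/users/:id", getUser)
//	e.POST("/users", createUser, requireAuth)
//	e.DELETE("/users/:id", deleteUser, requireAuth)
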
// Any registers a new route for all HTTP methods and path with matching handler
// in the router with optional route-level middleware.
func (e *Echo) Any(path string, handler HandlerFunc, middleware ...MiddlewareFunc) []*Route {
	routes := make([]*Route, len(methods))
	for i, m := range methods {
		routes[i] = e.Add(m, path, handler, middleware...)
	}
	return routes
}

// Match registers a new route for multiple HTTP methods and path with matching
// handler in the router with optional route-level middleware.
func (e *Echo) Match(methods []string, path string, handler HandlerFunc, middleware ...MiddlewareFunc) []*Route {
	routes := make([]*Route, len(methods))
	for i, m := range methods {
		routes[i] = e.Add(m, path, handler, middleware...)
	}
	return routes
}

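// Usage sketch (not part of the vendored source): Any binds every method in the
// package-level methods table, while Match restricts the registration to a chosen
// set. healthHandler and reportHandler are hypothetical.
//
//	e.Any("/healthz", healthHandler)
//	e.Match([]string{http.MethodGet, http.MethodHead}, "/reports", reportHandler)
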
// Static registers a new route with path prefix to serve static files from the
// provided root directory.
func (e *Echo) Static(prefix, root string) *Route {
	if root == "" {
		root = "." // For security we want to restrict to CWD.
	}
	return static(e, prefix, root)
}

// static registers the prefix handler through the minimal routing interface i,
// which is implemented by both Echo and Group.
func static(i i, prefix, root string) *Route {
	h := func(c Context) error {
		p, err := url.PathUnescape(c.Param("*"))
		if err != nil {
			return err
		}
		name := filepath.Join(root, path.Clean("/"+p)) // "/"+ for security
		return c.File(name)
	}
	i.GET(prefix, h)
	if prefix == "/" {
		return i.GET(prefix+"*", h)
	}

	return i.GET(prefix+"/*", h)
}

// File registers a new route with path to serve a static file with optional route-level middleware.
func (e *Echo) File(path, file string, m ...MiddlewareFunc) *Route {
	return e.GET(path, func(c Context) error {
		return c.File(file)
	}, m...)
}

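// Usage sketch (not part of the vendored source): serving a directory tree and a
// couple of single files; the paths below are made up for illustration.
//
//	e.Static("/assets", "public/assets") // GET /assets/app.js -> public/assets/app.js
//	e.File("/", "public/index.html")
//	e.File("/favicon.ico", "public/favicon.ico")
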
// Add registers a new route for an HTTP method and path with matching handler
// in the router with optional route-level middleware.
func (e *Echo) Add(method, path string, handler HandlerFunc, middleware ...MiddlewareFunc) *Route {
	name := handlerName(handler)
	e.router.Add(method, path, func(c Context) error {
		h := handler
		// Chain middleware
		for i := len(middleware) - 1; i >= 0; i-- {
			h = middleware[i](h)
		}
		return h(c)
	})
	r := &Route{
		Method: method,
		Path:   path,
		Name:   name,
	}
	e.router.routes[method+path] = r
	return r
}

// Group creates a new router group with prefix and optional group-level middleware.
func (e *Echo) Group(prefix string, m ...MiddlewareFunc) (g *Group) {
	g = &Group{prefix: prefix, echo: e}
	g.Use(m...)
	return
}

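// Usage sketch (not part of the vendored source): grouping routes under a common
// prefix with group-level middleware. adminOnly, statsHandler and createUser are
// hypothetical.
//
//	admin := e.Group("/admin", adminOnly)
//	admin.GET("/stats", statsHandler) // served at /admin/stats
//	admin.POST("/users", createUser)  // served at /admin/users
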
// URI generates a URI from handler.
func (e *Echo) URI(handler HandlerFunc, params ...interface{}) string {
	name := handlerName(handler)
	return e.Reverse(name, params...)
}

// URL is an alias for the `URI` function.
func (e *Echo) URL(h HandlerFunc, params ...interface{}) string {
	return e.URI(h, params...)
}

// Reverse generates a URL from a route name and the provided parameters.
func (e *Echo) Reverse(name string, params ...interface{}) string {
	uri := new(bytes.Buffer)
	ln := len(params)
	n := 0
	for _, r := range e.router.routes {
		if r.Name == name {
			for i, l := 0, len(r.Path); i < l; i++ {
				if r.Path[i] == ':' && n < ln {
					// Skip the ":name" segment and substitute the next parameter value.
					for ; i < l && r.Path[i] != '/'; i++ {
					}
					uri.WriteString(fmt.Sprintf("%v", params[n]))
					n++
				}
				if i < l {
					uri.WriteByte(r.Path[i])
				}
			}
			break
		}
	}
	return uri.String()
}

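// Usage sketch (not part of the vendored source): routes are named after their
// handler by default, so URI can rebuild a path from a handler and its parameters.
// getUser is a hypothetical handler.
//
//	e.GET("/users/:id", getUser)
//	e.URI(getUser, 7) // "/users/7"
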
// Routes returns the registered routes.
func (e *Echo) Routes() []*Route {
	routes := make([]*Route, 0, len(e.router.routes))
	for _, v := range e.router.routes {
		routes = append(routes, v)
	}
	return routes
}

// AcquireContext returns an empty `Context` instance from the pool.
// You must return the context by calling `ReleaseContext()`.
func (e *Echo) AcquireContext() Context {
	return e.pool.Get().(Context)
}

// ReleaseContext returns the `Context` instance back to the pool.
// You must call it after `AcquireContext()`.
func (e *Echo) ReleaseContext(c Context) {
	e.pool.Put(c)
}

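// Usage sketch (not part of the vendored source): borrowing a Context outside the
// normal request flow; always return it to the pool, for example with defer.
//
//	c := e.AcquireContext()
//	defer e.ReleaseContext(c)
//	// ... hand c to code that only needs access to the Echo instance, e.g. c.Echo().Routes()
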
// ServeHTTP implements the `http.Handler` interface, which serves HTTP requests.
func (e *Echo) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Acquire context
	c := e.pool.Get().(*context)
	c.Reset(r, w)

	h := NotFoundHandler

	if e.premiddleware == nil {
		e.router.Find(r.Method, getPath(r), c)
		h = c.Handler()
		for i := len(e.middleware) - 1; i >= 0; i-- {
			h = e.middleware[i](h)
		}
	} else {
		h = func(c Context) error {
			e.router.Find(r.Method, getPath(r), c)
			h := c.Handler()
			for i := len(e.middleware) - 1; i >= 0; i-- {
				h = e.middleware[i](h)
			}
			return h(c)
		}
		for i := len(e.premiddleware) - 1; i >= 0; i-- {
			h = e.premiddleware[i](h)
		}
	}

	// Execute chain
	if err := h(c); err != nil {
		e.HTTPErrorHandler(err, c)
	}

	// Release context
	e.pool.Put(c)
}

// Start starts an HTTP server.
func (e *Echo) Start(address string) error {
	e.Server.Addr = address
	return e.StartServer(e.Server)
}

// StartTLS starts an HTTPS server.
func (e *Echo) StartTLS(address string, certFile, keyFile string) (err error) {
	if certFile == "" || keyFile == "" {
		return errors.New("invalid tls configuration")
	}
	s := e.TLSServer
	s.TLSConfig = new(tls.Config)
	s.TLSConfig.Certificates = make([]tls.Certificate, 1)
	s.TLSConfig.Certificates[0], err = tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return
	}
	return e.startTLS(address)
}

// StartAutoTLS starts an HTTPS server using certificates automatically installed from https://letsencrypt.org.
func (e *Echo) StartAutoTLS(address string) error {
	s := e.TLSServer
	s.TLSConfig = new(tls.Config)
	s.TLSConfig.GetCertificate = e.AutoTLSManager.GetCertificate
	return e.startTLS(address)
}

func (e *Echo) startTLS(address string) error {
	s := e.TLSServer
	s.Addr = address
	if !e.DisableHTTP2 {
		s.TLSConfig.NextProtos = append(s.TLSConfig.NextProtos, "h2")
	}
	return e.StartServer(e.TLSServer)
}

// StartServer starts a custom HTTP server.
func (e *Echo) StartServer(s *http.Server) (err error) {
	// Setup
	e.colorer.SetOutput(e.Logger.Output())
	s.ErrorLog = e.StdLogger
	s.Handler = e
	if e.Debug {
		e.Logger.SetLevel(log.DEBUG)
	}

	if !e.HideBanner {
		e.colorer.Printf(banner, e.colorer.Red("v"+Version), e.colorer.Blue(website))
	}

	if s.TLSConfig == nil {
		if e.Listener == nil {
			e.Listener, err = newListener(s.Addr)
			if err != nil {
				return err
			}
		}
		if !e.HidePort {
			e.colorer.Printf("⇨ http server started on %s\n", e.colorer.Green(e.Listener.Addr()))
		}
		return s.Serve(e.Listener)
	}
	if e.TLSListener == nil {
		l, err := newListener(s.Addr)
		if err != nil {
			return err
		}
		e.TLSListener = tls.NewListener(l, s.TLSConfig)
	}
	if !e.HidePort {
		e.colorer.Printf("⇨ https server started on %s\n", e.colorer.Green(e.TLSListener.Addr()))
	}
	return s.Serve(e.TLSListener)
}

// Close immediately stops the server.
// It internally calls `http.Server#Close()`.
func (e *Echo) Close() error {
	if err := e.TLSServer.Close(); err != nil {
		return err
	}
	return e.Server.Close()
}

// Shutdown stops the server gracefully.
// It internally calls `http.Server#Shutdown()`.
func (e *Echo) Shutdown(ctx stdContext.Context) error {
	if err := e.TLSServer.Shutdown(ctx); err != nil {
		return err
	}
	return e.Server.Shutdown(ctx)
}

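// Usage sketch (not part of the vendored source): starting the server in a goroutine
// and shutting it down gracefully on an interrupt signal (assumes the caller imports
// context, os, os/signal and time).
//
//	go func() {
//		if err := e.Start(":1323"); err != nil {
//			e.Logger.Info("shutting down the server")
//		}
//	}()
//
//	quit := make(chan os.Signal, 1)
//	signal.Notify(quit, os.Interrupt)
//	<-quit
//	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
//	defer cancel()
//	if err := e.Shutdown(ctx); err != nil {
//		e.Logger.Fatal(err)
//	}
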
// NewHTTPError creates a new HTTPError instance.
func NewHTTPError(code int, message ...interface{}) *HTTPError {
	he := &HTTPError{Code: code, Message: http.StatusText(code)}
	if len(message) > 0 {
		he.Message = message[0]
	}
	return he
}

// Error makes it compatible with the `error` interface.
func (he *HTTPError) Error() string {
	return fmt.Sprintf("code=%d, message=%v", he.Code, he.Message)
}

// SetInternal sets the error to HTTPError.Internal.
func (he *HTTPError) SetInternal(err error) *HTTPError {
	he.Internal = err
	return he
}

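// Usage sketch (not part of the vendored source): returning an HTTPError with a
// custom message while keeping the underlying cause for the logs via SetInternal.
// loadUser is a hypothetical lookup function.
//
//	user, err := loadUser(id)
//	if err != nil {
//		return echo.NewHTTPError(http.StatusNotFound, "user not found").SetInternal(err)
//	}
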
// WrapHandler wraps `http.Handler` into `echo.HandlerFunc`.
func WrapHandler(h http.Handler) HandlerFunc {
	return func(c Context) error {
		h.ServeHTTP(c.Response(), c.Request())
		return nil
	}
}

// WrapMiddleware wraps `func(http.Handler) http.Handler` into `echo.MiddlewareFunc`.
func WrapMiddleware(m func(http.Handler) http.Handler) MiddlewareFunc {
	return func(next HandlerFunc) HandlerFunc {
		return func(c Context) (err error) {
			m(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				c.SetRequest(r)
				err = next(c)
			})).ServeHTTP(c.Response(), c.Request())
			return
		}
	}
}

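// Usage sketch (not part of the vendored source): reusing standard net/http pieces
// inside Echo, here the pprof index handler (from net/http/pprof) and a hypothetical
// someStdMiddleware of type func(http.Handler) http.Handler.
//
//	e.GET("/debug/pprof/", echo.WrapHandler(http.HandlerFunc(pprof.Index)))
//	e.Use(echo.WrapMiddleware(someStdMiddleware))
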
func getPath(r *http.Request) string {
	path := r.URL.RawPath
	if path == "" {
		path = r.URL.Path
	}
	return path
}

func handlerName(h HandlerFunc) string {
	t := reflect.ValueOf(h).Type()
	if t.Kind() == reflect.Func {
		return runtime.FuncForPC(reflect.ValueOf(h).Pointer()).Name()
	}
	return t.String()
}

// // PathUnescape wraps `url.PathUnescape`
// func PathUnescape(s string) (string, error) {
// 	return url.PathUnescape(s)
// }

// tcpKeepAliveListener sets TCP keep-alive timeouts on accepted
// connections. It's used by ListenAndServe and ListenAndServeTLS so
// dead TCP connections (e.g. closing laptop mid-download) eventually
// go away.
type tcpKeepAliveListener struct {
	*net.TCPListener
}

func (ln tcpKeepAliveListener) Accept() (c net.Conn, err error) {
	tc, err := ln.AcceptTCP()
	if err != nil {
		return
	}
	tc.SetKeepAlive(true)
	tc.SetKeepAlivePeriod(3 * time.Minute)
	return tc, nil
}

func newListener(address string) (*tcpKeepAliveListener, error) {
	l, err := net.Listen("tcp", address)
	if err != nil {
		return nil, err
	}
	return &tcpKeepAliveListener{l.(*net.TCPListener)}, nil
}
Some files were not shown because too many files have changed in this diff.