react.js, user-interface, state
Answer: You can refactor that component into a main component, and each list will have its own component. Then you can reuse the list component multiple times:
export default function SearchBarSplit() {
/** Active list is used to determine whether the search list should pop up or not */
const [activeList, setActiveList] = useState<ActiveList>(null);
return (
<>
<SearchDiv listName="1" list={list1} activeList={activeList} setActiveList={setActiveList} />
<SearchDiv listName="2" list={list2} activeList={activeList} setActiveList={setActiveList} />
Active list: <strong>{activeList}</strong>
</>
);
}
function SearchDiv({listName, list, activeList, setActiveList}: {listName: '1' | '2', list: List, activeList: ActiveList, setActiveList: StateUpdater<ActiveList>}){
const [searchList, setSearchList] = useState<null | SearchList>(null);
const [cursor, setCursor] = useState<Cursor>(0);
const [selectedItem, setSelectedItem] = useState<null | SelectedItem>(null);
...
Full code.
Head over to Managing State – React to learn more about the many ways to manage state. | {
"domain": "codereview.stackexchange",
"id": 45616,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "react.js, user-interface, state",
"url": null
} |
python, python-3.x
Title: Converting a dict to a list + ID
Question: I wrote this code:
PROJECTS_LIST = [
project if not project.update({"project_id": project_id}) else None for project_id, project in PROJECTS.items()
]
where PROJECTS is a dict. The goal is to convert a dict like {"project123": {"a": "b"}} to [{"project_id": "project123", "a": "b"}]
I worry that this isn't the best approach.
Answer:
project if not project.update({"project_id": project_id}) else None
The above section is rather odd. I would prefer one of the following solutions, depending on the Python version:
Python 3.9+: use the dictionary merging operator |:
project | {"project_id": project_id}
Before Python 3.9: the nicest alternative, in my opinion, is dictionary unpacking:
{**project, "project_id": project_id}
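As a side note, neither replacement mutates the inner dicts, whereas the original update()-based version does; a quick runnable sketch using the example dict from the question:

```python
# Example dict from the question.
PROJECTS = {"project123": {"a": "b"}}

# Python 3.9+: the | operator builds a new merged dict.
merged_new = [
    project | {"project_id": project_id}
    for project_id, project in PROJECTS.items()
]

# Before Python 3.9: dictionary unpacking achieves the same merge.
merged_old = [
    {**project, "project_id": project_id}
    for project_id, project in PROJECTS.items()
]

# Both forms leave PROJECTS untouched.
assert merged_new == merged_old == [{"a": "b", "project_id": "project123"}]
assert PROJECTS == {"project123": {"a": "b"}}
```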
I'd recommend having a newline before the for, but otherwise the code looks fine.
PROJECTS_LIST = [
project | {"project_id": project_id}
for project_id, project in PROJECTS.items()
] | {
"domain": "codereview.stackexchange",
"id": 45617,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x",
"url": null
} |
c++, functional-programming, c++17, template, template-meta-programming
Title: Function composition in the context of data processing pipelines
Question: Prior Notification
This follows a previous review of mine that addressed the core helper function named make_skippable.
The composition implementation presented here is heavily inspired by another review that already introduced the basic concept: Function composition in C++. Thanks to Nestor for providing the pattern.
The Problem
This concerns the construction of data processing pipelines. Given a series of callables, each designed to transform input into output, the goal is to find an intuitive and straightforward method to link these callables together in a chain, by creating a single, unified callable, the composition. The composition should be capable of accepting the same argument as the first callable in the chain and should return the output of the last callable in the chain. That's basically it. Below is a detailed list of requirements that any effective solution should satisfy. | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
Language Standard: c++17
Ease of Use: constructing a composition out of given callables should be as straightforward as possible
Supported Callable Types: functions, lambdas, and functors
Callable Signatures: All callables to be composed must accept a single argument and return a non-void type. Callables that return void are excluded as they cannot be chained in data processing flows.
Composition Object: The resulting composition is a callable itself; as such, it is assignable to std::function.
Breakable Chain Mechanism: Any callable may break the chain by not providing a result. As a consequence, the subsequent callables are not executed (skipped). Not providing a result is considered a normal outcome and does not need to be an error.
Error Handling: The implementation should be exception-safe. Exceptions thrown by any callable in the chain should be propagated to the caller of the composition.
Generic and Overloaded Callable Support: Generic lambdas, generic functors and overloaded functors shall be supported as callables to be composed. This means a single composed chain might take different input argument types that might even result in different output types. Thus, the composition signature is as flexible as the signatures of the composed callables themselves (see Generic Lambda Example below).
Compile-Time Signature Validation: In case of mismatching callable signatures, meaningful compile-time error messages should be provided.
Move Semantics and Perfect Forwarding: The implementation must fully support move semantics for callables and their arguments.
My Solution
Basically my solution provides a function named compose that can be used as follows.
Basic Usage Example
auto lambda_1 = [](bool flag) -> int { return flag ? 7 : 0; };
auto lambda_2 = [](int value) -> std::string { return std::to_string(value); };
auto lambda_3 = [](const std::string& string) -> std::string { return string + string; };
auto composition = compose(lambda_1, lambda_2, lambda_3); | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
std::optional<std::string> result = composition(true);
assert(result == "77");
Generic Lambda Example
auto generic_lambda = DisablingOptionalArgumentFn{[](auto arg) { return arg; }};
auto composition = compose(generic_lambda, generic_lambda);
auto result1 = composition(true);
assert(result1 == true);
auto result2 = composition("string");
assert(result2 == "string");
Chain Breaking Example
auto breaking_lambda = [](bool) -> std::optional<int> { return std::nullopt; };
auto subsequent_lambda = [](int) -> int { return 0; };
auto composition = compose(breaking_lambda, subsequent_lambda, subsequent_lambda);
std::optional<int> result = composition(true);
assert(!result.has_value());
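For readers more comfortable outside C++, the skip-on-no-result semantics of these examples can be mimicked in a rough Python analogue, where None plays the role of std::nullopt (all names below are illustrative, not part of the reviewed code):

```python
def compose(*fns):
    """Chain single-argument callables; a None result breaks the chain."""
    def composition(arg):
        result = arg
        for fn in fns:
            if result is None:
                return None  # an upstream callable broke the chain; skip fn
            result = fn(result)
        return result
    return composition

# Basic usage: True -> 7 -> "7" -> "77"
to_int = lambda flag: 7 if flag else 0
to_str = lambda value: str(value)
double = lambda s: s + s
assert compose(to_int, to_str, double)(True) == "77"

# Chain breaking: the first callable yields no result, the rest are skipped.
breaking = lambda _: None
assert compose(breaking, to_str, double)(True) is None
```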
The Implementation
// A type trait that checks if a given type is std::optional.
template <typename>
struct IsOptional : std::false_type {};
template <typename T>
struct IsOptional<std::optional<T>> : std::true_type {};
// A type trait that wraps a given type in std::optional if it isn't already.
template <typename T>
struct EnsureOptional {
using Type = std::optional<T>;
};
template <typename U>
struct EnsureOptional<std::optional<U>> {
using Type = std::optional<U>;
};
// A function that takes any value and ensures it is wrapped in std::optional.
template <typename TArg>
auto ensure_optional(TArg&& arg) {
using OptionalType = typename EnsureOptional<std::decay_t<TArg>>::Type;
return OptionalType{std::forward<TArg>(arg)};
} | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
// A type trait providing the return type of TFn including compile-time checks
template <typename TFn, typename TArg>
struct InvokeResult {
static_assert(std::is_invocable_v<TFn, TArg>, "Callable TFn does not support arguments of type TArg");
static_assert(not std::is_invocable_v<TFn, std::optional<TArg>>,
"Callable TFn may not support an argument of type std::optional");
using Type = typename std::invoke_result<TFn, TArg>::type;
static_assert(not std::is_void_v<Type>, "Result of TFn may not be void");
};
// Helper template function that transforms a given function (Fn) into another one (FnSkippable). FnSkippable expects
// Fn's argument wrapped in a std::optional. Moreover, FnSkippable returns Fn's result wrapped in a std::optional,
// if it is not already. When calling FnSkippable, Fn is executed unless FnSkippable is called with a
// std::nullopt. In this case, Fn is not executed (skipped) and FnSkippable simply returns a std::nullopt.
template <typename TFn>
auto make_skippable(TFn&& fn) {
return [fn = std::forward<TFn>(fn)](auto&& optional_arg) mutable {
using OptionalArg = std::decay_t<decltype(optional_arg)>;
using ValueArg = typename OptionalArg::value_type;
using FnResult = typename InvokeResult<TFn, ValueArg>::Type;
using OptionalFnResult = typename EnsureOptional<FnResult>::Type;
if (optional_arg.has_value()) {
auto&& unwrapped_value = std::forward<OptionalArg>(optional_arg).value();
return OptionalFnResult{fn(std::forward<decltype(unwrapped_value)>(unwrapped_value))};
}
// skip fn
return OptionalFnResult{std::nullopt};
};
} | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
// Helper function template overload that terminates the recursion. The given callable essentially represents the
// composition. To make sure that the composition is always called with an optional argument (as expected by the
// skippable callables) ensure_optional is applied to the argument.
template <typename TFn>
auto compose_skippables(TFn&& fn) {
return [fn = std::forward<TFn>(fn)](auto&& arg) mutable {
return fn(ensure_optional(std::forward<decltype(arg)>(arg)));
};
}
// Helper function template overload that creates a composition out of an arbitrary number of given callables. It
// composes the first two callables into a new lambda that, when called, executes the first callable and passes its
// result to the second callable. The function then recursively composes this combined lambda with the rest of the
// provided callables.
template <typename TFn1, typename TFn2, typename... TFnOthers>
auto compose_skippables(TFn1&& fn_1, TFn2&& fn_2, TFnOthers&&... fn_others) {
auto chained_fn = [fn_1 = std::forward<TFn1>(fn_1), fn_2 = std::forward<TFn2>(fn_2)](auto&& arg) mutable {
return fn_2(fn_1(std::forward<decltype(arg)>(arg)));
};
return compose_skippables(std::move(chained_fn), std::forward<TFnOthers>(fn_others)...);
}
// Function template that creates a composition out of the given callables. It first makes each callable "skippable" and
// then composes them into a single callable chain using compose_skippables.
template <typename... TFns>
auto compose(TFns&&... fns) {
return compose_skippables(make_skippable(std::forward<TFns>(fns))...);
}
// Helper template to remove support for optional arguments for TFn
template <typename TFn>
struct DisablingOptionalArgumentFn {
explicit DisablingOptionalArgumentFn(TFn&& fn) : m_fn{std::forward<TFn>(fn)} {} | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
template <typename TArg, typename = std::enable_if_t<not IsOptional<TArg>::value>>
auto operator()(TArg&& arg) {
return m_fn(std::forward<TArg>(arg));
}
private:
TFn m_fn;
};
Additional Remarks
Decision for std::optional: To allow callables not to return a result, std::optional was chosen, even though std::expected would be the preferable option; it is not available in C++17.
No Monadic Operations: It's recognized that C++23 introduced monadic operations for std::optional and std::expected, enabling the construction of pipelines through their application. However, these do not offer the straightforward composition functionality desired in this context.
No Support for Callables Accepting std::optional: As pointed out in the previous review by indi, supporting callables that accept a std::optional as argument raises the question of how to make them skippable in a reasonable way (see his question "What should I get in that last quadrant?"). Ease of use is a core goal here, so to keep it simple I decided not to raise this question at all and dropped support for callables accepting a std::optional; a static_assert was added accordingly. To address this, the DisablingOptionalArgumentFn helper template has been introduced, enabling users to adapt generic lambdas for use with this implementation, thereby raising awareness of this design choice. See the Generic Lambda Example above.
How do you feel about this implementation? I'm open to any feedback you might have :-)
Please find the implementation including tests at godbolt.
Answer: Issues: | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
There's a technical issue in the implementation: the design of the type trait InvokeResult. The problem lies in the usage of static_assert. While it seems like a nice feature, since it provides direct feedback, it comes with a hidden problem that may cause trouble for users of the library.
The issue is that, as the implementation stands, users cannot inspect the type trait. Say they try to test whether InvokeResult<A,B> is valid for some types A and B, and then perform different operations based on whether or not it is valid: the user writes generic code with several compile-time options for handling the input types, and in one of them checks whether the types can be composed. Unfortunately, by relying on static_assert your type trait raises a compiler error upon mere inspection, so the trait is not inspectable. That is not good for C++17 and would be much worse for C++20 and later, where such inspections are far more common due to the presence of concepts. It might not be that relevant in the current scenario, but it is a general design issue.
The function make_skippable(TFn&& fn) feels like a quick-fix implementation. The problem is with the output callable [...](auto&& optional_arg), as it technically accepts any input. I'd recommend writing a dedicated class that restricts input as much as possible according to the input callable type TFn. Normally, you'd want to express the restriction on the arguments in the function declaration rather than inside the function. Failing to meet this requirement is the reason why std::is_copy_assignable_v<std::vector<std::unique_ptr<int> > > returns true despite being clearly false: is_copy_assignable_v and other type traits and concepts perform shallow checks (they check whether the function is properly declared; they don't check whether it compiles successfully, as that would be perceived as an implementation bug). | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
There's a minor design inefficiency. I advise writing more general-purpose templates and reusing them rather than writing many overly specialized ones. The compose_skippables template is unnecessary; you don't need a dedicated function for that. Just write a simple, general compose and wrap the result in a function that converts the input into an optional and calls the composed function.
If you don't want users to use certain functions/classes, let them know by writing them inside the namespace "detail". It feels to me that some of the written functions weren't intended for end-users. There's a lot less scrutiny over functions not intended for end-users.
Design Choices:
"No Support for Callables Accepting std::optional" I believe not accepting functions with an optional input type is not a good design choice. I'd solve it by making such functions inherently unskippable. Otherwise, you'd need to either write a specialized class used only for the implementation whose purpose is to indicate whether to skip the next call or use two layered std::optional; neither option is easy or convenient to use.
I think you overuse lambda functions. They are convenient callables but limited in functionality. Consider writing a dedicated class when that offers an advantage. For instance, your output and intermediate callables technically accept any input type, which is not ideal as it may lead to bugs and confusion.
If you had written a custom class, you could have specified which input parameter it accepts via a declaration using input_type = ...; which permits inspection (impossible with lambdas, where such properties are difficult to query), and then propagated that input type to all intermediate callable types. Ultimately, the composed functions will have clear and type-safe input/output types. (The only problem is figuring out the initial input type, which we might require the user to declare). | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
I believe the composition order is not ideal. You have compose(A, B, C)(x) equal to C(B(A(x))), but I believe it should be A(B(C(x))).
Ideas To Improve Functionality:
As it is now, all functions accept a single argument. What about accepting several arguments? It might not make a big difference, but having a tuple of several types as input is never nice. But how do you deal with the fact that your intermediate callables can return only a single value? Simple: when one returns a tuple-like object, check whether calling the following function on the tuple's elements is an option, and do so in that case. | {
"domain": "codereview.stackexchange",
"id": 45618,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, functional-programming, c++17, template, template-meta-programming",
"url": null
} |
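The tuple idea at the end of the answer above can be prototyped quickly; here is a hypothetical Python sketch (my illustration, not part of the review) in which a tuple result is unpacked into the next callable whenever that callable can accept its elements:

```python
import inspect

def compose_splat(*fns):
    """Compose callables, unpacking a tuple result into the next call
    when the next callable accepts that many positional arguments."""
    def composition(*args):
        result = args
        for fn in fns:
            if not isinstance(result, tuple):
                result = (result,)
            try:
                # Unpack the tuple only if fn can bind all its elements.
                inspect.signature(fn).bind(*result)
                result = fn(*result)
            except TypeError:
                result = fn(result)  # otherwise pass the tuple whole
        return result
    return composition

divide = lambda a, b: divmod(a, b)        # returns a tuple (quotient, remainder)
describe = lambda q, r: f"{q} rem {r}"    # consumes the unpacked tuple
assert compose_splat(divide, describe)(17, 5) == "3 rem 2"
```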
python, multithreading
Title: Multi-threaded Error Handling Data Fetching Script
Question: I've been working on a Python script that fetches student data from an API using multiple threads for concurrency. The script retrieves both student information and prospectus data and saves them to JSON files.
I would appreciate feedback on the following aspects of the code:
Exception Handling: The current exception handling is quite broad (except:). I'd like to know if there are better practices for handling exceptions, especially related to network errors.
Code Duplication: I've noticed some duplication of code, especially when re-fetching data after encountering an invalid token. How can I refactor the code to make it more concise and maintainable?
Thread Safety: Given the concurrent nature of the script, I am particularly interested in your insights regarding potential race conditions, especially concerning the management of the TokenChecker list utilized for inter-thread communication. Are there more robust strategies to ensure thread safety in this context?
Note: The API vulnerability exploit mentioned here has already been sent to the dev team and has been patched. I'm just here to get some feedback on the code itself. Thanks for understanding!
import requests
from get_token import get_token
import json
import threading
from datetime import datetime
def write_file(filename, data):
f = open(filename, 'w')
f.write(data)
f.close
def main():
token = get_token()
headers = {
"Authorization" : token
}
def fetch_data(idnum, TokenChecker):
print(f"Fetching Student: {idnum}")
while True:
try:
getStudentResponse = requests.get(f'https://apiname/get_info?studid={idnum}', headers=headers, timeout=(3.05, 5))
getProspectusResponse = requests.get(f'https://apiname/prospectus?studid={idnum}', headers=headers, timeout=(3.05, 5)) | {
"domain": "codereview.stackexchange",
"id": 45619,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, multithreading",
"url": null
} |
student_status_code = getStudentResponse.status_code
prospectus_status_code = getProspectusResponse.status_code
if((student_status_code == 200 and prospectus_status_code == 200) or (student_status_code == 500 or prospectus_status_code == 500 )):
break
except:
print(f"Fetched Timeout on ID: {idnum}")
write_file(f'prospectus/{idnum.split("-")[0]}/{idnum}.json', getProspectusResponse.text)
write_file(f'student_info/{idnum.split("-")[0]}/{idnum}.json', getStudentResponse.text)
TokenChecker.append({"responses" : [getStudentResponse, getProspectusResponse]})
return [getStudentResponse, getProspectusResponse]
for i in range(2019, 2025):
max_id = 10000
num_threads = 50
for j in range(0, max_id, num_threads):
TokenChecker = []
threads = []
for k in range(num_threads):
idnum = f"{str(i).zfill(4)}-{str(j + k).zfill(4)}"
t = threading.Thread(target=fetch_data, args=(idnum, TokenChecker,))
t.daemon = True
threads.append(t)
for k in range(num_threads):
threads[k].start()
for k in range(num_threads):
threads[k].join()
getStudentResponse, getProspectusResponse = TokenChecker[len(TokenChecker) - 1]['responses']
student_info = json.loads(getStudentResponse.text)
prospectus = json.loads(getProspectusResponse.text)
try:
if(student_info['message'] == 'Token is invalid' or prospectus['message' == 'Token is invalid']):
print(f'Change Token at: {datetime.now().strftime("%d/%m/%Y %H:%M:%S")}')
headers = {
"Authorization": get_token()
}
threads = [] | {
"domain": "codereview.stackexchange",
"id": 45619,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, multithreading",
"url": null
} |
for k in range(num_threads):
idnum = f"{str(i).zfill(4)}-{str(j + k).zfill(4)}"
t = threading.Thread(target=fetch_data, args=(idnum, TokenChecker,))
t.daemon = True
threads.append(t)
for k in range(num_threads):
threads[k].start()
for k in range(num_threads):
threads[k].join()
except:
pass
print("Fetching Done.")
if __name__ == "__main__":
main()
Answer:
Exception Handling: The current exception handling is quite broad
(except:).
It is worse than broad in the main block; it is nonexistent, since you are swallowing all exceptions, making your program blind to errors and logic faults:
except:
pass
I would recommend logging the errors and stopping your program. Investigate the errors, and only continue once you've made your application more aware and more robust.
In the other try block you have this bit of code:
except:
print(f"Fetched Timeout on ID: {idnum}")
But it is misleading, because the error could be anything other than a timeout. Consider using a logger and logging.exception to dump the stack trace, which contains more useful details, down to the offending line number. A minimalistic example would be along these lines:
import logging
try:
do_something_bad()
except:
logging.exception("Exception occurred") | {
"domain": "codereview.stackexchange",
"id": 45619,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, multithreading",
"url": null
} |
Preferably, log to a file as well, not just the console, or you could miss error messages. The Python docs have a lot of details, which take some time to assimilate, but I feel every Python programmer should get acquainted with this module and use it extensively, not just for debugging purposes.
The requests module has its own exception classes; these are the exceptions you may want to handle. It is even possible to recover gracefully from transient network errors using urllib3's Retry capabilities, as described here.
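The transient-error recovery boils down to a small retry loop; below is a stdlib-only sketch of the idea (in the real script you would catch requests.exceptions.RequestException, or let urllib3's Retry do this for you):

```python
import time

def retry(fn, attempts=3, transient=(TimeoutError, ConnectionError), delay=0.0):
    """Call fn(), retrying on transient errors; re-raise after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except transient:
            if attempt == attempts:
                raise  # attempts exhausted: propagate to the caller
            time.sleep(delay)  # back off before the next try

# A flaky callable that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

assert retry(flaky) == "ok"
assert calls["n"] == 3
```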
Regarding the HTTP status code: you normally always expect a 200 response. Sometimes 201 is used in APIs that create resource objects. Checking for 500 does not suffice, you could for example stumble on 502, 503, and others + the 4xx class errors such as 404.
So any response other than 200 should normally be treated as an error.
This code would not be thread-safe if you were writing to the same file from several concurrent threads. Here it looks like you are writing to a different file name each time, so there is no problem in this case.
If you need to have multiple threads writing to the same file several approaches are possible, like using a queue, but I feel that threading.Lock is more convenient. It is not hard to implement and requires minimal refactoring on your end. Here you have a neat example: Thread-Safe Write to File in Python.
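Wiring up threading.Lock for this takes only a few lines; a minimal stdlib sketch (the file name and helper are illustrative only):

```python
import os
import tempfile
import threading

write_lock = threading.Lock()

def append_line(path, line):
    # Serialize writers so concurrent appends cannot interleave.
    with write_lock:
        with open(path, "a") as f:
            f.write(line + "\n")

path = os.path.join(tempfile.mkdtemp(), "shared.log")
threads = [threading.Thread(target=append_line, args=(path, f"row-{i}"))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with open(path) as f:
    lines = f.read().splitlines()
```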
Thread safety doesn't seem to be an immediate concern here, but server response needs to be checked more accurately.
For an application like that, it could make sense to output the last ID successfully fetched in case of exception/crash, and add a command line flag (argparse) to resume scraping from that ID, so that you do not start the process all over.
Lots of reading, but definitely useful and reusable information. | {
"domain": "codereview.stackexchange",
"id": 45619,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, multithreading",
"url": null
} |
java, jodatime
Title: Check if current time is within the timeframe using Joda-Time
Question: Novice Java developer here. I've never really used a time/date library before and I'm curious how an experienced developer would solve this. You're given 4 ints: startHour, startMinute, endHour and endMinute. Now, check if the current time is within the given timeframe. Is there a cleaner way of doing this than what I've done here:
private void checkTimeframe(int startHour, int startMinute, int endHour, int endMinute) {
LocalDateTime now = LocalDateTime.now();
LocalTime localTimeStart = new LocalTime(startHour, startMinute);
LocalTime localTimeEnd = new LocalTime(endHour, endMinute);
LocalDateTime startTime = new LocalDateTime(now.getYear(), now.getMonthOfYear(),
now.getDayOfMonth(), startHour, startMinute);
LocalDateTime endTime = new LocalDateTime(now.getYear(), now.getMonthOfYear(),
now.getDayOfMonth(), endHour, endMinute);
//Check if start/end is, for instance, 23:00 - 03:00
if (localTimeStart.isAfter(localTimeEnd) || localTimeStart.equals(localTimeEnd)) {
endTime = endTime.plusDays(1);
}
if ( (now.equals(startTime) || now.isAfter(startTime) ) && now.isBefore(endTime)) {
System.out.println("Ok, we're within start/end");
} else {
System.out.println("Outside start/end");
}
}
Answer: To me it looks like a simple check of a number against a range of numbers, so I would do it like this:
turn the hour and minute into one number, hhmm, to simplify the comparison
now it's a simple check against a range of numbers, taking into account the case of a cross-date boundary | {
"domain": "codereview.stackexchange",
"id": 45620,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, jodatime",
"url": null
} |
complete code:
private static void checkTimeframe(int startHour, int startMinute, int endHour, int endMinute) {
// "concatenate" hour and minute into one number
int startHourMinute = startHour * 100 + startMinute;
int endHourMinute = endHour * 100 + endMinute;
LocalDateTime now = LocalDateTime.now();
int nowHourMinute = now.getHour() * 100 + now.getMinute();
// if range within date - simple between boundaries check
if (startHourMinute <= endHourMinute) {
if (nowHourMinute >= startHourMinute && nowHourMinute <= endHourMinute) {
System.out.println("Ok, we're within start/end");
} else {
System.out.println("Outside start/end");
}
// else (cross date boundary range) - check if now date is either within range of yesterday or within range tomorrow
} else {
if (nowHourMinute >= startHourMinute || nowHourMinute <= endHourMinute) {
System.out.println("Ok, we're within start/end");
} else {
System.out.println("Outside start/end");
}
}
} | {
"domain": "codereview.stackexchange",
"id": 45620,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, jodatime",
"url": null
} |
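The hhmm trick in the answer above is easy to unit-test; here is a hypothetical Python transcription of the same logic:

```python
def in_timeframe(now_h, now_m, start_h, start_m, end_h, end_m):
    """Return True when now falls in [start, end], handling midnight wrap."""
    now, start, end = (now_h * 100 + now_m,
                       start_h * 100 + start_m,
                       end_h * 100 + end_m)
    if start <= end:                      # range lies within one day
        return start <= now <= end
    return now >= start or now <= end     # range crosses midnight

# 23:00 - 03:00 wraps past midnight.
assert in_timeframe(23, 30, 23, 0, 3, 0)
assert in_timeframe(1, 0, 23, 0, 3, 0)
assert not in_timeframe(12, 0, 23, 0, 3, 0)
# 09:00 - 17:00 is an ordinary same-day range.
assert in_timeframe(10, 0, 9, 0, 17, 0)
```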
python, fastapi
Title: Python REST API using FastAPI
Question: from fastapi import APIRouter, HTTPException, status
from api.database import database, user_table
from api.models.user import UserIn
from api.security import authenticate_user, get_password_hash, get_user, create_access_token
router = APIRouter()
@router.post("/register", status_code=201)
async def register(user: UserIn):
if await get_user(user.email):
raise HTTPException(status_code=400, detail="A user with that email already exists")
hashed_password = get_password_hash(user.password)
query = user_table.insert().values(email=user.email, password=hashed_password)
await database.execute(query)
return {"detail": "User created"}
@router.post("/login")
async def login(user: UserIn):
user = await authenticate_user(user.email, user.password)
access_token = create_access_token(user.email)
return {"access_token": access_token, "token_type": "bearer"}
User router
import datetime
from typing import Annotated
from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import ExpiredSignatureError, JWTError, jwt
from passlib.context import CryptContext
from api.database import database, user_table
SECRET_KEY = "2b31f2a"
ALGORITHM = "HS256"
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
pwd_context = CryptContext(schemes=["bcrypt"])
credentials_exception = HTTPException(
status_code=401,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"}
)
def access_token_expire_minutes() -> int:
return 30
def create_access_token(email: str):
expire = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=30)
jwt_data = {"sub": email, "exp": expire}
encoded_jwt = jwt.encode(jwt_data, key=SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
def get_password_hash(password: str) -> str:
return pwd_context.hash(password) | {
"domain": "codereview.stackexchange",
"id": 45621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, fastapi",
"url": null
} |
python, fastapi
def get_password_hash(password: str) -> str:
return pwd_context.hash(password)
def verify_password(plain_password: str, hashed_password: str) -> bool:
return pwd_context.verify(plain_password, hashed_password)
async def get_user(email:str):
query = user_table.select().where(user_table.c.email == email)
result = await database.fetch_one(query)
if result:
return result
async def authenticate_user(email: str, password: str):
user = await get_user(email)
if not user:
raise credentials_exception
if not verify_password(password, user.password):
raise credentials_exception
return user
async def get_current_user(token: Annotated[str, Depends(oauth2_scheme)]):
try:
payload = jwt.decode(token, key=SECRET_KEY, algorithms=[ALGORITHM])
email = payload.get("sub")
if email is None:
raise credentials_exception
except ExpiredSignatureError as e:
raise HTTPException(
status_code=401,
detail="Token has expired",
headers={"WWW-Authenticate": "Bearer"}
) from e
except JWTError as e:
raise credentials_exception from e
user = await get_user(email=email)
if user is None:
raise credentials_exception
return user
Security module
So the way it is configured right now makes it difficult to implement a logout. If we were to support logout by adding a column called session_active, or a column called secret to hold a per-user secret, we would be forced to call get_user several times. Given that, I am wondering if there's any issue with the code.
Answer: secrets in the source ?!?
SECRET_KEY = "2b31f2a" | {
"domain": "codereview.stackexchange",
"id": 45621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, fastapi",
"url": null
} |
python, fastapi
Answer: secrets in the source ?!?
SECRET_KEY = "2b31f2a"
This is a problem.
Delete the line.
And generate a new key, invalidating the old one.
Secrets go into a vault, an HSM, or maybe into env vars or a text file.
Source repos like git are good at many things,
but managing secrets is not one of them.
Maybe we wish to use the identifier PEPPER for this secret?
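A minimal sketch of keeping the secret out of the source, loading it from an environment variable at startup (the variable name JWT_SECRET_KEY is hypothetical, not from the original code):

```python
import os

def load_secret_key() -> str:
    # fail fast at startup if the deployment forgot to provision the secret
    secret = os.environ.get("JWT_SECRET_KEY")
    if not secret:
        raise RuntimeError("JWT_SECRET_KEY is not set; refusing to start")
    return secret
```

Failing loudly at import/startup time beats discovering at the first login attempt that tokens were signed with an empty string.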
hash vs cleartext
query = user_table.insert().values(email=user.email, password=hashed_password)
I don't understand what's going on with that line.
Let's leave aside that what we're assigning is clearly an insert command
rather than a select query.
Somewhere in the backend we're apparently passing around
a cleartext password, yet we invite confusion between
whether we persisted it in cleartext form or in hashed form?
That's going to be a problem for every maintenance engineer
who joins the project, and for each annual security review you conduct.
If it's an argon2id hash,
then call it a hash.
Consider this subsequent line:
async def login(user: UserIn):
user = await authenticate_user(user.email, user.password)
At this point, I have no idea if user.password denotes some
cleartext that, after a breach, could be used to authenticate
to unrelated sites, or if it actually denotes a hashed credential.
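A sketch of the naming fix: make the key say "hash" so cleartext can never masquerade as a stored credential. The SHA-256 call below is a stdlib placeholder only (a real system would use a proper password-hashing scheme), and the password_hash key name is a hypothetical rename:

```python
import hashlib

def get_password_hash(cleartext: str) -> str:
    # placeholder only: production code would use argon2id/bcrypt, not bare SHA-256
    return hashlib.sha256(cleartext.encode()).hexdigest()

# the column/key name now states unambiguously that we persist a hash
user_row = {"email": "a@example.com", "password_hash": get_password_hash("hunter2")}
```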
non-SI units
def access_token_expire_minutes() -> int:
Thank you for explaining the units very clearly; that's helpful.
Consider adjusting by a factor of 60,
so we can demote the "it's in seconds!" aspect to just a comment.
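A sketch of that factor-of-60 adjustment — the function speaks the SI base unit, and the human-friendly intent survives as a comment:

```python
def access_token_expire_seconds() -> int:
    # 30 minutes, expressed in seconds
    return 30 * 60
```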
hardcoded parameter
This is very nice:
import datetime as dt
def create_access_token(email: str):
expire = dt.datetime.now(dt.timezone.utc) + dt.timedelta(minutes=30)
Consider throwing that parameter into the signature as an optional kwarg:
def create_access_token(email: str, expire_sec: int = 30 * 60):
expire = dt.datetime.now(dt.timezone.utc) + dt.timedelta(seconds=expire_sec) | {
"domain": "codereview.stackexchange",
"id": 45621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, fastapi",
"url": null
} |
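The expiry-as-kwarg suggestion above can be exercised with a dependency-free sketch of just the expiry computation (the jwt.encode call is left out so this runs without the jose library):

```python
import datetime as dt

def token_expiry(expire_sec: int = 30 * 60) -> dt.datetime:
    # caller can override the lifetime without editing the function body
    return dt.datetime.now(dt.timezone.utc) + dt.timedelta(seconds=expire_sec)
```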
python, fastapi
nit: Use $ black *.py
to make this more legible, imposing "4 blanks per tab stop".
Annotating email as str is helpful, thank you.
Feel free to tell us about the -> return type, as well.
View it with type(encoded_jwt),
or use reveal_type() with mypy.
implicit return
async def get_user(email: str):
...
if result:
return result
This isn't great.
And there's no signature annotation of a return type,
which keeps mypy from being as helpful as it could be.
Consider using $ mypy --strict ...
If None comes back as the result,
we just fall off the end of the function,
implicitly returning None.
Please tack on an explicit return None to such functions,
to alert maintenance engineers of the "else" behavior.
But here, the test is pointless.
Unconditionally return result, and you're done.
Let the signature annotation do the work of pointing
out it's an optional row we're sending back.
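A sketch of that shape, with an in-memory dict standing in for the database (the _FAKE_DB store is hypothetical): the body returns unconditionally, and the Optional annotation documents the None case.

```python
from typing import Optional

_FAKE_DB = {"alice@example.com": {"email": "alice@example.com"}}

def get_user(email: str) -> Optional[dict]:
    # unconditional return; the signature tells readers (and mypy) None is possible
    return _FAKE_DB.get(email)
```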
cached result
Do we have "too many DB calls" that get a user?
No, I don't think so, especially since we're asking
for exact equality on an indexed column,
hopefully one that has a UNIQUE constraint on it.
But suppose the query takes more than a millisecond,
maybe the RDBMS is across a WAN link or something silly like that,
and async lookups aren't enough.
Then you might consider decorating with
@lru_cache(maxsize=1).
If you want to enforce a timeout, write your own
@property
getter, or tack on a final epoch: int parameter
which will force a cache miss whenever the epoch changes.
Make the epoch be current time rounded to two seconds,
or whatever you're comfortable with.
cipher choice
pwd_context = CryptContext(schemes=["bcrypt"]) | {
"domain": "codereview.stackexchange",
"id": 45621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, fastapi",
"url": null
} |
python, fastapi
Documentation written since 2015
will typically recommend a different scheme, a
contest winner.
The passlib
guidance
appears to be on the somewhat dated side.
It would be helpful to future maintenance engineers to add a comment
or docstring explaining the design rationale for not going with
argon2id.
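Argon2 via passlib needs a third-party backend (e.g. argon2-cffi), so as a dependency-free illustration of the salt-plus-memory-hard-KDF pattern, here is stdlib hashlib.scrypt — a sketch of the shape, not a recommendation over argon2id:

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    # fresh random salt per user; scrypt params per RFC 7914 test-vector scale
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26, dklen=32)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26, dklen=32)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)
```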
explicit logout
Consider adding an additional column to the user table, holding a timestamp.
Use it to record the time of last login,
and pass that timestamp around when making
"is this request authenticated?" decisions.
Then your /logout button simply sets it
to a timestamp in January 1970, or maybe makes it NULL,
to invalidate future replay attempts of that credential.
End user will have to /login after that
in order to put a fresh timestamp there. | {
"domain": "codereview.stackexchange",
"id": 45621,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, fastapi",
"url": null
} |
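The last-login-timestamp logout scheme described in the answer can be sketched with a dict standing in for the user table (the USERS store and field names are hypothetical): a token is valid only while its issue time is at or after the recorded last login, so nulling that column invalidates every outstanding token.

```python
import time
from typing import Optional

USERS: dict = {"alice@example.com": {"last_login": None}}

def login(email: str) -> float:
    token_issued_at = time.time()
    USERS[email]["last_login"] = token_issued_at
    return token_issued_at

def logout(email: str) -> None:
    # NULL-ing the timestamp invalidates replay of any existing credential
    USERS[email]["last_login"] = None

def is_token_valid(email: str, token_issued_at: float) -> bool:
    last_login: Optional[float] = USERS[email]["last_login"]
    return last_login is not None and token_issued_at >= last_login
```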