In my Flutter app, the interstitial ad doesn't cover the whole screen.
I tested both my app and a blank app and got the same issue: the top of the screen is not covered by the ad, on the same device (Pixel 6 Pro, API 34 emulator). I'm on the latest Flutter version and I use the google_mobile_ads: ^4.0.0 package.
```dart
void _createInterstitialAd() {
  InterstitialAd.load(
    adUnitId: 'ca-app-pub-3940256099942544/1033173712',
    request: request,
    adLoadCallback: InterstitialAdLoadCallback(
      onAdLoaded: (InterstitialAd ad) {
        print('$ad loaded');
        _interstitialAd = ad;
        _numInterstitialLoadAttempts = 0;
        _interstitialAd!.setImmersiveMode(true);
      },
      onAdFailedToLoad: (LoadAdError error) {
        print('InterstitialAd failed to load: $error.');
        _numInterstitialLoadAttempts += 1;
        _interstitialAd = null;
        if (_numInterstitialLoadAttempts < 3) {
          _createInterstitialAd();
        }
      },
    ),
  );
}

void _showInterstitialAd() {
  if (_interstitialAd == null) {
    print('Warning: attempt to show interstitial before loaded.');
    return;
  }
  _interstitialAd!.fullScreenContentCallback = FullScreenContentCallback(
    onAdShowedFullScreenContent: (InterstitialAd ad) =>
        print('ad onAdShowedFullScreenContent.'),
    onAdDismissedFullScreenContent: (InterstitialAd ad) {
      print('$ad onAdDismissedFullScreenContent.');
      ad.dispose();
      _createInterstitialAd();
    },
    onAdFailedToShowFullScreenContent: (InterstitialAd ad, AdError error) {
      print('$ad onAdFailedToShowFullScreenContent: $error');
      ad.dispose();
      _createInterstitialAd();
    },
  );
  _interstitialAd!.show();
  _interstitialAd = null;
}

@override
void initState() {
  _createInterstitialAd();
  // checkconsent();
  super.initState();
}
```
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/cQPux.jpg
|
Step 1:
```java
@Component
public class RequestInterceptor implements HandlerInterceptor {
    @Override
    public boolean preHandle(@Nullable HttpServletRequest request,
                             @Nullable HttpServletResponse response, @Nullable Object object) {
        System.out.println("interceptor");
        try {
            // printing the body (note: this reads the request's input stream)
            System.out.println(new String(request.getInputStream().readAllBytes()));
        } catch (IOException e) {
            System.out.println("interceptor error: " + e.getMessage());
        }
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object object, ModelAndView model) {
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object object, Exception exception) {
    }
}
```
Step 2:
```java
@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new RequestInterceptor());
    }
}
```
|
{"Voters":[{"Id":16076,"DisplayName":"Mitch Wheat"},{"Id":839601,"DisplayName":"gnat"},{"Id":13086128,"DisplayName":"Goku - stands with Palestine"}]} |
I am making an editor for [TavernAI character cards](https://github.com/malfoyslastname/character-card-spec-v2/blob/main/spec_v1.md) that encodes the character data into a PNG tEXt chunk.
It seemed to work fine at first, until I loaded a character card into [Silly Tavern](https://github.com/SillyTavern/SillyTavern), a popular TavernAI fork, at which point it said that there was no character metadata. It turns out that ImageSharp was putting long text metadata in zTXt format, which is not supported by the spec.
This is how I approached saving a character card:
```cs
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Png.Chunks;
using SixLabors.ImageSharp.PixelFormats;
using var image = new Image<Rgba32>(1, 1);
var imageMetadata = image.Metadata.GetPngMetadata();
var metadataList = (List<PngTextData>)imageMetadata.TextData;
metadataList.RemoveAll(m => m.Keyword == "chara");
var aLotOfText = new string('a', 50000); // It seems to compress only if the text is big
var metadataEntry = new PngTextData("chara", aLotOfText, string.Empty, string.Empty);
metadataList.Add(metadataEntry);
image.SaveAsPng("test.png");
```
You can check that it outputs zTXt metadata by running `exiftool -v test.png`. Is there a way to tell SixLabors.ImageSharp to write long metadata chunks as tEXt and not zTXt? |
How to tell SixLabors.ImageSharp to write metadata as tEXt and not zTXt? |
|c#|png|exif|imagesharp|sixlabors.imagesharp| |
Why do we need to add parentheses when adding a function as an expression to a lambda?
```python
def print_():
    print('hello')

show = lambda: print_
show()
```
gives `<function print_ at 0x0000019046503560>`, whereas
```python
show1 = lambda: print_()
show1()
```
gives `hello`. Why this behaviour? |
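The observed difference can be reproduced and inspected directly; a minimal sketch, reusing the names from the question:

```python
def print_():
    print('hello')

show = lambda: print_        # the body evaluates to the function object; nothing is called
result = show()              # result is the function print_ itself, so the REPL shows its repr
print(callable(result))      # True: we got back something that can still be called

show1 = lambda: print_()     # the parentheses make the body *call* print_
show1()                      # prints 'hello'; the lambda returns print_'s result, None
```

In short, a bare function name is just a reference to the function object; the parentheses are what actually invoke it.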
Imagine a microservices architecture using EF Core code first with migrations.
A kubernetes init container will migrate the SQL Server database before running the actual application container.
But sometimes we rollback to a previous version of the application.
This means the database is in an invalid and possibly incompatible state.
I would like to migrate the database back to the most recent migration present in the code base of that version, but the `down` migration won't be known.
Is it possible at all to facilitate this? I guess the database itself needs knowledge about the way to migrate back, but is it possible?
I did some research but it seems to me this scenario is unsupported at the time of writing. |
Is it possible to downgrade an Entity Framework Core code-first migration without the migration present in the code base? |
For my model evaluation, I am using cosine similarity with the code below:
```python
cosine_similarity = (np.dot(np.array(v1), np.array(v2))) / (norm(np.array(v1)) * norm(np.array(v2)))
```
But I am getting the error below; any help here?
```
<scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x000001E703C665A0>
<scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x000001E701D1EEA0>
Traceback (most recent call last):
  File "c:\Users\91888\OneDrive\Desktop\SkillMatch\Doc2Vec_Test_Resume.py", line 102, in <module>
    print((norm(np.array(v1)) * norm(np.array(v2))))
          ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for *: 'rv_continuous_frozen' and 'rv_continuous_frozen'
```
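The traceback suggests that `norm` here is `scipy.stats.norm` (a frozen distribution object), not a vector-norm function. A minimal sketch of the likely fix, assuming `v1` and `v2` are plain numeric vectors (the sample values below are hypothetical), is to import `norm` from `numpy.linalg` instead:

```python
import numpy as np
from numpy.linalg import norm  # vector norm, not scipy.stats.norm

v1 = [1.0, 2.0, 3.0]  # hypothetical embedding vectors
v2 = [1.0, 2.0, 3.0]

cosine_similarity = np.dot(np.array(v1), np.array(v2)) / (norm(np.array(v1)) * norm(np.array(v2)))
print(float(cosine_similarity))  # identical vectors give a similarity of 1.0
```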
For my model evaluation, I am using cosine similarity |
Model evaluation with cosine similarity TypeError: unsupported operand type(s) for *: 'rv_continuous_frozen' and 'rv_continuous_frozen' |
|python|python-pdfreader| |
As the title suggests.
If the hostname cannot be determined by Java, then the IP address is sent to Eureka. The only explicit way of setting the hostname is by setting the eureka.instance.hostname property. You can set your hostname at run time by using an environment variable — for example, eureka.instance.hostname=${HOST_NAME}.
The above paragraph is from the Spring Cloud documentation page. What does it mean, how do I set it up, and where does it pull the value from?
To add context, I am trying to learn how to set up microservices. Eureka takes my laptop name, which has an underscore in it, and determines that to be an illegal character.
I keep seeing in Spring configuration, in application.properties or yml, that we have ${something}. What is this, and how can I set it up? Beginner here |
|java|spring|spring-boot| |
Using VS Code on Ubuntu (kernel 6.5.0-17-generic), and it fails to compile the program with the error below:
```
Starting build...
/usr/bin/gcc -fdiagnostics-color=always -g program.c -o program -lX11 '`imlib2-config --cflags`'
/usr/bin/ld: cannot find `imlib2-config --cflags`: No such file or directory
collect2: error: ld returned 1 exit status
```
My **tasks.json args:**
```
"args": [
"-fdiagnostics-color=always",
"-g",
"${file}",
"-o",
"${fileDirname}/${fileBasenameNoExtension}",
"-lX11","imlib2-config --cflags"
]
```
I tried to "break the IMLIB2-CONFIG flag into two sections "`imlib2-config ", "--cflags`"
still failing with Error /usr/bin/gcc -fdiagnostics-color=always -g program.c -o program -lX11 `imlib2-config ' --cflags`'
**/bin/sh: 1: Syntax error: Unterminated quoted string**
When running the same compile command from the command line as in
$ gcc -g ./program.c -o ./program -lX11 `imlib2-config --cflags` `imlib2-config --libs`
I'm expecting that compilation is successful and my "program" runs as expected |
Apply a common criteria to all Spring JPA repository methods |
|java|spring|spring-data-jpa| |
{"Voters":[{"Id":1974224,"DisplayName":"Cristik"},{"Id":17562044,"DisplayName":"Sunderam Dubey"},{"Id":6463558,"DisplayName":"Lin Du"}]} |
{"Voters":[{"Id":1974224,"DisplayName":"Cristik"},{"Id":17562044,"DisplayName":"Sunderam Dubey"},{"Id":6463558,"DisplayName":"Lin Du"}],"SiteSpecificCloseReasonIds":[]} |
I made a website (studytapa.com) using WordPress and the Astra theme, and set up SEO & meta tags.
When I send my website's link in an Instagram DM, I can't see a preview image for my website.
[Instagram DM](https://i.stack.imgur.com/ZzGsD.jpg)
I have already set the og:image and og:image:secure_url meta tags as well.
Does anyone know how to fix this?
Any answer helps me. Thank you!
|
Instagram DM doesn't show wordpress website's image |
|wordpress|image|instagram|preview|dm| |
null |
> Is there a way to figure out how I generated it?
I copy-pasted into an online text-to-hex converter and identified that what you have there is [U+00AD 'SOFT HYPHEN'](https://en.m.wikipedia.org/wiki/Soft_hyphen). To type it, by default:
* on Windows: hold <kbd>ALT</kbd>, type <kbd>+</kbd><kbd>A</kbd><kbd>D</kbd>, then release <kbd>ALT</kbd>; and
* on Linux: hold <kbd>CTRL</kbd> and <kbd>SHIFT</kbd>, type
<kbd>U</kbd><kbd>A</kbd><kbd>D</kbd>, then release <kbd>CTRL</kbd>
and <kbd>SHIFT</kbd>.
> How can I avoid, find, and fix this problem if it happens in a real-world scenario?
I'd recommend the [Gremlins tracker](https://marketplace.visualstudio.com/items?itemName=nhoizey.gremlins) extension.
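For a real-world scenario, a short script can also find and strip the character programmatically; a minimal Python sketch (the sample string is made up):

```python
SOFT_HYPHEN = "\u00ad"  # U+00AD SOFT HYPHEN

text = "co" + SOFT_HYPHEN + "operate"  # hypothetical input containing the character

# find: index of every soft hyphen in the text
positions = [i for i, ch in enumerate(text) if ch == SOFT_HYPHEN]
print(positions)   # [2]

# fix: remove them all
cleaned = text.replace(SOFT_HYPHEN, "")
print(cleaned)     # cooperate
```

The same approach generalizes to any invisible character once you know its code point.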
|
{"Voters":[{"Id":3418066,"DisplayName":"Paulw11"},{"Id":3306020,"DisplayName":"Magnas"},{"Id":6463558,"DisplayName":"Lin Du"}]} |
|scikit-learn|decision-tree| |
Do you mean:
```java
public class Main {
public static void main(String[] args) {
// your array
int[] arr = {1,2,3,4,5};
// loop to print the array backwards
for(int i = arr.length - 1; i >= 0; i--){
System.out.print(arr[i] + " ");
}
}
}
``` |
I just need to make sure the **tablename** in `query.tablename` matches the **tablename** in `export const tablename = pgTable` |
|flutter|error-handling|flutter-dependencies| |
I'm trying to create a query: searching stats of persons by phone number.
I started using `INNER JOIN` in the internal query, and now I don't understand which type I must return.
I tried to use a view as the type, but views can't contain attributes with the same name, so I trimmed it down to 3 columns.
But I still get an error and don't understand what it means:
```
ERROR: structure of query does not match function result type
DETAIL: Number of returned columns (3) does not match expected column count (2).
CONTEXT: PL/pgSQL function get_fio_by_phone_number(bigint) line 3 at RETURN QUERY
SQL state: 42804
```
Code:
```sql
CREATE OR REPLACE VIEW persons_fios_and_telephons AS
SELECT p.name, p.surname, pt.number
FROM person p
INNER JOIN persons_telefons pt on p.id = pt.person_id;
DROP FUNCTION get_fio_by_phone_number(bigint);
CREATE OR REPLACE FUNCTION get_fio_by_phone_number(user_phone BIGINT) RETURNS SETOF persons_and_telephons
AS $$
BEGIN
RETURN QUERY
SELECT *
FROM persons_fios_and_telephons pat
WHERE pat.number = get_fio_by_phone_number.user_phone;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE EXCEPTION 'There is no person with number "%".', user_phone;
END;
$$ LANGUAGE plpgsql;
SELECT * FROM get_fio_by_phone_number(89998887766)
```
Is there any way to write the function without crude attribute enumeration? Thanks! |
|sql|database|postgresql|plpgsql|dbfunctions| |
I'm currently implementing a booking solution that lets users send booking requests from their phone, with the vendor receiving them in their cash desk software.
I have setup (highlighting only the necessary components) DynamoDB as a database, a websocket connection on API Gateway and a lambda connected both to the API Gateway API and DynamoDB streams.
Given these two scenarios:
User updates a booking -> dynamo entity gets updated -> stream sends it to lambda -> lambda sends it to WS
Vendor updates booking -> dynamo entity gets updated
The problem I'm facing is, of course, that the vendor's cash desk software will receive the update event even when the vendor generated it himself, which he wouldn't need since he was the one who sent the update.
Is this something I should prevent happening?
Adding a 'synced' flag would remove this issue, but it would imply an additional save operation every time I send the order to the cash desk software:
User updates -> save entity with 'synced' false -> send message to cash desk software -> save entity with 'synced' true
I'm not an expert in this kind of architecture, and I can't see what issues I might face in the long run.
|
Should I prevent an update event to return to the sender by adding more save operations? |
|events|websocket|amazon-dynamodb-streams| |
I followed the instructions from the Apple website (https://developer.apple.com/metal/pytorch/), and when I verified MPS support with its Python script, it just gave me back something I do not understand. (It's too long; partially listed below.) I wish I could use GPU acceleration for Stable Diffusion. My MacBook has a Radeon Pro 555 with Ventura OS. Help please :(
```
Python 3.11.1 (v3.11.1:a7a450f84a, Dec 6 2022, 15:24:06) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> if torch.backends.mps.is_available():
... mps_device = torch.device("mps")
... x = torch.ones(1, device=mps_device)
... print (x)
... else:
... print ("MPS device not found.")
...
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/_tensor.py", line 461, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/_tensor_str.py", line 677, in _str
return _str_intern(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/_tensor_str.py", line 597, in _str_intern
tensor_str = _tensor_str(self, indent)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/_tensor_str.py", line 349, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/_tensor_str.py", line 137, in __init__
nonzero_finite_vals = torch.masked_select(
^^^^^^^^^^^^^^^^^^^^
RuntimeError: Failed to create indexing library, error: Error Domain=MTLLibraryErrorDomain Code=3 "program_source:168:1: error: type 'const constant ulong3 *' is not valid for attribute 'buffer'
REGISTER_INDEX_OP_ALL_DTYPES(select);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:160:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(8bit, idx64, char, INDEX_OP_TYPE, ulong3); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:138:5: note: expanded from macro 'REGISTER_INDEX_OP'
constant IDX_DTYPE * offsets [[buffer(3)]], \
^ ~~~~~~~~~
program_source:168:1: note: type 'ulong3' (vector of 3 'unsigned long' values) cannot be used in buffer pointee type
program_source:160:59: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(8bit, idx64, char, INDEX_OP_TYPE, ulong3); \
^
program_source:168:1: error: explicit instantiation of 'index_select' does not refer to a function template, variable template, member function, member class, or static data member
REGISTER_INDEX_OP_ALL_DTYPES(select);
^
program_source:160:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(8bit, idx64, char, INDEX_OP_TYPE, ulong3); \
^
program_source:134:13: note: expanded from macro 'REGISTER_INDEX_OP'
kernel void index_ ## INDEX_OP_TYPE<DTYPE, IDX_DTYPE>( \
^
<scratch space>:9:1: note: expanded from here
index_select
^
program_source:20:13: note: candidate template ignored: substitution failure [with T = char, OffsetsT = unsigned long __attribute__((ext_vector_type(3)))]: type 'unsigned long const constant * __attribute__((ext_vector_type(3)))' is not valid for attribute 'buffer'
kernel void index_select(
^
program_source:168:1: error: type 'const constant ulong3 *' is not valid for attribute 'buffer'
REGISTER_INDEX_OP_ALL_DTYPES(select);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:162:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(16bit, idx64, short, INDEX_OP_TYPE, ulong3); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:138:5: note: expanded from macro 'REGISTER_INDEX_OP'
constant IDX_DTYPE * offsets [[buffer(3)]], \
^ ~~~~~~~~~
program_source:168:1: note: type 'ulong3' (vector of 3 'unsigned long' values) cannot be used in buffer pointee type
program_source:162:59: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(16bit, idx64, short, INDEX_OP_TYPE, ulong3); \
^
program_source:168:1: error: explicit instantiation of 'index_select' does not refer to a function template, variable template, member function, member class, or static data member
REGISTER_INDEX_OP_ALL_DTYPES(select);
^
program_source:162:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(16bit, idx64, short, INDEX_OP_TYPE, ulong3); \
^
program_source:134:13: note: expanded from macro 'REGISTER_INDEX_OP'
kernel void index_ ## INDEX_OP_TYPE<DTYPE, IDX_DTYPE>( \
^
<scratch space>:17:1: note: expanded from here
index_select
^
....
...
program_source:248:13: note: candidate template ignored: substitution failure [with T = metal::_atomic<int, void>, E = int, OffsetsT = unsigned long __attribute__((ext_vector_type(3)))]: type 'unsigned long const constant * __attribute__((ext_vector_type(3)))' is not valid for attribute 'buffer'
kernel void index_put_accumulate_native_dtypes(
^
}
>>>
>>>
```
I downgraded Python from 3.12.1 to 3.11.1 and reinstalled the latest version of PyTorch nightly, still no luck with the result. |
[NOT WORKING?] Accelerated PyTorch for MacBook with AMD GPUs |
|python-3.x|pytorch|amd|macos-ventura| |
[Asciidoc][1] supports [callouts][2]. How can one write similar callouts using [reStructuredText][3]?
[1]: https://asciidoc.org
[2]: https://docs.asciidoctor.org/asciidoc/latest/verbatim/callouts/
[3]: https://docutils.sourceforge.io/rst.html |
Output Formatting Issue in Python Code for Employee Hierarchy |
|python|data-structures|formatting|hierarchy|output-formatting| |
I need a script to copy a set of file types (but only the most recent backup file!) from a backup structure and delete older existing files in the destination.
This is what I have, but I am stuck on the last two commands (the del and the copy command):
```powershell
# Define source and destination
$Source = "D:\Fullbackups"
$Destination = "X:"
# Array to filter types
$Filetype = @("*.vbk", "*.vbm", "*.bco")
# Loop through each subfolder
foreach ($Folder in Get-ChildItem $Source -Directory) {
    # Get latest file per file type
    foreach ($fType in $Filetype) {
        # Delete existing $fType in the destination, so older backups do not waste space
        del $Destination\$Folder\*.$fType
        Get-ChildItem -Path $Folder.FullName -Filter $fType | Where-Object { -not $_.PSIsContainer } | Sort-Object -Property $_.CreationTime | Select-Object -Last 1 | Copy-Item -Destination (Join-Path $Destination\$Folder $_)
    }
}
```
|
This `Too Many Redirects` error usually occurs when there is a redirection loop. In your `middleware.js` code, if a user has the "admin" role and they try to access the `/admin` page, you are redirecting them back to the `/admin` page, which causes an infinite loop.
```js
if (session !== null) {
  if (userRole === "admin" && nextUrl.pathname === "/admin") {
    return NextResponse.redirect(`${process.env.NEXTAUTH_URL}/admin`); // this creates a loop
  }
}
```
I adjusted your middleware to allow admin users to access subpages under `/admin` without being redirected:
```js
export async function middleware(request) {
  const session = await getToken({
    req: request,
    secret: process.env.NEXTAUTH_SECRET,
  });

  // Check if the user has a session and extract the user role if present
  const userRole = session?.user?.role.toLowerCase();
  const { nextUrl } = request;

  // If the user has a session
  if (session !== null) {
    // If the user is an admin
    if (userRole === "admin") {
      // If the user is trying to access the /admin page or any subpage under /admin, let the request proceed
      if (nextUrl.pathname.startsWith("/admin")) {
        return NextResponse.next(); // This allows the request to proceed without redirection
      }
    } else {
      // Logic for other roles (if any), or you can redirect non-admins away from /admin pages
      if (nextUrl.pathname.startsWith("/admin")) {
        return NextResponse.redirect(`${process.env.NEXTAUTH_URL}/login`);
      }
    }
  } else {
    // If there's no session and the user is trying to access /admin, redirect them to the login page
    if (nextUrl.pathname.startsWith("/admin")) {
      return NextResponse.redirect(`${process.env.NEXTAUTH_URL}/login`);
    }
  }
}

export const config = {
  matcher: ["/admin/:path*", "/donor/:path*", "/agent/:path*"],
};
```
|
Given the SQL below, I would expect that only the fully aggregated result (for each hour) is written to the MySQL db table.
However, a separate row is created for each intermediate aggregation result,
i.e.
```
|eventDate|userId|responseTotalTime|
|2020-08-21 01:00:00|1|10
|2020-08-21 01:00:00|1|20
|2020-08-21 01:00:00|1|30
....
```
I'm only interested in the latest (most recent) row.
Shouldn't a Flink window emit only one final result once it's closed?
Thanks in advance,
```
CREATE TABLE test_source
(
`eventDate` TIMESTAMP_LTZ(3),
`userId` STRING,
`responseTime` INT,
WATERMARK FOR eventDate AS eventDate - INTERVAL '1' MINUTE
) WITH (
'connector' = 'kafka',
...
);
CREATE TABLE IF NOT EXISTS test_sync
(
`eventDate` STRING,
`userId` STRING,
`responseTotalTime` INT,
PRIMARY KEY(`eventDate`, `userId`) NOT ENFORCED
) WITH (
'connector' = 'jdbc',
...
);
INSERT INTO test_sync
SELECT
CAST(window_start AS VARCHAR) AS eventDate,
userId,
SUM(responseTime) as responseTotalTime
FROM TABLE(TUMBLE(TABLE test_source, DESCRIPTOR(eventDate), INTERVAL '1' HOUR)) PA
GROUP BY window_start, userId;
```
**p.s.** After some further investigation and testing, I can conclude the following:
1. If we **GROUP BY window_start** only, then from Flink's perspective this is a typical Group Aggregation function: watermarks are not really working, updates are immediate, and the window is never closed. To avoid memory issues we need to set the ```table.exec.state.ttl``` configuration.
2. If we **GROUP BY window_start, window_end**, then Window Aggregation is used: watermarks work as expected and the window content is emitted only when the window is closed.
However, this is not explicitly mentioned anywhere in the Flink documentation. I wish it were better documented.
I still doubt: is this the expected behavior (for option 1) or a possible defect in Flink?
|
You can use Ruby's `sleep` function in your index action to cause a delay, such as
```
sleep(2)
```
where 2 is the number of seconds.
|
How can I determine the type of a calculated field in a query I'm doing in a Kaggle notebook? |
```java
@Getter
@Setter
@ToString
@AllArgsConstructor
@NoArgsConstructor
@Data
public class ServiceRuleMgmtRequest implements Serializable {
    @NotNull
    private Integer costShareMapID;
    private Integer costShareAdminID;
    private Integer plCostShareID;
    private Integer serviceRuleID;
    private List<String> serviceRuleMgmtArray;
    private String serviceRule;
    private String serviceType;
    private String description;
    private String serviceCalMethod;
    private Double serviceAgeMin;
    private Double serviceAgeMax;
    private String serviceGender;
}
```
I am using the model above to set the data with session.setAttribute, but I am getting a security error, a trust boundary violation.
```java
@ApiOperation(value = "Get existing Service Rule Management Details")
@PostMapping("/setDeductibleID")
public ResponseEntity<Map<String, String>> setDeductibleID(HttpServletRequest request, @Valid @RequestBody ServiceRuleMgmtRequest serviceRuleMgmtRequest) {
    if (serviceRuleMgmtRequest == null) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(Collections.singletonMap("serviceRuleMsg", "Invalid ServiceRuleMgmtRequest"));
    }
    HttpSession session = request.getSession();
    session.setAttribute("serviceRuleMgmtRequest", serviceRuleMgmtRequest); // getting the trust boundary violation here
    return ResponseEntity.status(HttpStatus.OK).body(Collections.singletonMap("serviceRuleMsg", "SUCCESS"));
}
```
I have tried using `@Valid`, but still could not solve the trust boundary violation. I also tried `Encode.forJava`, but it takes a string and we have a class. I also tried using `Encode.forJava` in the model's setter, parsing the value into an integer and back to a string, but still could not solve it. |
How to fix a Trust Boundary Violation with a model request body |
|java|spring-boot|security| |
This part of the implementation should go like this, which I think will guarantee your conditions.
```swift
import SwiftUI
struct ContentView: View {
@State var title = "Title text"
@State var subTitle = "Subtitle text"
var isSubtitleLarger: Bool {
subTitle.count > title.count
}
var body: some View {
VStack(alignment: .leading) {
Text(title)
.lineLimit(isSubtitleLarger ? 1 : 2)
Text(subTitle)
.lineLimit(isSubtitleLarger ? 2 : 1)
}
.frame(minHeight: 150) // Add minimum height to meet minimum size, so that the view doesn't shrink when texts are small
}
}
```
This is important to guarantee there are always **3 lines at max**. |
In my Worker application, I need to log all outgoing requests and incoming responses. I found that there is an `IHttpClientAsyncLogger` that can add logging functionality to the desired HttpClient. So, I implemented the `IHttpClientAsyncLogger` interface like this:
```cs
public class HttpClientLogger : IHttpClientAsyncLogger
{
    private readonly ILogger<HttpClientLogger> _logger;

    public HttpClientLogger(ILogger<HttpClientLogger> logger)
    {
        _logger = logger;
        _logger.LogInformation("httpclient logger initiated");
    }

    public object? LogRequestStart(HttpRequestMessage request)
    {
        _logger.LogInformation("My logger hit on : {function}", nameof(LogRequestStart));
        return request;
    }

    public void LogRequestStop(object? context, HttpRequestMessage request, HttpResponseMessage response, TimeSpan elapsed)
    {
        _logger.LogInformation("My logger hit on : {function}", nameof(LogRequestStop));
    }

    public void LogRequestFailed(object? context, HttpRequestMessage request, HttpResponseMessage? response, Exception exception,
        TimeSpan elapsed)
    {
        _logger.LogInformation("My logger hit on : {function}", nameof(LogRequestFailed));
    }

    public ValueTask<object?> LogRequestStartAsync(HttpRequestMessage request,
        CancellationToken cancellationToken = new CancellationToken())
    {
        _logger.LogInformation("My logger hit on : {function}", nameof(LogRequestStartAsync));
        return ValueTask.FromResult<object?>(request);
    }

    public ValueTask LogRequestStopAsync(object? context, HttpRequestMessage request, HttpResponseMessage response,
        TimeSpan elapsed, CancellationToken cancellationToken = new CancellationToken())
    {
        _logger.LogInformation("My logger hit on : {function}", nameof(LogRequestStopAsync));
        return ValueTask.CompletedTask;
    }

    public ValueTask LogRequestFailedAsync(object? context, HttpRequestMessage request, HttpResponseMessage? response,
        Exception exception, TimeSpan elapsed, CancellationToken cancellationToken = new CancellationToken())
    {
        _logger.LogInformation("My logger hit on : {function}", nameof(LogRequestFailedAsync));
        return ValueTask.CompletedTask;
    }
}
```
and then register it as below in my service collection in program.cs:
```cs
Host.CreateDefaultBuilder(args)
    .ConfigureLogging((context, loggingBuilder) =>
    {
        var logger = new LoggerConfiguration()
            .ReadFrom.Configuration(context.Configuration).CreateLogger();
        loggingBuilder.AddSerilog(logger);
    })
    .ConfigureServices(services =>
    {
        services.AddHttpClient("PartoHttpClient", client =>
            {
                client.Timeout = TimeSpan.FromSeconds(150);
            }).RemoveAllLoggers()
            .AddLogger<HttpClientLogger>().AddPolicyHandler(GetRetryPolicy());
    });
```
but when my client composes the HTTP request and receives the response, none of the logging methods is called.
How should I use the HTTP client logging infrastructure?
**Update:**
I checked the "System.Net.Http.HttpClient" logging filter and also tried implementing `IHttpClientLogger`, but there was no change in the final result.
Also, it is useful to mention that, according to [this][1] documentation, the sync methods are called by the sync `Send` method of the HttpMessageHandler and the async methods are called by the `SendAsync` method.
[1]: https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.http.logging.ihttpclientasynclogger?view=dotnet-plat-ext-8.0#remarks |
How to write callouts using reStructuredText? |
You can add an inline if to determine if the cell is `primary`
```python
for linecount in range(24):
for i in range(24):
line.append([
queue,
"white" if (linecount // 3) % 2 == (i // 3) % 2 else "black",
"a" + str(i//3),
"primary" if (linecount%3 == 1) and (i%3 == 1) else "secondary",
"none"])
``` |
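A quick way to sanity-check the two inline conditions (with a placeholder `queue` value, since the original isn't shown) is to count the `"primary"` cells, one per 3×3 block:

```python
queue = "q0"  # placeholder for whatever value queue held in the original code
line = []
for linecount in range(6):  # a smaller 6x6 grid, same block logic
    for i in range(6):
        line.append([
            queue,
            # alternate white/black per 3x3 block
            "white" if (linecount // 3) % 2 == (i // 3) % 2 else "black",
            "a" + str(i // 3),
            # the centre cell of each 3x3 block is "primary"
            "primary" if (linecount % 3 == 1) and (i % 3 == 1) else "secondary",
            "none"])

# a 6x6 grid holds four 3x3 blocks, so four primary cells
primaries = [c for c in line if c[3] == "primary"]
print(len(primaries))  # 4
```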
How to create table of content parser in JavaScript from markdown? |
I have an NPC class with a method called `chase` that is referenced inside the `update` function. When `chase` is called, it says it's not a function or it's not defined. I have another method being referenced in the same way that works.

chase method.

`chase` is called at `npc_spr.chase();`; this creates an error, while `bullet_spr.updateMe()` above it works. `updateMe` is also a method.
What I expected was for the method to be called like the one above. I have tried many different ways of rewriting it, but everything results in a "not defined" or "not a function" error. |
|phaser-framework|phaserjs| |
```c
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int c;
    printf("Replace tabs, backspaces and backslashes with escape sequences while typing text\n");
    printf("Press Enter to stop:\n");

    // Switch the terminal to non-canonical mode with echo disabled
    struct termios oldt, newt;
    tcgetattr(STDIN_FILENO, &oldt);
    newt = oldt;
    newt.c_lflag &= ~(ICANON | ECHO);
    tcsetattr(STDIN_FILENO, TCSANOW, &newt);

    while ((c = getchar()) != EOF && (c != '\n'))
    {
        switch (c)
        {
        case '\t':
            printf("\\t");
            break;
        case 127: // ASCII value for backspace (DEL)
            printf("\\b");
            break;
        case '\\':
            printf("\\\\");
            break;
        default:
            putchar(c);
        }
    }
    printf("\nThank you for trying it out!\n");

    // Restore the original terminal settings
    tcsetattr(STDIN_FILENO, TCSANOW, &oldt);
    return 0;
}
```
|
REGISTER_INDEX_OP_ALL_DTYPES(select);
^
program_source:160:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(8bit, idx64, char, INDEX_OP_TYPE, ulong3); \
^
program_source:134:13: note: expanded from macro 'REGISTER_INDEX_OP'
kernel void index_ ## INDEX_OP_TYPE<DTYPE, IDX_DTYPE>( \
^
<scratch space>:9:1: note: expanded from here
index_select
^
program_source:20:13: note: candidate template ignored: substitution failure [with T = char, OffsetsT = unsigned long __attribute__((ext_vector_type(3)))]: type 'unsigned long const constant * __attribute__((ext_vector_type(3)))' is not valid for attribute 'buffer'
kernel void index_select(
^
program_source:168:1: error: type 'const constant ulong3 *' is not valid for attribute 'buffer'
REGISTER_INDEX_OP_ALL_DTYPES(select);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:162:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(16bit, idx64, short, INDEX_OP_TYPE, ulong3); \
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
program_source:138:5: note: expanded from macro 'REGISTER_INDEX_OP'
constant IDX_DTYPE * offsets [[buffer(3)]], \
^ ~~~~~~~~~
program_source:168:1: note: type 'ulong3' (vector of 3 'unsigned long' values) cannot be used in buffer pointee type
program_source:162:59: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(16bit, idx64, short, INDEX_OP_TYPE, ulong3); \
^
program_source:168:1: error: explicit instantiation of 'index_select' does not refer to a function template, variable template, member function, member class, or static data member
REGISTER_INDEX_OP_ALL_DTYPES(select);
^
program_source:162:5: note: expanded from macro 'REGISTER_INDEX_OP_ALL_DTYPES'
REGISTER_INDEX_OP(16bit, idx64, short, INDEX_OP_TYPE, ulong3); \
^
program_source:134:13: note: expanded from macro 'REGISTER_INDEX_OP'
kernel void index_ ## INDEX_OP_TYPE<DTYPE, IDX_DTYPE>( \
^
<scratch space>:17:1: note: expanded from here
index_select
^
....
...
program_source:248:13: note: candidate template ignored: substitution failure [with T = metal::_atomic<int, void>, E = int, OffsetsT = unsigned long __attribute__((ext_vector_type(3)))]: type 'unsigned long const constant * __attribute__((ext_vector_type(3)))' is not valid for attribute 'buffer'
kernel void index_put_accumulate_native_dtypes(
^
}
>>>
>>>
```
I downgraded Python from 3.12.1 to 3.11.1 and reinstalled the latest PyTorch nightly, but still no luck with the result. |
I ran into the same problem. You just need to install the python3-dev package.
|
There were a couple things I had to change in order to get this to work properly for me (for HTML-specific files):
1. Set `HTML > Format: Wrap Line Length` to 0. Setting it to 0 disables wrapping entirely, no matter how many characters a line contains. Alternatively, set it to the maximum line length you wish to enforce.
2. Set `HTML > Format: Wrap Attributes` to `preserve-aligned`. |
I have a custom environment built for Stable Baselines 3 where the environment is a digital twin of a fermentation reaction. It observes the enzyme activity (the output of the fermentation) and takes a binary action on substrate addition, i.e. whether or not substrate should be added at a given timestep.
The reward function observes the slope of the enzyme activity at each timestep: the higher the slope, the larger the reward (the slope never goes below 0). During training I also add a bonus at the end of each experiment based on how close the highest enzyme activity came to the target value of 4.5 U/L; the closer the final enzyme activity is to the target, the larger the bonus, which is why the difference is inverted. 10000 and 10 are scaling factors.
reward = slope * 10000
reward_at_end_of_episode = 4.5 - max_enzyme_activity
invert_end_of_episode_reward = (1/reward_at_end_of_episode) * 10
reward = reward + invert_end_of_episode_reward
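Put together, the shaping above can be written out as a small sketch (the function signature and variable names are assumptions based on the description, not the actual training code):

```python
def episode_reward(slopes, max_enzyme_activity, target=4.5):
    """Per-step slope rewards plus a terminal bonus that grows as the
    peak enzyme activity approaches the target, as described above."""
    step_rewards = [s * 10000 for s in slopes]   # slope is never below 0
    gap = target - max_enzyme_activity           # smaller gap -> larger bonus
    terminal_bonus = (1 / gap) * 10              # inverted difference, scaled
    return sum(step_rewards) + terminal_bonus

# e.g. a run with two small slopes that peaked at 3.5 U/L
print(episode_reward([0.001, 0.002], 3.5))
```

Note that, written this way, the terminal bonus diverges as the peak approaches 4.5 exactly (division by a vanishing gap), which may be worth keeping in mind when debugging the reward scale.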
While training with PPO, this is how my agent performed.
[![X-axis is timesteps and Y axis is reward][1]][1]
[1]: https://i.stack.imgur.com/lhqmS.png
The agent was exploring many ways of adding substrate that gave enzyme activity higher than expected, but it was not able to converge on them. Instead it converged on another set of actions (add every hour) that gives lower enzyme activity.
I am not able to understand why it behaves like this, and I am not sure how to approach solving it.
Is there a problem with my reward function? What changes can I make to debug this issue? |
I have a JSON object and would like to update, for example, the array element "role\[test09\]" to "role\[hello\]", but I don't have its index number. I have tried a few things but can't figure it out.
How can I do this with jq ?
This is my JSON object.
```
{
"run_list": [
"role[test01]",
"role[test09]",
"role[test05]"
]
}
```
The updated object should look like this
```
{
"run_list": [
"role[test01]",
"role[hello]",
"role[test05]"
]
}
``` |
Update array element of a JSON array with jq |
|arrays|json|jq| |
null |
Which datepicker you are using doesn't make any difference, since the functionality you are looking for is based on RxJS and Angular's `ReactiveFormsModule`. You can use `formGroup.valueChanges` with the RxJS `pairwise` operator.
HTML
----
<form [formGroup]="dateForm">
<mat-form-field style="margin-right: 10px;">
<input matInput [matDatepicker]="picker" formControlName="startdate" placeholder="Start date"/>
<mat-datepicker-toggle matSuffix [for]="picker"></mat-datepicker-toggle>
<mat-datepicker #picker></mat-datepicker>
</mat-form-field>
<mat-form-field>
<input matInput [matDatepicker]="picker2" formControlName="enddate" placeholder="End date"/>
<mat-datepicker-toggle matSuffix [for]="picker2"></mat-datepicker-toggle>
<mat-datepicker #picker2></mat-datepicker>
</mat-form-field>
</form>
TS
----
dateForm: FormGroup;
constructor(private formBuilder: FormBuilder) {
this.dateForm = this.formBuilder.group({
startdate: [null, Validators.required],
enddate: [null, Validators.required],
});
}
ngOnInit(): void {
this.dateForm.valueChanges.pipe(
startWith({ oldValue: null, newValue: null }),
distinctUntilChanged(),
pairwise(),
map(([oldValue, newValue]) => { return { oldValue, newValue } })
)
.subscribe({
next: (val) => {
console.log(val)
// implement condition and set controls accordingly using
// this.dateForm.controls.startdate.setValue(whatever date)
// this.dateForm.controls.enddate.setValue(whatever date)
}
});
}
[Here is the stackblitz][1]
[1]: https://stackblitz.com/edit/angular-ivy-pktoy7?file=src%2Fapp%2Fapp.component.html,src%2Fapp%2Fapp.component.ts,src%2Fmain.ts
Keep in mind that this is just a basic example explaining how you can do this; it is not complete. To begin with, you should add some types here, to the `formControls` and to the RxJS pipeline, and you should destroy the subscription when it is no longer needed, etc.
Additionally, I just took an existing project and implemented this. Although this particular example would be practically identical in the latest versions of Angular, the StackBlitz example uses an old version of Angular, so it is not module-less; you will be able to optimize some of this if you use Angular 16-17. |
>Have also tried filtering on the label/subject system property by passing 'subject' but can't get that to work either. Anyone know where I'm going wrong?
I tried the above code and had the same issue: my messages were sent by the code, but they were reaching the dead-letter queue because the subscription's filter was not satisfied.
- I have enabled the session ID on my subscription, as shown below.

***Code:***
```js
const { ServiceBusClient } = require("@azure/service-bus");
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;
app.get('/send-message', async (req, res) => {
let serviceBusClient;
let sender;
try {
const connectionString = "Endpoint=sb://your-servicebusnamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=svbfe";
const topicName = "give-ur-topic-name";
serviceBusClient = new ServiceBusClient(connectionString);
sender = serviceBusClient.createSender(topicName);
// Create a message
const messageBody = {
id: 1,
eventType: "jobCosts",
// Add other message properties if needed
};
const message = {
body: JSON.stringify(messageBody),
// Add other message properties if needed
// For correlation filter, you can set the properties here
correlationId: "12345", // Optional, you can set this to any unique identifier
sessionId: "107", // Set the session ID here
};
console.log("Sending message:", messageBody);
// Send the message
await sender.sendMessages(message);
console.log("Message sent successfully");
res.send("Message sent successfully");
} catch (error) {
console.error("Error occurred:", error);
res.status(500).send("Error occurred while sending message");
} finally {
try {
if (sender) {
await sender.close();
}
if (serviceBusClient) {
await serviceBusClient.close();
}
} catch (error) {
console.error("Error while closing sender or service bus client:", error);
}
}
});
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}`);
});
```

- ***Below are my two subscriptions, one session-enabled and the other disabled; you can see an active message count of 14 for the session-enabled subscription.***

**Received messages:**
 |
I have been implementing feature-flag management in a .NET 6 solution, and I came across the `FeatureGate` attribute, which can be used to guard an API controller or an API endpoint.
I was wondering:
1. Does the FeatureGate attribute use reflection in any sense?
2. If yes, does it cause any overhead or slow down the application?
I have been using the FeatureGate attribute, but someone pointed out that it uses reflection internally and can cause overhead. |
Does the FeatureGate attribute cause overhead? |
|c#|.net-core|feature-flags| |
Ubuntu 20.04
Python 3.8
I'm using a Python file (not written by me) with a U-Net and custom loss functions. The code was written for tensorflow==2.13.0, but my GPU cluster only has tensorflow==2.2.0 (or lower), and the available code isn't compatible with that version.
Specifically, the `if` statement in `update_state` is the problem. Can somebody help me rewrite this? I'm not experienced with TF.
class Distance(tf.keras.metrics.Metric):
def __init__(self, name='DistanceMetric', distance='cm', sigma=2.5, data_size=None,
validation_size=None, points=None, point=None, percentile=None):
super(Distance, self).__init__(name=name)
self.counter = tf.Variable(initial_value=0, dtype=tf.int32)
self.distance = distance
self.sigma = sigma
self.percentile = percentile
if percentile is not None and point is not None:
assert (type(percentile) == float)
self.percentile_idx = tf.Variable(tf.cast(tf.round(percentile * validation_size), dtype=tf.int32))
else:
self.percentile_idx = None
self.point = point
self.points = points
self.cache = tf.Variable(initial_value=tf.zeros([validation_size, points]),
shape=[validation_size, points])
self.val_size = validation_size
def update_state(self, y_true, y_pred, sample_weight=None):
n, h, w, p = tf.shape(y_pred)[0], tf.shape(y_pred)[1], tf.shape(y_pred)[2], tf.shape(y_pred)[3]
y_true = normal_distribution(self.sigma, y_true[:, :, 0], y_true[:, :, 1], h=h, w=w, n=n, p=p)
if self.distance == 'cm':
x1, y1 = cm(y_true, h=h, w=w, n=n, p=p)
x2, y2 = cm(y_pred, h=h, w=w, n=n, p=p)
d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
d = d[:, :, 0]
elif self.distance == 'argmax':
d = (tf.cast(tf.reduce_sum(((argmax_2d(y_true) - argmax_2d(y_pred)) ** 2), axis=1),
dtype=tf.float32)) ** 0.5
temp = tf.minimum(self.counter + n, self.val_size)
if self.counter <= self.val_size:
self.cache[self.counter:temp, :].assign(d[0:(temp-self.counter), :])
self.counter.assign(self.counter + n)
def result(self):
if self.percentile_idx is not None:
temp = tf.sort(self.cache[:self.val_size, self.point], axis=0, direction='ASCENDING')
return temp[self.percentile_idx]
elif self.point is not None:
return tf.reduce_mean(self.cache[:, self.point], axis=0)
else:
return tf.reduce_mean(self.cache, axis=None)
def reset_states(self):
self.cache.assign(tf.zeros_like(self.cache))
self.counter.assign(0)
if self.percentile is not None and self.point is not None:
self.percentile_idx.assign(tf.cast(self.val_size * self.percentile, dtype=tf.int32))
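For context, the error below happens because graph-mode code cannot use a tensor as a Python bool; the usual TF-compatible rewrite routes the branch through `tf.cond`. A minimal sketch of the idea (illustrative variable names, not the full metric):

```python
import tensorflow as tf

counter = tf.Variable(0, dtype=tf.int32)
val_size = tf.constant(5, dtype=tf.int32)

def do_update():
    # Branch taken while counter <= val_size. Both branches must
    # return the same structure for tf.cond.
    counter.assign_add(1)
    return tf.constant(True)

def skip_update():
    return tf.constant(False)

# tf.cond works in both eager and graph mode, unlike a Python `if`
# on a tensor condition.
updated = tf.cond(tf.math.less_equal(counter, val_size),
                  do_update, skip_update)
```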
/trinity/home/r084755/DRF_AI/distal-radius-fractures-x-pa-and-lateral-to-clinic/Code files/LandmarkDetection.py:144 update_state
if tf.math.less_equal(self.counter, self.val_size): # Updated from self.counter <= self.val_size:
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:778 __bool__
self._disallow_bool_casting()
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
"using a `tf.Tensor` as a Python `bool`")
/opt/ohpc/pub/easybuild/software/TensorFlow/2.2.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
" decorating it directly with @tf.function.".format(task))
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function. |
Rewriting function from tensorflow==2.13.0 to 2.2.0: OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed |
|python|tensorflow| |
null |
import { QueryClient } from '@tanstack/react-query'
const queryClient = new QueryClient({
defaultOptions: {
queries: {
staleTime: Infinity,
},
},
})
export default queryClient
Move your query client outside the React.useState hook. You can create a separate file for that. Check the documentation:
[https://tanstack.com/query/latest/docs/reference/QueryClient#queryclient][1]
[1]: https://tanstack.com/query/latest/docs/reference/QueryClient#queryclient
Your example looks extremely coupled; the toast display should be handled by a React component, not by a hook that manages the API errors.
## Example
const Todos = () => {
const { isLoading, error, data } = useQuery("todos", getTodos);
if (isLoading) return "Loading...";
if (error) {
showToast({
type: 'fail',
message: 'your message',
});
}
return (<TodoList todos={data} />)
}
|
Publish Library SDK Swift from private github to CocoaPods |
Since you are performing keyboard navigation, it looks like `mat-list` is a better option. We need to maintain the selections and insert the checkboxes manually, but it seems to work great!
ts
import { Component, QueryList, ViewChildren } from '@angular/core';
import { MatCheckbox } from '@angular/material';
/**
* @title Basic list
*/
@Component({
selector: 'list-overview-example',
templateUrl: 'list-overview-example.html',
styleUrls: ['list-overview-example.css'],
})
export class ListOverviewExample {
@ViewChildren(MatCheckbox)
MatCheckboxes: QueryList<MatCheckbox>;
items = [
{
name: 'first',
subItems: [{ name: 'sth1' }, { name: 'sth2' }, { name: 'sth3' }],
},
{
name: 'second',
subItems: [
{ name: 'sadas1' },
{ name: 'stdfgdfsgh2' },
{ name: 'stgfhfgh3' },
],
},
{
name: 'thrid',
subItems: [{ name: 'dfhfdg' }, { name: 'gfhfg' }, { name: 'fhfj3' }],
},
];
ngAfterViewChecked(): void {
this.MatCheckboxes.first.focus();
}
}
html
<mat-list>
<ng-container
*ngFor="let item of items; let outerIndex = index"
>
<mat-list-item
><div>
<mat-checkbox
class="example-margin"
id="outer-checkbox-{{outerIndex}}"
name="outer-checkbox-{{outerIndex}}"
[checked]="item.checked"
(change)="item.checked =!item.checked"
>
{{item.name}}
</mat-checkbox>
</div></mat-list-item
>
<mat-list style="margin-left: 30px">
<div
*ngFor="let subItem of item.subItems; let innerIndex = index"
>
<mat-list-item
><div>
<mat-checkbox
id="inner-checkbox-{{innerIndex}}"
name="inner-checkbox-{{innerIndex}}"
class="example-margin"
[checked]="subItem.checked"
(change)="subItem.checked =!subItem.checked"
>
{{subItem.name}}
</mat-checkbox>
</div></mat-list-item
>
</div>
</mat-list>
</ng-container>
</mat-list>
[Stackblitz Demo](https://stackblitz.com/edit/angular-nnkg3h-xbt552?file=app%2Flist-overview-example.ts,app%2Flist-overview-example.html) |
Reinforcement Learning agent converging at the lowest reward |
|python|reinforcement-learning|stable-baselines| |
I am having trouble writing a JUnit test for a custom IntelliJ plugin.
The plugin registers an `EditorFactoryListener` that instantiates an `EditorCustomElementRenderer`, which starts a `javax.swing.Timer` whose `ActionListener` inserts some inline text at the current caret position in the open document after a few seconds.
This works fine with `runIde`, but not in a unit test.
The custom `EditorFactoryListener` (registered in `plugin.xml`):
```java
public class InlineRendererInitializer implements EditorFactoryListener {
    private static final Logger LOG = Logger.getInstance(InlineRendererInitializer.class);
@Override
public void editorCreated(@NotNull EditorFactoryEvent event) {
Editor editor = event.getEditor();
LOG.warn("Editor created");
new InlineRenderer(editor);
}
@Override
public void editorReleased(@NotNull EditorFactoryEvent event) {
LOG.warn("Editor released");
}
}
```
The custom `EditorCustomElementRenderer`:
```java
public class InlineRenderer implements EditorCustomElementRenderer {
private Editor editor;
private Timer timer;
public InlineRenderer(Editor editor) {
timer = new Timer(5000, e -> {
ApplicationManager.getApplication().invokeLater(() -> {
// do some stuff in editor
});
});
timer.setRepeats(true);
timer.start();
}
}
```
The Test Class:
```java
public class InlineRendererTest extends BasePlatformTestCase {
public void testInline() throws InterruptedException {
EditorFactory editorFactory = EditorFactory.getInstance();
Document document = editorFactory.createDocument("Test Test");
Editor editor = editorFactory.createEditor(document, getProject());
        // Place a caret at a position in the document
editor.getCaretModel().addCaret(new VisualPosition(0,4));
await()
.pollDelay(5, SECONDS)
.timeout(10, SECONDS)
.untilAsserted(() -> {
// Some assertions
});
}
}
```
`timer.start()` is executed, but the Timer does not fire before the Awaitility timeout of 10 seconds is reached. Why is that? Using `Thread.sleep` instead of Awaitility does not work either.
|
IntelliJ BasePlatformTestCase with javax.swing.Timer |
|java|swing|junit|intellij-plugin|awaitility| |
I have a file with quite a few sheets (around 70). I would like to generate an independent file for each sheet, named after that sheet. I figured out how to do this part.
However, some cells get their values from formulas, and I want to keep only the values, not the formulas. I came across this answer: https://stackoverflow.com/questions/58262948/how-to-copy-format-and-values-not-formulas-when-creating-a-spreadsheet-backup/58263965#58263965
The problem is that my script cannot complete, because I exceed the maximum execution time (Error: Exceeded maximum execution time).
Is there a way to get to the end?
function copyEntireSpreadsheet() {
var id = "###"; // Please set the source Spreadsheet ID.
var ss = SpreadsheetApp.openById(id);
var srcSheets = ss.getSheets();
var tempSheets = srcSheets.map(function(sheet, i) {
var sheetName = sheet.getSheetName();
var dstSheet = sheet.copyTo(ss).setName(sheetName + "_temp");
var src = dstSheet.getDataRange();
src.copyTo(src, {contentsOnly: true});
return dstSheet;
});
var destination = ss.copy(ss.getName() + " - " + new Date().toLocaleString());
tempSheets.forEach(function(sheet) {ss.deleteSheet(sheet)});
var dstSheets = destination.getSheets();
dstSheets.forEach(function(sheet) {
var sheetName = sheet.getSheetName();
if (sheetName.indexOf("_temp") == -1) {
destination.deleteSheet(sheet);
} else {
sheet.setName(sheetName.slice(0, -5));
}
});
} |
I am trying to create an external table using partitioned parquet files stored in Oracle Object Storage.
These parquet files are written using PySpark.
The folder structure is below:

I get an error like this:
```
ORA-20000: ORA-00904: : invalid identifier
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD$PDBCS_240301_0", line 2034
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD$PDBCS_240301_0", line 8662
ORA-06512: at line 2
Error at Line: 7 Column: 0
```
The query I tried:
```
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
TABLE_NAME => 'spotify_daily',
CREDENTIAL_NAME => '<THE_CREDS_NAME>',
FILE_URI_LIST => 'https://objectstorage.<REGION_NAME>.oraclecloud.com/n/<NAMESPACE>/b/<BUCKET_NAME>/o/spotify_daily/*.parquet',
FORMAT => '{"type":"parquet", "schema": "first","partition_columns":[{"name":"year","type":"number"},{"name":"month","type":"varchar2(100)"},{"name":"date","type":"varchar2(100)"}]}');
END;
```
I followed the below guide from Oracle:
https://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/query-external-partition-hive.html#GUID-E2BE9922-4AAC-43C1-B619-8817515646E4
I am a beginner to Oracle Autonomous Data Warehouse. |
I created a repository named "gitTest" and cloned it to my local machine. When I want to push to the remote, I get the following log:
```
git status
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
(use "git push" to publish your local commits)
nothing to commit, working tree clean
dd@dd:~/Desktop/Repositories/gitTest$ git push
ERROR: Permission to xinfengwu/gitTest.git denied to deploy key
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
dd@dd:~/Desktop/Repositories/gitTest$
```
I use an SSH key to access GitHub.
```
dd@dd:~/Desktop/Repositories/gitTest$ ssh -T git@github.com
Hi xinfengwu/xinfengwu.github.io! You've successfully authenticated, but GitHub does not provide shell access.
```
|
ERROR: Permission to xinfengwu/gitTest.git denied to deploy key fatal: Could not read from remote repository. Please make sure you have the |
|ssh-keys| |
null |
Thank you in advance for the help.
I am using Spring Boot WebFlux 3 with springdoc-openapi 2.4.
Here are my response entity records:
```
public record UserResponse(
int id,
String name,
ScoreResponse scoreSummary
){
}
```
```
public record ScoreResponse(
int avgScore,
List<Score> score
){
public record Score(
int scoreId,
int score
){
}
}
```
Here is the handler class
```
@Component
@RequiredArgsConstructor
public class UserHandler {
private final UserService userService;
private final UserMapper userMapper;
private final ClientAuthorizer clientAuthorizer;
@Bean
@RouterOperations({
@RouterOperation(
path = "/user/{userId}",
produces = {MediaType.APPLICATION_JSON_VALUE},
method = RequestMethod.GET,
beanClass = UserHandler.class,
beanMethod = "getById",
operation = @Operation(
description = "GET user by id", operationId = "getById", tags = "users",
responses = @ApiResponse(
responseCode = "200",
description = "Successful GET operation",
content = @Content(
schema = @Schema(
implementation = UserResponse.class
)
)
),
parameters = {
@Parameter(in = ParameterIn.PATH,name = "userId")
}
)
)
})
public @NonNull RouterFunction<ServerResponse> userIdRoutes() {
return RouterFunctions.nest(RequestPredicates.path("/user/{userId}"),
RouterFunctions.route()
.GET("", this::getById)
.build());
}
@NonNull
Mono<ServerResponse> getById(@NonNull ServerRequest request) {
String userId = request.pathVariable("userId");
return authorize(request.method(), request.requestPath())
.then(userService.getUserWithScore(userId))
.map(userMapper::toResponseWithScores)
.flatMap(result -> ServerResponse.ok().body(Mono.just(result), UserResponse.class))
.switchIfEmpty(Mono.error(new ResourceNotFoundException("userId:%s does not exist".formatted(userId))))
.doOnEach(serverResponseSignal -> LoggingHelper.addHttpStatusToContext(serverResponseSignal, HttpStatus.OK))
.contextWrite(context -> LoggingHelper.addUserServiceContextValues(context, "/user", userId, null));
}
private Mono<AuthenticatedPrincipal> authorize(HttpMethod httpMethod, RequestPath requestPath) {
return ReactiveSecurityContextHolder.getSecurityContext().flatMap(authenticationDetail -> {
// Retrieve AuthorizationDetail from the context
boolean isAuthorized = clientAuthorizer.authorizeByPath(authenticationDetail.permissions(), httpMethod, requestPath);
if (isAuthorized) {
return Mono.just(authenticationDetail);
} else {
return Mono.error(new ResourceAccessNotPermittedException());
}
});
}
}
```
The expected example in Swagger UI is
```
{
"id": "int",
"name": "string",
"scoreSummary": {
"avgScore": int,
"score": [
"string"
]
}
}
```
But actual example is
```
{
"id": "int",
"name": "string",
"scoreSummary": {
"avgScore": int,
"score": [
"string"
]
}
}
```
Additionally I can see an error in the UI
```
Errors
Resolver error at responses.200.content.application/json.schema.$ref
Could not resolve reference: JSON Pointer evaluation failed while evaluating token "score" against an ObjectElement
```
Please help me find what I am missing.
After days of research I found the following, and it worked; I got the expected result. But I need to understand whether there is a better approach or whether this is fine.
I added the bean definition below in my handler class.
```
@Bean
public OpenApiCustomizer schemaCustomizer() {
ResolvedSchema resolvedSchema = ModelConverters.getInstance()
.resolveAsResolvedSchema(new AnnotatedType(Score.class));
return openApi -> openApi
.schema(resolvedSchema.schema.getName(), resolvedSchema.schema);
}
``` |
Spring boot 3 open api - Field of array in response class is not rendering |
|java|spring-boot|swagger|spring-webflux|springdoc| |
null |
You have multiple capture groups enabled that don't seem to serve any purpose, and the one capture group you do want to use has been turned off. Try this version:
^\[Twitter\] (User \w+|An Anonymous user) has posted a .* on .*! |
I have an application which, from time to time, saves JSON arrays into text files. All the elements of the JSON array have an identical structure.
As an example, consider the following JSON array:
[{"A":1,"B":2},{"A":11,"B":22},{"A":111,"B":222}]
The need is to save this into a file so that it looks like this:
[{"A":1,"B":2} ,
{"A":11,"B":22} ,
{"A":111,"B":222}]
meaning one record in the text file for each JSON element in the array.
The reason is that these JSON arrays may contain hundreds or even several thousand elements, and I wish to keep these files READABLE for humans.
Is there any trick that can be invoked when using the standard JSON.stringify method? If not, any other suggestion will be appreciated.
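(For illustration, the desired layout can be produced without JSON.stringify's built-in indentation by stringifying each element separately and joining manually; a sketch:)

```javascript
// Sketch: put each array element on its own line instead of relying on
// JSON.stringify's third (indentation) argument.
function stringifyPerLine(arr) {
  return "[" + arr.map((el) => JSON.stringify(el)).join(" ,\n") + "]";
}

const data = [{ A: 1, B: 2 }, { A: 11, B: 22 }, { A: 111, B: 222 }];
console.log(stringifyPerLine(data));
```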
|