Creation of the badRecords file is a Databricks-specific feature; it is not going to work in local mode.
I am using DataTables and fetching data correctly, but when exporting via the Excel button I want some data in cells A1 and B1 before the regular table content. I tried the ```customize``` option of DataTables, but it did not work the way I needed.
Here is my code:
```js
$("#boms-table").DataTable({
    "responsive": true,
    "lengthChange": false,
    "autoWidth": true,
    "paging": false,
    "searching": true,
    "info": false,
    "buttons": [{
        extend: 'excel',
        exportOptions: {
            columns: ':visible:not(.exclude)'
        },
        title: "",
        customize: function(xlsx) {
            var sheet = xlsx.xl.worksheets['sheet1.xml'];
            $('row:first c', sheet).attr('s', '50');
            var a1 = $('row c[r^="A1"]', sheet);
            a1.html('<is><t>' + materialPartcode + '</t></is>');
            var b1 = $('row c[r^="B1"]', sheet);
            b1.html('<is><t>' + materialDesc + " - BOM" + '</t></is>');
            $('row').eq(1).addClass('ignore');
            $('row c', sheet).each(function() {
                if ($(this).index() > 1) {
                    var selectedRow = $(this).parent();
                    var columnIndex = $(this).index();
                    selectedRow.find('c[r^="A"]').eq(columnIndex).addClass('ignore');
                    selectedRow.find('c[r^="B"]').eq(columnIndex).addClass('ignore');
                }
            });
            $('.ignore', sheet).attr('s', '2');
        },
    },
    {
        extend: 'pdf',
        exportOptions: {
            columns: ':visible:not(.exclude)'
        },
        title: materialPartcode + " - " + materialDesc + " - BOM",
    },
    {
        extend: 'print',
        exportOptions: {
            columns: ':visible:not(.exclude)'
        },
        title: materialPartcode + " - " + materialDesc + " - BOM",
    },
    'colvis'
    ],
}).buttons().container().appendTo('#boms-table_wrapper .col-md-6:eq(0)');
```
All I want is `materialPartcode` in cell A1, then `materialDesc` in cell B1, then all the table contents.
|html|jquery|api|datatables| |
Given that you are using MySQL, we could try the following aggregation approach with tuple syntax:
<!-- language: sql -->
    SELECT user
    FROM reviews
    WHERE (product, rating) IN ((173, 4), (173, 5), (50, 4), (50, 5)) AND user <> 1
    GROUP BY user
    HAVING COUNT(DISTINCT product) = 2;
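As a quick sanity check, here is an illustrative setup (table definition and sample rows are assumptions, not from the question) showing how the tuple filter plus `HAVING` keeps only users who rated both products 4 or 5:

```sql
-- illustrative data: user 2 rated both products highly, user 3 only one
CREATE TABLE reviews (user INT, product INT, rating INT);
INSERT INTO reviews VALUES
  (2, 173, 5), (2, 50, 4),   -- user 2: both products, high ratings
  (3, 173, 5),               -- user 3: only product 173
  (1, 173, 5), (1, 50, 5);   -- user 1: filtered out by user <> 1

SELECT user
FROM reviews
WHERE (product, rating) IN ((173, 4), (173, 5), (50, 4), (50, 5)) AND user <> 1
GROUP BY user
HAVING COUNT(DISTINCT product) = 2;  -- only user 2 qualifies
```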
The relevant part of the logic should go like this:
```swift
import SwiftUI
struct ContentView: View {
@State var title = "Title text"
@State var subTitle = "Subtitle text"
var isSubtitleLarger: Bool {
subTitle.count > title.count
}
var body: some View {
VStack(alignment: .leading) {
Text(title)
.lineLimit(isSubtitleLarger ? 1 : 2)
Text(subTitle)
.lineLimit(isSubtitleLarger ? 2 : 1)
}
}
}
```
This is important to guarantee there are always at most 3 lines.
I have the following directory structure:
```
src
|__app
|__layout.tsx
|__page.tsx
|__archive
|__page.tsx
|__[eventId]
|__page.tsx
```
and the following serverless function
```
import { revalidatePath } from "next/cache";
export async function GET() {
revalidatePath("/archive");
revalidatePath("/archive/[eventId]");
revalidatePath("/");
return Response.json({ info: "finished running revalidatePath" });
}
```
**Problem**
`/` and `/archive` are getting revalidated, but not the dynamic `/archive/[eventId]`
**Note**: `export const dynamicParams = true` ([which fetches on demand if the route has not been generated][1]) works but I cannot use it as it fetches partial data from the DB (the paths exist in the DB, but I'm filtering them by "ready to be used" to create the dynamic paths).
**What I've tried**
I tried many variations of `revalidatePath`, such as:
`revalidatePath("/archive/[eventId]", "page");`
`revalidatePath("/archive/[eventId]", "layout");`
`revalidatePath("/archive/[eventId]/page");`
but nothing seems to work. What am I doing wrong?
___
**Further info**
- `/archive` is a list of links to past events, each with an `eventId`. Clicking on a link takes the user to `/archive/<clicked eventId>`
- `/` displays the latest `eventId` that was added to the DB
- when a new event is added to the DB and I run the `GET()` function above, the new event is displayed in `/` and `/archive`, but clicking on it in `/archive` **correctly takes the user to `/archive/<clicked eventId>` but shows a `404` page (i.e.: the page has not been generated)**
- `/archive/[eventId]/page` is where my `generateStaticParams()` lives.
[1]: https://nextjs.org/docs/app/api-reference/file-conventions/route-segment-config#cross-route-segment-behavior |
NextJS 14 revalidatePath for dynamic paths not working |
|next.js|caching|next.js14| |
Get the latest zip file for **Python 3.12** from here:
https://github.com/TA-Lib/ta-lib-python/issues/127#issuecomment-1793716236
Extract the zip file to `C:\`, like this: `C:\ta-lib`
Download and install the Visual C++ build tools from:
https://visualstudio.microsoft.com/visual-cpp-build-tools/
Finally, install the package:
```
pip install ta-lib
```
I created `person` table, then inserted 2 rows into it as shown below:
```sql
CREATE TABLE person (
id INTEGER,
name VARCHAR(20),
age INTEGER
);
INSERT INTO person (id, name, age)
VALUES (1, 'John', 27), (2, 'David', 32);
```
Then, I created `my_func()` which returns `person` table as shown below:
```sql
CREATE FUNCTION my_func() RETURNS TABLE(id INTEGER, name VARCHAR, age INTEGER)
AS $$
BEGIN
RETURN QUERY SELECT person.id, person.name FROM person;
END; -- person.age is missing from the SELECT above
$$ LANGUAGE plpgsql;
```
Then, calling `my_func()` produced the following error:
```sql
postgres=# SELECT my_func();
ERROR: structure of query does not match function result type
DETAIL: Number of returned columns (2) does not match expected column count (3).
CONTEXT: SQL statement "SELECT person.id, person.name FROM person"
PL/pgSQL function my_func() line 3 at RETURN QUERY
```
So, I added `person.age` as shown below:
```sql
CREATE FUNCTION my_func() RETURNS TABLE(id INTEGER, name VARCHAR, age INTEGER)
AS $$
BEGIN
RETURN QUERY SELECT person.id, person.name, person.age FROM person;
END; -- person.age added
$$ LANGUAGE plpgsql;
```
Finally, I could call `my_func()` without error, as shown below. Omitting the table qualifier `person.` from `id`, `name` and `age` causes [this error][1], and omitting the `FROM` clause from the [SELECT statement][2] causes [this error][3]:
```sql
postgres=# SELECT my_func();
   my_func
--------------
 (1,John,27)
 (2,David,32)
(2 rows)
```
[1]: https://stackoverflow.com/questions/9821121/sql-column-reference-id-is-ambiguous/77811511#77811511
[2]: https://www.postgresql.org/docs/current/sql-select.html
[3]: https://stackoverflow.com/questions/19975755/missing-from-clause-entry-for-a-table/77810473#77810473 |
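As a side note, a shorter variant — a sketch that assumes the function's output columns should mirror the table exactly — is to declare the function as returning rows of the table type, which avoids listing columns at all:

```sql
CREATE OR REPLACE FUNCTION my_func() RETURNS SETOF person
AS $$
BEGIN
    RETURN QUERY SELECT * FROM person;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM my_func(); then yields separate id, name, age columns
```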
Why isn't my macro to delete rows with specific criteria working?
When the project had one data source, native queries ran fine. Now that there are two data sources, Hibernate cannot determine the schema for native queries; non-native queries work fine.
**application.yaml**
```
spring:
autoconfigure:
exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
flyway:
test:
testLocations: classpath:/db_test/migration,classpath:/db_test/migration_test
testSchemas: ems_core
locations: classpath:/db_ems/migration
baselineOnMigrate: true
schemas: my_schema
jpa:
packages-to-scan: example.example1.example2.example3.example4
show-sql: false
properties:
hibernate.dialect: org.hibernate.dialect.PostgreSQL10Dialect
hibernate.format_sql: false
hibernate.jdbc.batch_size: 50
hibernate.order_inserts: true
hibernate.order_updates: true
hibernate.generate_statistics: false
hibernate.prepare_connection: false
hibernate.default_schema: my_schema
org.hibernate.envers:
audit_table_prefix: log_
audit_table_suffix:
hibernate.javax.cache.uri: classpath:/ehcache.xml
hibernate.cache:
use_second_level_cache: true
region.factory_class: org.hibernate.cache.ehcache.internal.SingletonEhcacheRegionFactory
hibernate:
connection:
provider_disables_autocommit: true
handling_mode: DELAYED_ACQUISITION_AND_RELEASE_AFTER_TRANSACTION
hibernate.ddl-auto: validate
# todo:
open-in-view: false
database-platform: org.hibernate.dialect.H2Dialect
#database connections
read-only:
datasource:
url: jdbc:postgresql://localhost:6432/db
username: postgres
password: postgres
configuration:
pool-name: read-only-pool
read-only: true
auto-commit: false
schema: my_schema
read-write:
datasource:
url: jdbc:postgresql://localhost:6433/db
username: postgres
password: postgres
configuration:
pool-name: read-write-pool
auto-commit: false
schema: my_schema
```
**Datasources config:**
```
@Configuration
public class DataSourceConfig {
@Bean
@ConfigurationProperties("spring.read-write.datasource")
public DataSourceProperties readWriteDataSourceProperties() {
return new DataSourceProperties();
}
@Bean
@ConfigurationProperties("spring.read-only.datasource")
public DataSourceProperties readOnlyDataSourceProperties() {
return new DataSourceProperties();
}
@Bean
@ConfigurationProperties("spring.read-only.datasource.configuration")
public DataSource readOnlyDataSource(DataSourceProperties readOnlyDataSourceProperties) {
return readOnlyDataSourceProperties.initializeDataSourceBuilder().type(HikariDataSource.class).build();
}
@Bean
@ConfigurationProperties("spring.read-write.datasource.configuration")
public DataSource readWriteDataSource(DataSourceProperties readWriteDataSourceProperties) {
return readWriteDataSourceProperties.initializeDataSourceBuilder().type(HikariDataSource.class).build();
}
@Bean
@Primary
public RoutingDataSource routingDataSource(DataSource readWriteDataSource, DataSource readOnlyDataSource) {
RoutingDataSource routingDataSource = new RoutingDataSource();
Map<Object, Object> dataSourceMap = new HashMap<>();
dataSourceMap.put(DataSourceType.READ_WRITE, readWriteDataSource);
dataSourceMap.put(DataSourceType.READ_ONLY, readOnlyDataSource);
routingDataSource.setTargetDataSources(dataSourceMap);
routingDataSource.setDefaultTargetDataSource(readWriteDataSource);
return routingDataSource;
}
@Bean
public BeanPostProcessor dialectProcessor() {
return new BeanPostProcessor() {
@Override
public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
if (bean instanceof HibernateJpaVendorAdapter) {
((HibernateJpaVendorAdapter) bean).getJpaDialect().setPrepareConnection(false);
}
return bean;
}
};
}
}
```
**Routing data sources**
```
public class RoutingDataSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
return DataSourceTypeContextHolder.getTransactionType();
}
@Override
public void setTargetDataSources(Map<Object, Object> targetDataSources) {
super.setTargetDataSources(targetDataSources);
afterPropertiesSet();
}
}
```
**depending on the type of transaction readOnly or not, the datasource is selected**
```
public class DataSourceTypeContextHolder {
private static final ThreadLocal<DataSourceType> contextHolder = new ThreadLocal<>();
public static void setTransactionType(DataSourceType dataSource) {
contextHolder.set(dataSource);
}
public static DataSourceType getTransactionType() {
return contextHolder.get();
}
public static void clearTransactionType() {
contextHolder.remove();
}
}
```
```
@Aspect
@Component
@Slf4j
public class TransactionAspect {
@Before("@annotation(transactional) && execution(* *(..))")
public void setTransactionType(Transactional transactional) {
if (transactional.readOnly()) {
DataSourceTypeContextHolder.setTransactionType(DataSourceType.READ_ONLY);
} else {
DataSourceTypeContextHolder.setTransactionType(DataSourceType.READ_WRITE);
}
}
@AfterReturning("@annotation(transactional) && execution(* *(..))")
public void clearTransactionType(Transactional transactional) {
DataSourceTypeContextHolder.clearTransactionType();
}
}
```
**Error**
```
org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [UPDATE my_table SET lock_until = timezone('utc', CURRENT_TIMESTAMP) + cast(? as interval), locked_at = timezone('utc', CURRENT_TIMESTAMP), locked_by = ? WHERE my_table.name = ? AND my_table.lock_until <= timezone('utc', CURRENT_TIMESTAMP)]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "shedlock" does not exist
Position: 8
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:235)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1443)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:633)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:862)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:883)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.update(NamedParameterJdbcTemplate.java:321)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.update(NamedParameterJdbcTemplate.java:326)
at net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateStorageAccessor.lambda$execute$0(JdbcTemplateStorageAccessor.java:115)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateStorageAccessor.execute(JdbcTemplateStorageAccessor.java:115)
at net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateStorageAccessor.updateRecord(JdbcTemplateStorageAccessor.java:81)
at net.javacrumbs.shedlock.support.StorageBasedLockProvider.doLock(StorageBasedLockProvider.java:91)
at net.javacrumbs.shedlock.support.StorageBasedLockProvider.lock(StorageBasedLockProvider.java:65)
at jdk.internal.reflect.GeneratedMethodAccessor328.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
at com.sun.proxy.$Proxy139.lock(Unknown Source)
at net.javacrumbs.shedlock.core.DefaultLockingTaskExecutor.executeWithLock(DefaultLockingTaskExecutor.java:63)
at net.javacrumbs.shedlock.spring.aop.MethodProxyScheduledLockAdvisor$LockingInterceptor.invoke(MethodProxyScheduledLockAdvisor.java:86)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:747)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
at ru.russianpost.ems.core.assembly.service.impl.scheduler.RpoContentSheduler$$EnhancerBySpringCGLIB$$631d68e1.loadData(<generated>)
at jdk.internal.reflect.GeneratedMethodAccessor320.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "my_table" does not exist
```
When I change the native query and specify the schema before the table name, the query executes normally:
```
UPDATE my_schema.my_table SET lock_until = timezone('utc', CURRENT_TIMESTAMP) + cast(? as interval), locked_at = timezone('utc', CURRENT_TIMESTAMP), locked_by = ? WHERE my_schema.my_table.name = ? AND my_schema.my_table.lock_until <= timezone('utc', CURRENT_TIMESTAMP);
```
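Note that the failing statement comes from ShedLock's `JdbcTemplate`, which bypasses Hibernate entirely, so `hibernate.default_schema` does not apply to it. One workaround to verify against your setup (an assumption, not a confirmed fix) is to make unqualified names resolve to `my_schema` at the connection or role level:

```sql
-- make unqualified table names resolve to my_schema for this role
ALTER ROLE postgres SET search_path TO my_schema, public;

-- or per connection, via the PostgreSQL JDBC driver's currentSchema parameter:
--   jdbc:postgresql://localhost:6432/db?currentSchema=my_schema
```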
|
I have this code, and its time complexity is O(m log m + n log m), but now I need code that does the same thing with a better time complexity, and I do not understand what I should consider to achieve it.
```java
import java.util.Arrays;

public class Main {
    // Returns true if set1[] and set2[] are disjoint, else false
    static boolean areDisjoint(int[] set1, int[] set2, int m, int n) {
        // Sort the set1 array: O(m log m)
        Arrays.sort(set1);

        // Take every element of set2[] and search it in the sorted set1 array
        for (int i = 0; i < n; i++) {
            // Binary search for set2[i] in set1: O(log m) per lookup
            int idx = Arrays.binarySearch(set1, set2[i]);

            // If the element is present in set1, the sets are not disjoint
            if (idx >= 0)
                return false;
        }

        // If no element of set2 is present in set1, return true
        return true;
    }

    // Driver program to test the above function
    public static void main(String[] args) {
        int[] set1 = {12, 34, 11, 9, 3};
        int[] set2 = {7, 2, 1, 5};
        int m = set1.length;
        int n = set2.length;
        System.out.println(areDisjoint(set1, set2, m, n) ? "Yes" : "No");
    }
}
```
I was trying to use just quicksort, but I still do not understand how to reach the required time complexity when the original code already runs in O(m log m + n log m).
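One common way to improve on sorting plus binary search — shown here as a hedged sketch, with the class name illustrative and assuming hashing is allowed — is to build a `HashSet` from one array and probe it with the other, which makes the check O(m + n) expected time:

```java
import java.util.HashSet;
import java.util.Set;

public class DisjointCheck {
    // True if set1 and set2 share no element.
    // Building the HashSet is O(m) on average; each probe is O(1) expected,
    // so the whole check runs in O(m + n) expected time.
    static boolean areDisjoint(int[] set1, int[] set2) {
        Set<Integer> seen = new HashSet<>();
        for (int v : set1) {
            seen.add(v);
        }
        for (int v : set2) {
            if (seen.contains(v)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(areDisjoint(new int[]{12, 34, 11, 9, 3},
                                       new int[]{7, 2, 1, 5}) ? "Yes" : "No");
    }
}
```

The trade-off is extra O(m) memory and average-case (rather than worst-case) bounds.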
I have a time complexity of O(m log m + n log m); how should I modify the code to get a better time complexity?
|java|algorithm|sorting|time-complexity|quicksort| |
It takes more than just setting `CMAKE_INSTALL_RPATH` for me (CMake version 3.20.2).
I've set `CMAKE_INSTALL_RPATH` in a top-level CMakeLists.txt file:
```
set(CMAKE_INSTALL_RPATH "${LIB_INSTALL_PATH}:$ORIGIN")
```
This example also adds another path from a variable I created, `LIB_INSTALL_PATH`.
For the target's CMakeLists.txt I have:
```
set_target_properties(mytarget
    PROPERTIES
        BUILD_WITH_INSTALL_RPATH true
        LINK_OPTIONS "-Wl,--disable-new-dtags"
)
```
The `--disable-new-dtags` was needed to use `RPATH` instead of `RUNPATH` (the new default), which apparently has issues.
Not adding anything to the target's CMakeLists.txt file results in RPATH set to the build path:
```
0x000000000000000f (RPATH)  Library rpath: [/home/brookbot/workspace/my_project/build/tgt/stuff:/home/brookbot/workspace/my_project/build/tgt]
```
But adding the property `BUILD_WITH_INSTALL_RPATH true` to the target gives me the desired result:
```
0x000000000000000f (RPATH)  Library rpath: [/my/install/path:$ORIGIN]
```
I'm using `readelf -d` to examine the results.
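Putting the pieces above together, a minimal consolidated sketch (the install path and target name are illustrative) looks like:

```cmake
# top-level CMakeLists.txt
set(LIB_INSTALL_PATH "/my/install/path")            # illustrative path
set(CMAKE_INSTALL_RPATH "${LIB_INSTALL_PATH}:$ORIGIN")

# target's CMakeLists.txt
add_executable(mytarget main.cpp)
set_target_properties(mytarget
    PROPERTIES
        BUILD_WITH_INSTALL_RPATH true               # bake install RPATH at build time
        LINK_OPTIONS "-Wl,--disable-new-dtags"      # emit RPATH instead of RUNPATH
)
```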
Assuming you have the ID, it's possible to query using a WP_Query object. If you don't have the ID you can also find it by email or other means.
Don't forget to reset the global `$post` variable back to the main query using `wp_reset_postdata()`.
```
$username = "username";
$email = "what@ever.com";

$user = get_user_by('login', $username);
// $user = get_user_by('email', $email);
$user_id = $user->ID;

$args = array(
    'post_type'      => 'page',
    'posts_per_page' => -1, // -1 to retrieve all pages
    'author'         => $user_id,
);

$posts = new WP_Query($args);
while ($posts->have_posts()) {
    $posts->the_post();
    echo ('<h1>' . get_the_title() . '</h1>');
}
wp_reset_postdata();
```
|
Authenticated HTTP Request to External API |
|firebase|firebase-authentication|google-cloud-functions| |
I have written the following piece of code trying to use `exec()` to execute the code enclosed in the string. I pass an empty dictionary, `d`, as the locals parameter. When I print `d`, the names defined by the executed code appear in `d`. Can anyone help me explain what is really happening?
```
tc = '''x = 1
def __init__(self,n):self.name = n
def printD(self):print(self.name)'''
d = {}
print(d)
exec(tc, globals(), d)
print(d)
```
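A minimal standalone sketch of the behaviour in question (the string and names below are illustrative): names assigned at the top level of the executed string become keys of the dictionary passed as `locals`, not variables in the caller's namespace:

```python
src = '''x = 1
def greet(n):
    return "hello " + n
'''

d = {}
exec(src, globals(), d)

# top-level assignments in the executed code land in d,
# because d was supplied as the locals mapping
print(sorted(d))          # ['greet', 'x']
print(d["x"])             # 1
print(d["greet"]("Bob"))  # hello Bob
```

This is why printing `d` after `exec()` shows the "local namespace" of the executed code.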
|
How does Python's exec function work when sending an empty dictionary in the locals parameter? |
|python|exec| |
I need a notification on the admin side when a registration is completed or a new user is created on the user side. This is a React project and the backend is provided through a Django REST API.
I created a separate DB table for notifications, but it is not updated automatically when a registration is completed.
How do notifications work?
|django|django-rest-framework|dbsql| |
```
type ErrorResponse = {
    message: string;
};

const error: ErrorResponse = { message: 'Ups' };
```
|
In your code you need to initialise the LCD and also turn on the backlight. You also need to check that you have the correct I2C address for your LCD. For the unit you appear to be using this is usually either 0x27 or 0x3F. If this is wrong you will see nothing.
In the code below I am using an Arduino Uno with an I2C LCD at address 0x3F, and I am displaying the value read from A0, so if you have no connection on A0 you should see a continuously changing value (I am getting around 270 - 285).
```
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x3f, 16, 2); // Set the LCD I2C address - usually 0x27 or 0x3F

void setup() {
    lcd.init();
    lcd.backlight();
    lcd.clear();
    lcd.print("Hello world");
}

void loop() {
    int sensorValue = analogRead(A0);
    lcd.setCursor(0, 1); // Display analogue reading on line 2
    lcd.print(String(sensorValue) + " ");
    delay(500);
}
```
If you see nothing on the LCD you may have your I2C connections wrong. You should have SDA on A4 and SCL on A5.
You have said that the above code is working but you want a version with an `if` statement to check whether the reading on A0 is above or below 800 - so the version below satisfies that requirement.
```
#include <LiquidCrystal_I2C.h>

LiquidCrystal_I2C lcd(0x3f, 16, 2); // Set the LCD I2C address - usually 0x27 or 0x3F

void setup() {
    lcd.init();
    lcd.backlight();
    lcd.clear();
    lcd.print("Hello world");
}

void loop() {
    int sensorValue = analogRead(A0);
    lcd.setCursor(0, 1); // Analogue reading on line 2
    if (sensorValue >= 800) lcd.print(">= 800 ");
    else lcd.print("< 800 ");
    delay(500);
}
```
I have an Openfire server on my local machine and I need to use it as the server for my video-calling web application. For that I need a JS library to integrate with the web application on the client side. Which one should I use, and where can I find documentation for it?
I tried the Strophe.js library in CDN format but it doesn't work.
Integrate XMPP Openfire STUN server with client side js web application |
|webrtc|xmpp|openfire|stun|videochat| |
null |
{"OriginalQuestionIds":[26037954],"Voters":[{"Id":3440745,"DisplayName":"Tsyvarev","BindingReason":{"GoldTagBadge":"cmake"}}]} |
You basically need to correlate 3 charts:
- **Active Threads Over Time** - shows the current load (number of virtual users)
- **Transactions per Second** - shows the number of requests per second
- **Response Times Over Time** - shows how long requests take under the given load
Ideally the first two should be similar, i.e. application throughput should increase by the same factor as the load.
However, in the vast majority of cases you will notice that at a certain point, even though you increase the load, the number of requests per second doesn't increase. That indicates the [saturation point][1] of your application.
The most common reason for this is response time, which increases under load; it can be observed using the **Response Times Over Time** chart.
----------
With regards to "optimal number of users" - it depends on the [performance testing type][2] you're running.
For example if you're doing [Load Testing][3] - you should use the **anticipated** number of users from your system under test [NFR][4] or [SLA][5]
In case of [Stress Testing][6] there is no upper limit; you should gradually increase the load until response times start exceeding acceptable thresholds, errors start occurring, or the application crashes, whichever comes first.
[1]: http://hd-performance.blogspot.com/2010/05/saturation-point.html
[2]: https://www.blazemeter.com/blog/performance-testing-vs-load-testing-vs-stress-testing
[3]: https://en.wikipedia.org/wiki/Software_performance_testing#Load_testing
[4]: https://en.wikipedia.org/wiki/Non-functional_requirement
[5]: https://en.wikipedia.org/wiki/Service-level_agreement
[6]: https://en.wikipedia.org/wiki/Software_performance_testing#Stress_testing |
You can use the `onOpenChange` prop. The documentation says `onOpenChange` is the callback executed when the popup calendar opens or closes.
So you should listen for the popup closing and reset the calendar there.
```
// add onOpenChange props to the RangePicker component
const onOpenChange = (isOpen) => {
if(!isOpen) {
const today = dayjs();
setDateRange([today.startOf('year'), today]);
}
}
``` |
How to setup indexer tier in Apache Druid version 28.0.0?
Need to change this _Default_tier to new_tier
[druid-console](https://i.stack.imgur.com/tG3tk.png)
I tried adding the parameters below and then restarted Druid, but nothing changed
druid.worker.category=test
druid.worker.capacity=5 |
You'll typically use the S2 JavaScript port wrapped in a BigQuery UDF.
Take a look at a few functions doing similar stuff:
* Carto's UDF that converts long/lat pair + level to S2 cell boundary `jslibs.s2.latLngToCornerLatLngs`
* My `gislib.s2.s2CellIdToLatLng` that converts cellid taken as string, basically s2 uint64 cellid reinterpreted as signed int64 - same way as used by BigQuery S2_ functions. Note that integer s2 cellid already contains its level encoded, so the function does not take level argument.
You can click the 'Edit persisted function' button to see how it is done, even if you don't have permission to actually edit it. |
|python|python-3.x|ta-lib|python-3.12| |
You are running your JavaScript file in a Node.js environment. The `document` object and its methods, like `getElementById` in your case, are part of the Web API provided by browsers for manipulating web pages, and they are not available in Node.js.
Why?
Because Node.js is a JS runtime used primarily for server-side scripting, and it does not have a built-in Document Object Model (DOM) like a web browser does
You can read more about the Document interface here: [Document API][1]
To run your JS script in the browser, add it as a script in your index.html file with the ```<script>``` tag. This will load your JavaScript when your HTML file is loaded in the browser. You can follow the steps mentioned here:
[Use JavaScript within a webpage][2]
[1]: https://developer.mozilla.org/en-US/docs/Web/API/Document
[2]: https://developer.mozilla.org/en-US/docs/Learn/HTML/Howto/Use_JavaScript_within_a_webpage |
Looks like I need `sitemap-en.xml` and `sitemap-de.xml` in the root.
From all the solutions, I like the idea of using rewrites and generating sitemaps with `/api`
//next.config.js
const nextConfig = {
...
async rewrites() {
return [
{
source: '/sitemap.xml',
destination: '/api/sitemap',
},
]
},
...
}
And
// /api/sitemap.ts
export async function GET() {
const xmlContent = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
...
</urlset>`;
return new Response(xmlContent, { headers: { "Content-Type": "text/xml" } });
}
So, it returns errors and can't be built, even without the logic to grab pages. How should I go about this?
I'm using Next.JS 13.5 |
Sitemaps for multilingual, multidomain Next.JS site |
|next.js|next.js13| |
<com.google.android.material.textfield.TextInputLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginStart="32dp"
android:layout_marginTop="16sp"
android:layout_marginEnd="32dp"
app:boxStrokeColor="@color/blue">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:fontFamily="@font/quicksand_bold"
android:hint="Full Name"
android:textColor="@color/blue"
android:textColorHint="@color/blue" />
</com.google.android.material.textfield.TextInputLayout>
I used this for an EditText but I don't know how to use it for a TextView.
I want it like this:
[enter image description here][1]
[1]: https://i.stack.imgur.com/tX5kd.png |
I am relatively new to coding and am trying to code a subroutine that takes in a grid spacing h and a polynomial degree n and returns the n + 1 numbers that correspond to the symmetric finite difference stencil/filter for the operator d2/dx2, i.e. write some code that returns the finite difference coefficients found on Wikipedia. I am close but not getting the correct values for 4th, 6th and 8th degree accuracy. Any help would be greatly appreciated!
I've coded this Lagrange polynomial, but it doesn't seem to work (Edit: The code doesn't return any errors, but does not return the correct coefficients for the second-order central finite difference found here https://en.wikipedia.org/wiki/Finite_difference_coefficient. Thanks for the help!):
def central_difference_coefficients(order, n):
coefficients = []
for j in range(n+1):
coefficient = 0.0
for i in range(n+1):
if i != j:
term1 = 1.0 / (j - i)
product = 1.0
for m in range(n+1):
if m != j and m != i:
denominator = j - m
if denominator != 0:
product *= 1.0 / denominator
for l in range(n+1):
if l != j and l != i and l != m:
denominator = j - l
if denominator != 0:
product *= (n / 2 - l) / denominator
term2 = term1 * product
coefficient += term2
if j != 0:
coefficient *= 1.0 / (order**2)
coefficients.append(coefficient)
return coefficients
# Example usage:
order = 2
n = 4
coefficients_2nd_order = central_difference_coefficients(order, n)
print(f"Central Difference Coefficients (2nd order) for n={n}:\n{coefficients_2nd_order}")
Let me know where I've gone wrong!
|
I'm using Adobe document generation API to create a PDF from Microsoft Word (.docx) template. I want to format the text that will replace the placeholders in the template with a paragraph or line break.
I've tried adding "\n" to the text but it didn't work |
What is the newline character for Microsoft Word |
|c#|ms-word|adobe-documentgeneration| |
null |
You need to use clang++ if you have the Clang compiler installed.
clang++ is praised for its strict adherence to standards and its quick adoption of new C++ features. It is known for providing more informative error messages and better diagnostics compared to g++.
g++ has historically been criticized for being less strict in its adherence to standards, but it has also made significant improvements in recent versions. |
```
func removeDuplicateSequences(from array: [String]) -> [String] {
    // Nothing to deduplicate if the array is shorter than one sequence
    guard array.count >= 3 else { return array }

    var result = array
    var indicesToRemove = Set<Int>()

    // Check every sequence of three elements in the array
    for i in 0..<result.count - 2 {
        let currentSequence = Array(result[i...i+2])

        // Only proceed if this sequence hasn't been marked for removal
        if !indicesToRemove.contains(i) {
            for j in i+1..<result.count - 2 {
                let nextSequence = Array(result[j...j+2])

                // If a duplicate sequence is found, mark its indices for removal
                if currentSequence == nextSequence {
                    indicesToRemove.insert(j)
                    indicesToRemove.insert(j+1)
                    indicesToRemove.insert(j+2)
                }
            }
        }
    }

    // Remove elements in reverse order to avoid index-out-of-range errors
    for index in indicesToRemove.sorted(by: >) {
        result.remove(at: index)
    }
    return result
}
```
This function meets these requirements. I am not aware of any native version. |
There isn't necessarily an error. If the p-value is smaller than the smallest (normal) double precision floating point number (~`1e-308`), it [underflows](https://en.wikipedia.org/wiki/Arithmetic_underflow) and you get a zero. This would not be a bug in your code or in SciPy; it's just a fundamental limitation of floating point arithmetic.
If your sample size is large, it doesn't take much of a difference in sample means to get a zero p-value.
```python3
import numpy as np
from scipy import stats
rng = np.random.default_rng(83469358365936)
x = rng.random(1000)
res = stats.ttest_ind(x, x + 1)
# TtestResult(statistic=-76.66392731424226, pvalue=0.0, df=1998.0)
```
If you really want to know the true p-value, use [arbitrary precision arithmetic](https://mpmath.org/doc/current/functions/gamma.html#betainc) and the definition of the [t distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution) CDF.
```python3
from mpmath import mp
t = mp.mpf(res.statistic)
nu = mp.mpf(res.df)
x2 = nu / (t**2 + nu)
p = mp.betainc(nu/2, mp.one/2, x2=x2, regularized=True)
print(p)
# 1.72157326887951e-597
``` |
"**git help**" does not work in my Git Bash but "**--help**" does; even though it shows more than necessary, it's worth using. |
The d.docx document contains only a table, spread across four pages; the last page is blank, with no content.
However, if the mouse cursor is placed at the top of the blank page, Word recognizes it as part of the table. Right-clicking there shows a "Delete Cells" option; clicking it pops up a dialog box where you can choose to delete the row. Do this three times in a row.
After the third consecutive time, the blank page disappears. So the blank page seems to be caused by the table overflowing!
I tried Open XML, DocX, and Microsoft.Office.Interop.Word; none of them can remove the last blank page.
I put the document (d.docx) [in the GitHub directory][1]
Preferably without paid DLLs, like Aspose, etc.

[1]: https://github.com/angmangm/test |
how to document QML files inside C++ project? |
|c++|qt|qml|documentation|doxygen| |
I tried to add an app password for my gmail following Google Gmail Help https://support.google.com/mail/answer/185833?hl=en&sjid=2565285338845314707-AP
Create & use app passwords
Important: To create an app password, you need 2-Step Verification on your Google Account.
If you use 2-Step-Verification and get a "password incorrect" error when you sign in, you can try to use an app password.
1. Go to your Google Account.
2. Select Security.
3. Under "How you sign in to Google," select 2-Step Verification.
4. At the bottom of the page, select App passwords.
5. Enter a name that helps you remember where youβll use the app password.
6. Select Generate.
7. To enter the app password, follow the instructions on your screen. The app password is the 16-character code that generates on your device.
8. Select Done.
I was told that 2-Step Verification is required for the app password to be effective. However, I was never prompted for 2-Step Verification after the app password had been created (step 8). I would appreciate it if someone could help me understand why 2-Step Verification was not prompted in my case. (Note: This app password is to be hardcoded into our mass email sending Python program. Here is the error after I input the app password in the program: smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted. For more information, go to\n5.7.8 https://support.google.com/mail/?p=BadCredentials e28-20020a63545c000000b005cf5bf78b74sm2151276pgm.17 - gsmtp')
Thank you for all your help!
(Note: I expect the mass email can go through to the intended recipients without error.) |
invalid application password of gmail |
|gmail|passwords|fault| |
null |
{"Voters":[{"Id":9267406,"DisplayName":"imhvost"},{"Id":1974224,"DisplayName":"Cristik"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[18]} |
In my Angular app there can be two types of layout (e.g. List and Detail).
I have to get the layoutConfig based on the activated route. For this I'm using a route resolver.
My plan is that the ResolveFn will return a signal, not an observable.
Below is my existing code, which passes the URL as a parameter and returns an observable:
```
export const LayoutResolver: ResolveFn<LayoutConfig> = (
route: ActivatedRouteSnapshot,
state: RouterStateSnapshot,
) => {
const _layoutService = inject(LayoutService);
_layoutService.setCurrentActivatedRoute(state.url);
return _layoutService.layoutConfig$;
};
``` |
return signal from ResolveFn |
|angular|rxjs|angular-signals| |
null |
I have this URL that works on Postman and returns data:
https://www.tablebuilder.singstat.gov.sg/api/table/resourceid?isTestApi=true&keyword=manufacturing&searchoption=all
But the Python code generated by Postman does not work.
import requests
url = "https://www.tablebuilder.singstat.gov.sg/api/table/resourceid?isTestApi=true&keyword=manufacturing&searchoption=all"
response = requests.request("GET", url)
print(response.text)
What could the reason be? This code used to work in the past. Is there a permanent fix for the problem? |
I have this simple playbook that reads a file's content => fetches a value
=> appends that to a local file.
I want to get a config value for different hosts and store it in a local file.
```yml
---
- name: Check file content
hosts: all
become: true
tasks:
- name: Check if file exists
ansible.builtin.stat:
path: /mydir/myfile.ini
register: file_stat
- name: Fetch values and write to a local file
when: file_stat.stat.exists
block:
- name: Read file if it exists
ansible.builtin.slurp:
src: /mydir/myfile.ini
register: file_content
- name: Set value as a fact
ansible.builtin.set_fact:
my_setting: "{{ file_content.content | \
b64decode | regex_search('^setting_key\\s*?=\\s*?(.*)$', \
'\\1', multiline=True) | first | trim }}"
- name: Debug file content
ansible.builtin.debug:
msg: '{{ my_setting }}'
- name: Write fetch value to a file
become: false
ansible.builtin.lineinfile:
path: my_settings.txt
line: '{{ inventory_hostname }} - {{ my_setting }}'
create: true
mode: '644'
insertafter: 'EOF'
delegate_to: localhost
# when: >
# my_setting is defined and
# my_setting != "MY_SETT_VALUE"
# with_items: '{{ my_setting }}'
```
However, this playbook behaves inconsistently. If I run it against two hosts, sometimes there will be two lines in the output file `my_settings.txt`, one for each host, as expected.
But sometimes there will be only one line, i.e. one host's information (most of the time), and the other host's value is missing.
Is this due to some race condition? How can I fix this issue?
The debug lines always show the correct values for different hosts, so the problem is not with reading or fetching the value.
Note: the source file is located on the remote node.
|
In the past few days I have been researching this question and found out that the reasons for it could be:
1. The notnull constraint in C# is used to ensure that a type parameter is not null. According to the official documentation, the notnull constraint can be applied to either value types or non-nullable reference types, but not nullable reference types
2. This code instantiates a generic class with nullable types int? and string? and it still works. However, the 'notnull' constraint is not being enforced for these types because the 'notnull' constraint is only enforced at compile-time and not at runtime.
Since C# 8.0, developers can annotate reference types as nullable (e.g., string?) to indicate to the compiler that the variable may hold null. This feature is known as nullable reference types. It is intended for static analysis by the compiler to issue warnings and does not affect the actual runtime type of the variable. The runtime type of string and string? is the same, and thus they both satisfy the notnull constraint.
3. Even with the notnull constraint, nullable value types are allowed because they are represented as a System.Nullable<T> struct, which is a value type itself and is not actually null. It just has the ability to represent null through a HasValue property. So the code compiles because int? is a Nullable<int> struct and this is not a nullable reference type but a value type, which satisfies the notnull constraint.
Also via the link below you can check out documentation of notnull constraint.
https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/constraints-on-type-parameters |
|list|recursion|prolog| |
I am writing a request handler in a Telegram bot. I am having difficulty waiting for an inline button press; can you tell me whether this can be implemented? I have a handler that starts ticket monitoring, gets the tickets as JSON, and then processes each ticket in the list.
```
class MonitoringStatus(StatesGroup):
monitoring_active = State()
monitoring_pending = State()
waiting_for_ticket_action = State()
```
```
def get_ticket_keyboard():
builder = InlineKeyboardBuilder()
builder.add(
InlineKeyboardButton(
text="assign",
callback_data="assign"),
InlineKeyboardButton(
text="resolve",
callback_data="resolve"),
)
return builder.as_markup()
```
```
@router.message(None or MonitoringStatus.monitoring_pending, F.text == 'Start Monitoring')
async def start_monitoring(message: Message, state: FSMContext):
await state.set_state(MonitoringStatus.monitoring_active)
await state.update_data(is_monitoring_active=True)
await asyncio.sleep(3)
await message.answer('Monitoring started')
while True:
is_monitoring_active = data.get('is_monitoring_active', False)
if not is_monitoring_active:
break
tickets = await fetch_tickets_from_file()
for ticket in tickets:
await message.answer(f'Ticket: {ticket["ticket_number"]}\n'
f'Reporter: {ticket["reporter"]}\n'
f'Description: {ticket["description"]}',
reply_markup=get_ticket_keyboard())
await MonitoringStatus.waiting_for_ticket_action.set()
await asyncio.sleep(10)
```
And I get an error
AttributeError: 'State' object has no attribute 'set'.
If I don't use the set() function, all the tickets in the loop instantly produce messages in Telegram, but I need to process each request sequentially.
|
> I think it probably requires to use index_sequence but I'm not sure
> how.
Yes, it's very easy to do this using `index_sequence`:
template<typename... Args, std::size_t... Is>
void foo_impl(std::index_sequence<Is...>, const Args&... args)
{
using Tuple = std::tuple<Args...>;
bar( SomeClass<Args, std::tuple_element_t<Is, Tuple>> { args }... );
}
template<typename... Args>
void foo(const Args&... args)
{
foo_impl( std::index_sequence_for<Args...>{}, args... );
} |
Today I feel very frustrated, after 3 days of trying to make Django work with Amazon S3 with media files in private mode.
In the past my apps have been configured using the link below and it has always worked fine:
https://unfoldadmin.com/blog/configuring-django-storages-s3/
Now, for some reason, it is not possible to reach the goal (some media files need to be private). I understand that since Django 4.2 there is a new way to configure storages, but you are still able to continue with the old option. To be honest, I am not sure whether this is the cause of the problem:
https://docs.djangoproject.com/en/5.0/ref/settings/
```
{
"default": {
"BACKEND": "django.core.files.storage.FileSystemStorage",
},
"staticfiles": {
"BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage",
},
}
```
```
versions:
Phyhon: 3.12.2
asgiref==3.8.1
boto3==1.34.72
botocore==1.34.72
Django==5.0.3
django-storages==1.14.2
jmespath==1.0.1
python-dateutil==2.9.0.post0
s3transfer==0.10.1
six==1.16.0
sqlparse==0.4.4
tzdata==2024.1
urllib3==2.2.1
Configuration on AWS:
Object Ownsership:ACls enabled
Block all public access: off
Bucket policy:
{
"Version": "2012-10-17",
"Id": "Policy_id",
"Statement": [
{
"Sid": "my_sid",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket_name/*"
}
]
}
```
**Note**: I have to say that the ACLs appear to be working, because after uploading a private file, if you try to retrieve the URL of the content, the API will generate a long URL that expires after a few minutes. However, in order to test, I just tried to access the file directly, without the parameters, but I never got the standard AWS error message.
I really appreciate your support.
I expected that only authenticated users can access the private files. |
### Preamble
Reading the title of your question, I think you should carefully read the *address* chapter in `info sed`!
Understand the difference between *addressing* and *commands*! `s`, like `p`, is a command!
So your request is about *addressing* for *executing a command*.
### Address range and address for commands
Little sample:
<!-- language: lang-bash -->
info sed |
sed -ne '
/^4.3/,/^5/{
/^\(\o47\|\o342\o200\o230\)/{
:a;
N;
/\n / ! ba;
N;
p
}
}'
- From the line that ***begin by `4.3`*** to the line that ***begin by `5`***,
- On lines that **begin by a *quote `'`*** (or a *UTF8 open quote: `β`*),
- Place a *label* *`a`*.
- Append next line
- If the current buffer ***does not contain*** a *newline* followed by **one *space***, then branch to *label `a`*.
- Append one more line
- print current buffer.
<!-- language: lang-none -->
'/REGEXP/'
This will select any line which matches the regular expression
REGEXP. If REGEXP itself includes any '/' characters, each must be
'\%REGEXP%'
(The '%' may be replaced by any other single character.)
'/REGEXP/I'
'\%REGEXP%I'
The 'I' modifier to regular-expression matching is a GNU extension
which causes the REGEXP to be matched in a case-insensitive manner.
'/REGEXP/M'
'\%REGEXP%M'
The 'M' modifier to regular-expression matching is a GNU 'sed'
extension which directs GNU 'sed' to match the regular expression
'/[0-9]/p' matches lines with digits and prints them. Because the
second line is changed before the '/[0-9]/' regex, it will not match and
will not be printed:
$ seq 3 | sed -n 's/2/X/ ; /[0-9]/p'
1
'0,/REGEXP/'
A line number of '0' can be used in an address specification like
'0,/REGEXP/' so that 'sed' will try to match REGEXP in the first
'ADDR1,+N'
Matches ADDR1 and the N lines following ADDR1.
'ADDR1,~N'
Matches ADDR1 and the lines following ADDR1 until the next line
whose input line number is a multiple of N. The following command
### Please, RTFM:
Have a look at `info sed`, search for *sed addresses*, then *Regexp Addresses*:
> β/REGEXP/β
> This will select any line which matches the regular expression
> REGEXP. If REGEXP itself includes any β/β characters, each must be
> escaped by a backslash (β\β).
> ...
>
> β\%REGEXP%β
> (The β%β may be replaced by any other single character.)
>
> This also matches the regular expression REGEXP, but allows one to
> use a different delimiter than β/β. This is particularly useful if
> the REGEXP itself contains a lot of slashes, since it avoids the
> tedious escaping of every β/β. If REGEXP itself includes any
> delimiter characters, each must be escaped by a backslash (β\β).
### In fine, regarding your question:
So you have to precede your 1st *delimiter* by a backslash `\`:
$ echo A | sed -ne '\#A#p'
A
|
I want to use pnpm instead of the traditional npm or yarn to decrease the possibility of installing duplicate modules in my node_modules, as well as to reduce the size of node_modules. But whenever I pack my app using electron-builder, upon launch I get the error "Error: Cannot find module 'builder-util-runtime'". I've read in some Git forums that it stems from an error processing the node_modules and pnpm-lock.yaml within the Electron asar, and I was wondering if anyone has found any solutions to this.
package.json
```
{
"name": "toolbox",
"version": "2.0.12",
"description": "",
"main": "main.js",
"scripts": {
"electron": "electron ./build/app.js",
"webpack": "webpack --mode production",
"react": "webpack-dev-server --mode development --port 3000",
"electron-dev": "cross-env ELECTRON_ENV=dev electron ./build/app.js",
"webpack-prod": "webpack --mode production",
"build-win": "electron-builder build --win -c.extraMetadata.main=./build/app.js --publish never"
},
"author": "Toolbox",
"license": "ISC",
"build": {
"appId": "com.Toolbox",
"productName": "Toolbox",
"files": [
"build/*",
"node_modules/**/*",
"Functions/**/*",
"Modules/**/*",
"src/**/*"
],
"publish": [
{
"provider": "spaces",
"name": "toolbox",
"region": "nyc3"
}
],
"win": {
"icon": "toolbox.ico",
"target": "nsis"
},
"nsis": {
"installerIcon": "toolbox.ico",
"uninstallerIcon": "toolbox.ico",
"oneClick": false,
"allowToChangeInstallationDirectory": true
}
},
"dependencies": {
"@electron/remote": "^2.1.1",
"@fortawesome/free-solid-svg-icons": "^6.5.1",
"@fortawesome/react-fontawesome": "^0.2.0",
"ansi-colors": "^4.1.3",
"autosolve-http-client": "^1.0.5",
"csv-parser": "^3.0.0",
"deepmerge": "^4.3.1",
"discord-rpc": "^4.0.1",
"discord-webhook-node": "^1.1.8",
"dotenv": "^16.3.1",
"electron-store": "^8.1.0",
"electron-updater": "^6.1.7",
"express": "^4.18.2",
"fast-csv": "^4.3.6",
"fetch-cookie": "^2.1.0",
"https-proxy-agent": "^7.0.2",
"iconv-lite": "^0.6.3",
"ip": "^1.1.8",
"mailparser": "^3.6.5",
"moment-timezone": "^0.5.44",
"mongodb": "^6.3.0",
"node-fetch": "2.6.11",
"node-html-parser": "^6.1.11",
"node-imap": "^0.9.6",
"node-machine-id": "^1.1.12",
"node-notifier": "^10.0.1",
"node-random-name": "^1.0.1",
"node-wav-player": "^0.2.0",
"puppeteer": "^21.6.1",
"puppeteer-extra": "^3.3.6",
"puppeteer-extra-plugin-stealth": "^2.11.2",
"quoted-printable": "^1.0.1",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.21.2",
"systeminformation": "^5.21.20",
"tough-cookie": "^4.1.3",
"uuid": "^9.0.1",
"v8-compile-cache": "^2.4.0",
"webpack-node-externals": "^3.0.0",
"winston": "^3.11.0"
},
"devDependencies": {
"@babel/core": "^7.23.6",
"@babel/preset-env": "^7.23.6",
"@babel/preset-react": "^7.23.3",
"babel-loader": "^9.1.3",
"cross-env": "^7.0.3",
"css-loader": "^6.9.0",
"electron": "^28.1.3",
"electron-builder": "^24.9.1",
"file-loader": "^6.2.0",
"html-webpack-plugin": "^5.5.4",
"style-loader": "^3.3.4",
"webpack": "^5.89.0",
"webpack-cli": "^5.1.4",
"webpack-dev-server": "^4.15.1"
}
}
``` |
{"OriginalQuestionIds":[46151707],"Voters":[{"Id":794749,"DisplayName":"gre_gor"},{"Id":213269,"DisplayName":"Jonas"},{"Id":3689450,"DisplayName":"VLAZ"}]} |
{"Voters":[{"Id":794749,"DisplayName":"gre_gor"},{"Id":213269,"DisplayName":"Jonas"},{"Id":3689450,"DisplayName":"VLAZ"}],"SiteSpecificCloseReasonIds":[16]} |
When you download the data, the best way is to download it as a workbook. When you force it to be a ```csv``` then it looks like this
[![Downloading the work book as a csv][1]][1].
What I did was to download it as an Excel document and then upload it to Google Sheets before downloading it as a CSV (Google Sheets has that option). [![Data in google sheets][2]][2]
Now you can load your data from Google Sheets or download a CSV and upload it to BQ. I am feeling a bit too lazy to code, but you could also open the Excel file in ```Python```, for example, and save it as a ```csv``` or save it directly to ```BQ``` using ```pandas_gbq```.
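For what it's worth, a rough sketch of that last option (the file name, dataset, and project ID below are placeholders, and `pandas_gbq` needs valid GCP credentials to actually run):

```python
import pandas as pd

# In practice the frame would come from the downloaded workbook, e.g.
#   df = pd.read_excel("singstat_export.xlsx")
# A tiny stand-in frame keeps this sketch self-contained.
df = pd.DataFrame({"period": ["2023", "2024"], "value": [101.2, 99.8]})

# Option 1: save it as a plain CSV and upload that to BQ
df.to_csv("singstat_export.csv", index=False)

# Option 2: push the frame straight into BQ (placeholder IDs)
# import pandas_gbq
# pandas_gbq.to_gbq(df, "my_dataset.my_table", project_id="my-project")
```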
Final output.[![enter image description here][3]][3]
[1]: https://i.stack.imgur.com/MI6pm.png
[2]: https://i.stack.imgur.com/kfcx4.png
[3]: https://i.stack.imgur.com/mI5mt.png |
I've been encountering some difficulties while trying to install React Native. Whenever I attempt to set up React Native on my system, I encounter an error message
The error message I'm receiving is
> [Error: Cannot find module 'C:\Program Files\nodejs\node_modules\npm\node_modules\path-scurry\node_modules\lru-cache\dist\cjs\index.js'].
Despite trying various troubleshooting methods, I haven't been successful in resolving the issue.
How to troubleshoot and resolve this installation problem?
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/OUnLa.png |
facing problem installing react native: Cannot find module ...\node_modules\lru-cache\dist\cjs\index.js |
You can't merge or rebase initially; just push with the `-u` flag, which sets the upstream to `origin/main`:
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin $(repolink).git
git push -u origin main
For the next push, use `git push origin main` (or just `git push`, since the upstream is now set) |
Using CSS module scripts (still not finalized, Stage 3, and currently only working in Chromium-based browsers):
```javascript
// import the css and directly get a constructed stylesheet from it
import css from './path/to/styles.css' with { type: 'css' };
// ...
// then, in the constructor:
this.shadowRoot.adoptedStyleSheets.push(css);
```
For more info: https://github.com/tc39/proposal-import-attributes |
I'm using Drizzle for my ORM and I'm able to generate migrations successfully; however, when I try to push the migrations, this is the error I get:
error: password authentication failed for user "postgres"
at D:\code\toni\inventory\server\node_modules\pg-pool\index.js:45:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at PgDialect.migrate (D:\code\toni\inventory\server\node_modules\src\pg-core\dialect.ts:61:3)
at migrate (D:\code\toni\inventory\server\node_modules\src\node-postgres\migrator.ts:10:2)
at main (d:\code\toni\inventory\server\src\db\migrate.ts:15:3) {
length: 104,
severity: 'FATAL',
code: '28P01',
detail: undefined,
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'auth.c',
line: '326',
routine: 'auth_failed'
}
My connection string is as follows: `postgresql://postgres:postgres@localhost:5432/testDB?schema=public` |
Python GET request returns data when tried in Postman but the generated Python code is not working |
|python|api|python-requests| |
{"Voters":[{"Id":22180364,"DisplayName":"Jan"},{"Id":2530121,"DisplayName":"L Tyrone"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[13]} |
{"Voters":[{"Id":9599344,"DisplayName":"uber.s1"}]} |
{"Voters":[{"Id":712649,"DisplayName":"Mathias R. Jessen"},{"Id":354577,"DisplayName":"Chris"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[11]} |
{"Voters":[{"Id":5320906,"DisplayName":"snakecharmerb"},{"Id":1974224,"DisplayName":"Cristik"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}]} |
I have an application that occasionally needs to be able to read improperly closed gzip files. The files behave like this:
```
>>> import gzip
>>> f = gzip.open("path/to/file.gz", 'rb')
>>> f.read()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/gzip.py", line 292, in read
return self._buffer.read(size)
File "/usr/lib/python3.8/gzip.py", line 498, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
```
I wrote a function to handle this by reading the file line by line and catching the `EOFError`, and now I want to test it.
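A simplified sketch of that function (not my exact code, but the same read-line-by-line, catch-`EOFError` shape):

```python
import gzip

def read_lines_tolerant(path):
    """Read lines from a gzip file, keeping whatever was readable
    if the end-of-stream marker is missing."""
    lines = []
    with gzip.open(path, "rt") as f:
        try:
            for line in f:
                lines.append(line)
        except EOFError:
            pass  # truncated stream: keep the lines read so far
    return lines
```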
The input to my test should be a gz file that behaves in the same way as demonstrated.
How do I make this happen in a controlled testing environment?
I would really strongly prefer not to keep a copy of the improperly closed files that I get in production. |
How to create an improperly closed gzip file using python? |
|python|python-3.x|unit-testing|pytest|gzip| |
I have tried the below approach:
In my case I have used the default schema provided by Databricks.
import pandas as pd
tables = spark.sql("SHOW TABLES IN default").toPandas()
output_df = pd.DataFrame({'Table Name': tables['tableName'], 'Location': tables['database']})
display(output_df)
Using pyspark:
from pyspark.sql import SparkSession
tables = spark.sql("SHOW TABLES IN default")
output_df = tables.select("tableName", "database")
display(output_df)
**Results:**
```
Table Name Location
dilip01 default
dilip010 default
dilip01_temp default
dilip1 default
dilip_02 default
dilip_02_transformed default
table1 default
table2 default
```
In the above code, **Spark SQL** is used to execute the `SHOW TABLES` command in the default schema,
which returns a DataFrame containing information about the tables in that schema.
`.toPandas()` converts the Spark DataFrame to a Pandas DataFrame, and then a new Pandas DataFrame is created with two columns: '**Table Name**' and '**Location**'.
It uses the 'tableName' column from the original DataFrame for the table names and the 'database' column for the schema names. |
null |
{"Voters":[{"Id":14122,"DisplayName":"Charles Duffy"},{"Id":13664137,"DisplayName":"moken"},{"Id":17562044,"DisplayName":"Sunderam Dubey"}],"SiteSpecificCloseReasonIds":[13]} |
How can I use pnpm within my electron application? |
|node.js|electron|pnpm| |
null |