doc_23535400
|
I have a stacked bar chart combined with a second line chart. I'd like the stacked bar's y axis to be on the left and the line chart's scale to be on the right, i.e. a swap for both y axes.
Here is my chart:
chart1 = BarChart()
chart1.type = "col"
chart1.style = 12
chart1.grouping = "stacked"
chart1.overlap = 100
chart1.layout = Layout(
ManualLayout(
x=0.12, y=0.25,
h=0.9, w=0.75,
xMode="edge",
yMode="edge",
)
)
...
chart2 = LineChart()
chart2.style = 12
chart2.y_axis.axId = 0
...
chart1.y_axis.crosses = "max"
chart1 += chart2
WorkSheetOne.add_chart(chart1, 'A1')
A:
Resolved in comments:
Best bet is to change the order of the charts.
The logic for this is hard-coded in the library and difficult to change due to the XML. – Charlie Clark
I tried to change the order with:
chart1.y_axis.crosses = "max"
chart2 += chart1
WorkSheetOne.add_chart(chart2, 'A1')
But this didn't work until I also changed:
chart2.y_axis.crosses = "max"
I also had to set chart2.title, because chart1.title was now ignored.
For a bar chart with a single data series I was able to reverse the order with this:
chart.x_axis.scaling.orientation = "maxMin"
chart.y_axis.scaling.orientation = "minMax"
| |
doc_23535401
|
<div class="form-main-section classiera-post-cat">
<div class="classiera-post-main-cat">
<h4 class="classiera-post-inner-heading">
<?php esc_html_e('Select an Occasion', 'classiera') ?> :
</h4>
<ul class="list-unstyled list-inline">
<?php
$categories = get_terms('category', array(
'hide_empty' => 0,
'parent' => 0,
'order'=> 'ASC'
)
);
foreach ($categories as $category) {
//print_r($category);
</ul><!--list-unstyled-->
<input class="classiera-main-cat-field" name="classiera-main-cat-field" type="hidden" value="">
</div><!--classiera-post-main-cat-->
<div class="classiera-post-sub-cat">
<h4 class="classiera-post-inner-heading">
<?php esc_html_e('Select a Category', 'classiera') ?> :
</h4>
<ul class="list-unstyled classieraSubReturn">
</ul>
<input class="classiera-sub-cat-field" name="classiera-sub-cat-field" type="hidden" value="">
</div><!--classiera-post-sub-cat-->
<!--ThirdLevel-->
<div class="classiera_third_level_cat">
<h4 class="classiera-post-inner-heading">
<?php esc_html_e('Select a Category', 'classiera') ?> :
</h4>
<ul class="list-unstyled classieraSubthird">
</ul>
<input class="classiera_third_cat" name="classiera_third_cat" type="hidden" value="">
</div>
<!--ThirdLevel-->
</div>
Any help would be appreciated. I don't need all the icons set for each category, just the names of the categories.
A: Close the foreach loop and echo one list item per category, storing the value you need and displaying the name:
<?php
$categories = get_terms('category', array(
    'hide_empty' => 0,
    'parent' => 0,
    'order' => 'ASC'
    )
);
foreach ($categories as $category) { ?>
    <li data-value="<?php echo $category->term_id; /* value you need to store */ ?>">
        <?php echo $category->name; /* name you want to display */ ?>
    </li>
<?php }
//print_r($category);
?>
| |
doc_23535402
|
What I would like to achieve: when sending files larger than 20MB, save the files aside and generate a download link instead.
I would like to know:
*
*Are there any plugins for Courier that do this?
*What tools do I need to use to develop such a plugin?
A: How to make MTA replace large attachments with links to a centrally-stored copy [MIMEDefang Milter for sendmail/postfix/...]
You may consider using MIMEDefang milter available under GPL license for sendmail and postfix.
MIMEDefang Description
Mail Inspection and Modification
MIMEDefang can inspect and modify e-mail messages as they pass through your mail relay. MIMEDefang is written in Perl, and its filter actions are expressed in Perl, so it's highly flexible. Here are some things that you can do very easily with MIMEDefang:
*
*Delete or alter attachments based on file name, contents, results of a virus scan, attachment size, etc.
*Replace large attachments with links to a centrally-stored copy to ease the burden on POP3 users with slow modem links.
[...]
| |
doc_23535403
|
I tried
Select count(*) from tabele
Where AVG(grade)> 7.6
and AVG(Grade)<8.3
Group by id
But there is always an error
A: The approach below is to first aggregate by student id and assert the average grade requirements in a HAVING clause. Then, subquery to find the count of such matching students.
SELECT COUNT(*)
FROM
(
SELECT id
FROM yourTable
GROUP BY id
HAVING AVG(Grade) > 7.6 AND AVG(Grade) < 8.3
) t;
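Assuming a simple two-column table, the subquery approach can be sanity-checked with SQLite from Python's standard library (the table and sample data here are made up):

```python
# Sanity-check the subquery approach with sqlite3 and made-up sample data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grades (id INTEGER, Grade REAL);
    INSERT INTO grades VALUES
        (1, 7.0), (1, 9.0),  -- avg 8.0  -> counted
        (2, 7.0), (2, 7.5),  -- avg 7.25 -> excluded
        (3, 8.0), (3, 8.2);  -- avg 8.1  -> counted
""")

# Inner query keeps only ids whose average grade is in (7.6, 8.3);
# outer query counts the surviving ids.
(count,) = conn.execute("""
    SELECT COUNT(*)
    FROM (
        SELECT id
        FROM grades
        GROUP BY id
        HAVING AVG(Grade) > 7.6 AND AVG(Grade) < 8.3
    )
""").fetchone()
print(count)  # 2
```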
Assuming your RDBMS supports window functions, here is one way to get the count with a single query:
SELECT DISTINCT COUNT(*) OVER () AS total_cnt
FROM yourTable
GROUP BY id
HAVING AVG(Grade) > 7.6 AND AVG(Grade) < 8.3;
A: You should use a HAVING clause here instead of WHERE, since the condition involves an aggregate. Also, sum(count(distinct id)) can be used to count the values in RDBMSs that allow nested aggregates (e.g. Oracle). Alternatively, you could use a subquery to achieve the same result.
select sum(count(distinct id)) from table_name
group by id
having AVG(grade) BETWEEN 7.6 and 8.3;
A: Something like this should work:
WITH _CTE AS
(
SELECT StudentId, AVG(Grade) as avgGrade
FROM MyTable
GROUP BY StudentId
)
SELECT COUNT(*)
FROM _CTE
WHERE avgGrade > 7.6
AND avgGrade < 8.3
| |
doc_23535404
|
I can disable screen pinning for a bit and then perform the transfer, but that is a security risk.
How can I do this?
Here is all the code if you want to try it. All you need to do is enable screen pinning manually through your app settings (so it is less code and still produces the same result). I tested this using two Nexus 7 tablets, both running Android 5.0.
You don't have to read all this code; this question can probably be solved if you know something I can add to my manifest that would allow NFC while screen pinning is active.
AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.androidnfc"
android:versionCode="1"
android:versionName="1.0" >
<uses-sdk
android:minSdkVersion="16"
android:targetSdkVersion="19" />
<uses-permission android:name="android.permission.NFC"/>
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name="com.example.androidnfc.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter>
<action android:name="android.nfc.action.NDEF_DISCOVERED" />
<category android:name="android.intent.category.DEFAULT" />
<data android:mimeType="text/plain" />
</intent-filter>
</activity>
</application>
</manifest>
MainActivity.java
public class MainActivity extends Activity implements CreateNdefMessageCallback, OnNdefPushCompleteCallback
{
TextView textInfo;
EditText textOut;
NfcAdapter nfcAdapter;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textInfo = (TextView)findViewById(R.id.info);
textOut = (EditText)findViewById(R.id.textout);
nfcAdapter = NfcAdapter.getDefaultAdapter(this);
nfcAdapter.setNdefPushMessageCallback(this, this);
nfcAdapter.setOnNdefPushCompleteCallback(this, this);
}
@Override
protected void onResume()
{
super.onResume();
Intent intent = getIntent();
String action = intent.getAction();
if(action.equals(NfcAdapter.ACTION_NDEF_DISCOVERED))
{
Parcelable[] parcelables = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
NdefMessage inNdefMessage = (NdefMessage)parcelables[0];
NdefRecord[] inNdefRecords = inNdefMessage.getRecords();
NdefRecord NdefRecord_0 = inNdefRecords[0];
String inMsg = new String(NdefRecord_0.getPayload());
textInfo.setText(inMsg);
}
}
@Override
protected void onNewIntent(Intent intent) {
setIntent(intent);
}
@Override
public void onNdefPushComplete(NfcEvent event) {
final String eventString = "onNdefPushComplete\n" + event.toString();
runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getApplicationContext(), eventString, Toast.LENGTH_LONG).show();
}
});
}
@Override
public NdefMessage createNdefMessage(NfcEvent event) {
String stringOut = textOut.getText().toString();
byte[] bytesOut = stringOut.getBytes();
NdefRecord ndefRecordOut = new NdefRecord(
NdefRecord.TNF_MIME_MEDIA,
"text/plain".getBytes(),
new byte[] {},
bytesOut);
NdefMessage ndefMessageout = new NdefMessage(ndefRecordOut);
return ndefMessageout;
}
}
layout
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:orientation="vertical"
tools:context="com.example.androidnfc.MainActivity" >
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:textStyle="bold" />
<EditText
android:id="@+id/textout"
android:layout_width="match_parent"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/info"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
</LinearLayout>
A: I'm not sure if this actually answers your question, but I'd like to summarize my findings:
When trying your example on Android 5.0.1 (LRX22C on Nexus 4), the receiving side automatically unpins the screen upon receiving the NDEF message and (re-)starts the activity. So it seems that the intent filter that is registered in the manifest gets priority over (manual?) screen pinning.
I'm aware that this does not quite match the experiences described in the question. I'm wondering if this is due to the different Android version (5.0 vs. 5.0.1) or due to the use of manual screen pinning instead of programmatic screen pinning...
In my test setup, I was able to solve the problem (i.e. prevent the activity from getting automatically unpinned) by using the foreground dispatch system to register the activity to receive its NDEF message:
In your onResume() method create a pending intent like this and enable foreground dispatch:
PendingIntent pi = this.createPendingResult(0x00A, new Intent(), 0);
nfcAdapter.enableForegroundDispatch(this, pi, null, null);
You will then receive intents notifying you about discovered tags through the activity's onActivityResult() method:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
switch (requestCode) {
case 0x00A:
onNewIntent(data);
break;
}
}
Moreover, you have to disable the foreground dispatch in your onPause() method:
nfcAdapter.disableForegroundDispatch(this);
| |
doc_23535405
|
gst-launch-0.10 v4l2src device=/dev/video0 ! video/x-raw-yuv,width=320,height=240 ! videobox left=-320 border-alpha=0 ! queue ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink v4l2src device=/dev/video1 ! video/x-raw-yuv,width=320,height=240 ! videobox left=1 ! queue ! send-config=true ! udpsink host=127.0.0.1 port=5000
this gives me error:
WARNING: erroneous pipeline: link without source element
but without the udp it works fine.
gst-launch-0.10 v4l2src device=/dev/video0 ! video/x-raw-yuv,width=320,height=240 ! videobox left=-320 border-alpha=0 ! queue ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink v4l2src device=/dev/video1 ! video/x-raw-yuv,width=320,height=240 ! videobox left=1 ! queue ! mix.
my client side is this:
gst-launch udpsrc uri=udp://127.0.0.1:5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)1, config=(string)000001b001000001b58913000001000000012000c48d88007d0a041e1463000001b24c61766335322e3132332e30, payload=(int)96, ssrc=(uint)298758266, clock-base=(uint)3097828288, seqnum-base=(uint)63478" ! rtpmp4vdepay ! ffdec_mpeg4 ! autovideosink
What am I doing wrong? Any help would be great.
A: The warning is the reason you are not able to send:
"queue ! send-config=true ! udpsink" is the 'link without source element'
What is send-config=true? Isn't this a property for some element you didn't type there?
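As a sketch only: send-config is a property of the rtpmp4vpay element, so the second branch appears to be missing its encoder and payloader. Based on the client's MP4V-ES caps, the sender branch would plausibly look like this (ffenc_mpeg4 and rtpmp4vpay are assumptions, not taken from the question):

```
gst-launch-0.10 v4l2src device=/dev/video1 ! video/x-raw-yuv,width=320,height=240 ! queue ! ffenc_mpeg4 ! rtpmp4vpay send-config=true ! udpsink host=127.0.0.1 port=5000
```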
| |
doc_23535406
|
For example, let's say I have this picture of a cookie, like here.
And I have a color picker that lets a person select a color. I want to change the color of part of the image (in this case, say, the chocolate chips) to the color that is picked.
Is that possible to do in JavaScript / jQuery?
A: It's possible with JavaScript and the canvas element by directly manipulating the pixel data. The example below turns blue into red.
Live Demo
var canvas = document.getElementById("canvas"),
ctx = canvas.getContext("2d"),
image = document.getElementById("testImage");
canvas.height = canvas.width = 45;
ctx.drawImage(image,0,0);
var imgd = ctx.getImageData(0, 0, 45, 45),
pix = imgd.data;
for (var i = 0, n = pix.length; i <n; i += 4) {
if(pix[i + 2] > 20){ // Blue threshold
// Swap red and blue component values.
var redVal = pix[i]; // Copy the current red component value
pix[i] = pix[i + 2]; // Assign the current blue component value to red
pix[i+2] = redVal; // Assign the old red value to blue.
}
}
ctx.putImageData(imgd, 0, 0);
It's not very fast to do it this way, however, and for larger images you would see a noticeable performance drop depending on the browser. As for jQuery, it provides nothing related to this that would help.
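To apply a user-picked color rather than swapping channels, the same loop can overwrite the matching pixels. A small sketch (the blue-channel threshold is just the heuristic from the code above; a real color picker would supply the [r, g, b] triple):

```javascript
// Recolor every pixel whose blue channel exceeds a threshold.
// `pix` is the flat RGBA array from getImageData().data.
function recolor(pix, [r, g, b], threshold = 20) {
    for (let i = 0; i < pix.length; i += 4) {
        if (pix[i + 2] > threshold) { // same blue-channel heuristic as above
            pix[i] = r;               // red
            pix[i + 1] = g;           // green
            pix[i + 2] = b;           // blue (alpha at i + 3 untouched)
        }
    }
    return pix;
}
```

After running it over imgd.data, write the pixels back with ctx.putImageData(imgd, 0, 0) as in the example above.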
| |
doc_23535407
|
KDL::Tree my_tree;
if (!kdl_parser::treeFromFile("robot.urdf", my_tree)){
std::cout << "Failed to construct kdl tree"<< std::endl;
return false;
}
The above code works with ROS. However, in another project, which is not a ROS project, I need to construct this KDL tree. That computer doesn't have ROS and, sadly, the OS is Windows.
How can I install kdl_parser without using ROS?
PS: I do not want to install ROS on Windows for this task.
A: You should be able to build the library on Windows, too:
http://www.orocos.org/kdl/installation-manual
| |
doc_23535408
|
The demo is shown in http://jsfiddle.net/lightbringer/FsSmy/
<ul id="userstorylist" data-role="listview" data-filter="true">
<li id="draggable">
<div data-role="collapsible" data-theme="b" data-content-theme="d">
<h3>Userstory 1</h3>
<p>Content</p>
</div>
</li>
<li id="draggable">
<div data-role="collapsible" data-theme="b" data-content-theme="d">
<h3>Userstory 2</h3>
<p>Content</p>
</div>
</li>
</ul>
Could anyone show me how to remove the border? I would like the items to stick together like a normal listview.
Thanks in advance.
A: I have found a solution myself.
<ul data-role="listview" data-filter="true">
<li class="custom-li">
<div class="custom-collapsible" data-name="1" data-role="collapsible" data-theme="b" data-content-theme="d" data-corners="false">
<h3>Userstory 1</h3>
<p>Content</p>
</div>
</li>
<li class="custom-li">
<div class="custom-collapsible" data-name="2" data-role="collapsible" data-theme="b" data-content-theme="d" data-corners="false">
<h3>Userstory 2</h3>
<p>Content</p>
</div>
</li>
</ul>
with css styling:
.custom-li {
padding: 0 !important;
border-width: 0 !important;
}
.custom-collapsible {
margin: 0 !important;
border-bottom-right-radius: 0 !important;
border-bottom-left-radius: 0 !important;
border-width:0 !important;
}
This is the demo: http://jsfiddle.net/lightbringer/jWaEv/
| |
doc_23535410
|
The task goes:
The company has several business offices across the country. In addition to tire assembly, it offers several other services: sales (tires and other products), tire maintenance, etc.
To change tires at the beginning of the season, it is necessary to book by phone. The operator receives a call and, for each box where it is possible to change the tires, checks the booking records. Changing the tires takes 30 min.
Upon arrival, the customer comes to the counter where, based on the registration of the vehicle, a work order is made for the employee in the box (on which the tire code of the contract is recorded). The tires are then transported from the warehouse to the box for assembly. When the tires are stored, a contract is signed to ensure that after the contracted deadline of 6 months the tires are kept for another 60 days, which is paid at $0.10 per tire daily. The price of tire storage depends on the size and is calculated per piece.
There is a possibility of joining the Loyalty Club and getting discounts for all family cars. The discount is valid only for services, not for products.
With companies that want the services of this tire shop (rent-a-car, utility vehicles, etc.), a contract will be concluded to provide a more affordable price for tire assembly and storage.
| |
doc_23535411
|
Ie.
root_url/countries/france
root_url/paris/some_place
root_url/paris
Here's my code to be more precise.
resources :countries do
resources :cities
end
resources :cities do
resources :places, :reviews
end
match ':id' => 'cities#show', :as => :city, :method => :get
match ':city_id/:id' => 'places#show', :as => :city_place, :method => :get
That seems to work perfectly except when I try to edit records. The HTML is as below:
<% form_for @city do |f| %>
<% end %>
Produces:
<form accept-charset="UTF-8" action="/kiev" class="edit_city" id="edit_city_3581" method="post">
Which would only work if it were:
<form accept-charset="UTF-8" action="/cities/kiev" class="edit_city" id="edit_city_3581" method="post">
I know I could simply provide a more advanced form_for route explicitly to get around this, but I'm wondering if there's something better to do in my routes.rb to make my life easier rather than patching?
Thanks
A: How about renaming your custom routes like this and letting the normal resource routes handle edit etc.:
get ':id' => 'cities#show', :as => :city_shortcut
get ':city_id/:id' => 'places#show', :as => :city_place_shortcut
| |
doc_23535412
|
For example, if I have
str1 = "world!";
str2 = "Hello, ";
Is there some way to combine those strings into one string that contains "Hello, world!"?
A: It can be done in many ways. Here is one approach which uses dynamic memory allocation:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
char* str1 = "world!";
char* str2 = "Hello, ";
char* p = malloc(strlen(str1) + strlen(str2) + 1); // Allocate memory for the new string
strcpy(p, str2); // Copy str2 to the new string
strcat(p, str1); // Concatenate str1
printf("%s", p); // print it
free(p); // free the allocated memory
return 0;
}
Try the code here: http://ideone.com/oydJHN
Instead of dynamic memory allocation you can use something like:
#define LENGTH_OF_RESULT 100
char result[LENGTH_OF_RESULT];
instead of char* p;. You must make sure that LENGTH_OF_RESULT is large enough to hold the concatenated string.
The benefit of dynamic memory allocation is that you don't need to worry about the size of the destination string - just allocate what you need based on the length of the input strings and add 1 for the null-termination.
The downside of using dynamic memory allocation is that you must remember to free the memory when done with it.
A: You can do it using statically declared memory, as opposed to dynamically allocating it, and avoid the additional calls to malloc and free. Either way is fine. A simple example is:
#include <stdio.h>
#include <string.h>
enum { MAXL = 64 }; /* constant for max concatenated length */
int main (void) {
char str1[MAXL] = "hello";
char str2[MAXL] = "world";
char dest[2*MAXL+1] = "";
strcpy (dest, str1);
strcat (dest, " ");
strcat (dest, str2);
strcat (dest, "!");
printf ("\n%s\n\n", dest);
return 0;
}
Example Use
$ ./bin/concat
hello world!
The key either way is simply ensuring you have adequate space in your destination string to hold the concatenated result. Combining two strings at their maximum character counts with a single separator means the destination can need up to twice the storage of the existing strings plus the separator (you get one character back because the combined string needs only one nul terminator instead of two). If you add additional separator characters, you must add those to the total storage as well. Let me know if you have questions.
A: You can make your own function. It is quite easy:
#include <stdio.h>
#include <string.h>
void my_strcat(char *dst, const char *src);
int main(void)
{
char dst[50] = "world";
char src[] = "hello ";
my_strcat(dst, src);
printf("%s\n", dst);
return 0;
}
void my_strcat(char *dst, const char *src)
{
size_t dst_len = strlen(dst) + 1, src_len = strlen(src);
memmove(dst + src_len, dst, dst_len);
memcpy(dst, src, src_len);
}
| |
doc_23535413
|
const callAndParseHttp = async (url) => {
const response = await got(url);
return await parseXml(response.body);
};
const parseXml = async xmlData => {
try {
const json = parser.toJson(xmlData); // xmlData is already the response body
return JSON.parse(json);
} catch (err) {
return err;
}
};
And I have written a unit test with Sinon for it that looks like:
describe('/ handler', () => {
let spy;
before(() => {
spy = sinon.spy(unitParser, 'callAndParseHttp');
});
afterEach(() => {
spy.restore();
});
it('unit parser testing', async () => {
await unitParser.callAndParseHttp(
'http://www.mocky.io/v2/5e34242423'
);
expect(spy.callCount).to.equal(1);
});
})
Do I need to create stub for this test ? I'm new to unit testing. Tests are passing correctly.
A: This way you are not testing it completely. A better approach is to mock the HTTP call using nock and test the function end to end. You can see examples here:
https://www.npmjs.com/package/nock
As written, the test doesn't really verify anything, because the spy just wraps the very function you are calling directly.
| |
doc_23535414
|
Mainly, they consume the same Kafka topic (30,000 records/second), process the data, and store it in a dedicated Elasticsearch cluster (sparkStr1 has ES1 and sparkStr2 has ES2).
*
*When the first one is running = all is good
*When both are running = the first one is slowed (and the second one is not very fast) => I assumed at first it was due to the Kafka topic
When I re-ran the second one without the ES part (saveToEs), to rule that case out, all was good again!
I wondered if it could be a network issue, but it seems not (I looked at the input/output network interface loads; they seemed normal).
I don't understand what's going on or where I should look next => do you have any idea about this?
| |
doc_23535415
|
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { TranslateModule } from '@ngx-translate/core';
import { IonicModule } from '@ionic/angular';
import { HomePageRoutingModule } from './home-routing.module';
import { HomePage } from './home.page';
import { SwiperModule } from 'swiper/angular';
@NgModule({
imports: [
CommonModule,
FormsModule,
IonicModule,
TranslateModule,
HomePageRoutingModule
],
declarations: [HomePage]
})
export class HomePageModule {}
Visual Studio Code doesn't warn me of any error, but when I try to run (either ng serve or ionic serve), it results in an error.
(The real output is too long to fit into this post. This is just a small part of it.)
[ng] node_modules/swiper/angular/angular/src/swiper.module.d.ts:6:21 - error TS2694: Namespace '"C:/ORIGIN/bookspeak-update/node_modules/@angular/core/core"' has no exported member 'ɵɵFactoryDeclaration'.
[ng]
[ng] 6 static ɵfac: i0.ɵɵFactoryDeclaration<SwiperModule, never>;
[ng] ~~~~~~~~~~~~~~~~~~~~
[ng] node_modules/swiper/angular/angular/src/swiper.module.d.ts:7:21 - error TS2694: Namespace '"C:/ORIGIN/bookspeak-update/node_modules/@angular/core/core"' has no exported member 'ɵɵNgModuleDeclaration'.
[ng]
[ng] 7 static ɵmod: i0.ɵɵNgModuleDeclaration<SwiperModule, [typeof i1.SwiperComponent, typeof i2.SwiperSlideDirective], [typeof i3.CommonModule], [typeof i1.SwiperComponent, typeof i2.SwiperSlideDirective]>;
[ng] ~~~~~~~~~~~~~~~~~~~~~
[ng] node_modules/swiper/angular/angular/src/swiper.module.d.ts:8:21 - error TS2694: Namespace '"C:/ORIGIN/bookspeak-update/node_modules/@angular/core/core"' has no exported member 'ɵɵInjectorDeclaration'.
As soon as I remove import { SwiperModule } from 'swiper/angular'; from the module, everything immediately works (either with ng serve or ionic serve as well).
I've tried some solutions, like removing node_modules then npm cache clean then npm install, trying to use older version of Swiper and several other things, but to no avail.
For more info, I use Bookspeak template and Ionic Framework for this project. My goal here is to make carousels that work on both web and mobile app. (Bookspeak mainly focus on mobile app, so it produces hideous horizontal scrollbars when run on web.)
This is package.json:
{
"name": "Bookspeak",
"version": "0.0.1",
"author": "Ionic Framework",
"homepage": "https://ionicframework.com/",
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build",
"test": "ng test",
"lint": "ng lint",
"e2e": "ng e2e"
},
"private": true,
"dependencies": {
"@angular/cli": "^9.1.13",
"@angular/common": "~9.1.6",
"@angular/core": "~9.1.6",
"@angular/forms": "~9.1.6",
"@angular/platform-browser": "~9.1.6",
"@angular/platform-browser-dynamic": "~9.1.6",
"@angular/router": "~9.1.6",
"@capacitor/android": "3.4.3",
"@capacitor/app": "1.1.1",
"@capacitor/core": "3.4.3",
"@capacitor/haptics": "1.1.4",
"@capacitor/keyboard": "1.2.2",
"@capacitor/status-bar": "1.0.8",
"@ionic-native/core": "^5.0.7",
"@ionic-native/splash-screen": "^5.0.0",
"@ionic-native/status-bar": "^5.0.0",
"@ionic/angular": "^5.0.0",
"@ngx-translate/core": "^13.0.0",
"@ngx-translate/http-loader": "^6.0.0",
"animate.css": "^4.1.0",
"cordova-android": "^8.1.0",
"rxjs": "~6.5.1",
"swiper": "8.0.7",
"tslib": "^1.10.0",
"zone.js": "~0.10.2"
},
"devDependencies": {
"@angular-devkit/build-angular": "~0.901.5",
"@angular/compiler": "~9.1.6",
"@angular/compiler-cli": "~9.1.6",
"@angular/language-service": "~9.1.6",
"@capacitor/cli": "3.4.3",
"@ionic/angular-toolkit": "^2.1.1",
"@types/jasmine": "~3.5.0",
"@types/jasminewd2": "~2.0.3",
"@types/node": "^12.11.1",
"codelyzer": "^5.1.2",
"cordova-plugin-device": "^2.0.2",
"cordova-plugin-ionic-keyboard": "^2.2.0",
"cordova-plugin-ionic-webview": "^4.2.1",
"cordova-plugin-splashscreen": "^5.0.2",
"cordova-plugin-statusbar": "^2.4.2",
"cordova-plugin-whitelist": "^1.3.3",
"jasmine-core": "~3.5.0",
"jasmine-spec-reporter": "~4.2.1",
"karma": "~5.0.0",
"karma-chrome-launcher": "~3.1.0",
"karma-coverage-istanbul-reporter": "~2.1.0",
"karma-jasmine": "~3.0.1",
"karma-jasmine-html-reporter": "^1.4.2",
"protractor": "~5.4.3",
"ts-node": "~8.3.0",
"tslint": "~6.1.0",
"typescript": "~3.8.3"
},
"description": "An Ionic project",
"cordova": {
"plugins": {
"cordova-plugin-whitelist": {},
"cordova-plugin-statusbar": {},
"cordova-plugin-device": {},
"cordova-plugin-splashscreen": {},
"cordova-plugin-ionic-webview": {
"ANDROID_SUPPORT_ANNOTATIONS_VERSION": "27.+"
},
"cordova-plugin-ionic-keyboard": {}
},
"platforms": [
"android"
]
}
}
A: You need to add SwiperModule to the module's imports array in order to use it:
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { TranslateModule } from '@ngx-translate/core';
import { IonicModule } from '@ionic/angular';
import { HomePageRoutingModule } from './home-routing.module';
import { HomePage } from './home.page';
import { SwiperModule } from 'swiper/angular';
@NgModule({
imports: [
CommonModule,
FormsModule,
IonicModule,
TranslateModule,
HomePageRoutingModule,
SwiperModule
],
declarations: [HomePage]
})
export class HomePageModule {}
Also try re-installing the package npm i swiper.
| |
doc_23535416
|
Is it possible to download the necessary files beforehand with my normal account which does have internet access and then use those files to run create-react-app offline?
A: When you create the app, it has to download the installation packages. Maybe if you spin up your own registry locally with caching, you can install the packages once while connected, cache them, and then rely on the cache. I am wondering how you develop in an environment that restrictive.
On the other hand, you could just clone the create-react-app repository, then npm link the repo globally, and you can execute the app with all files offline. I suspect the installation step will still not work, though.
| |
doc_23535417
|
function taskFirst(k, v) {
console.log(k, v);
}
function taskSecond(k, v) {
console.log(k, v);
}
function run() {
var g1 = "Something";
var g2 = "Something";
var g3 = "Something";
var g4 = "Something";
async.series(
[
taskFirst(g1, g2),
taskSecond(g3, g4)
],
function(error, result){
}
);
}
What is the right way to pass custom variables along with the async.js callback function?
A: This answer to an async GitHub issue has worked for me perfectly:
https://github.com/caolan/async/issues/241#issuecomment-14013467
for you it would be something like:
var taskFirst = function (k, v) {
return function(callback){
console.log(k, v);
callback();
}
};
A: You could do something like this:
function taskFirst(k, v, callback) {
console.log(k, v);
// Do some async operation
if (error) {
callback(error);
} else {
callback(null, result);
}
}
function taskSecond(k, v, callback) {
console.log(k, v);
// Do some async operation
if (error) {
callback(error);
} else {
callback(null, result);
}
}
function run() {
var g1 = "Something";
var g2 = "Something";
var g3 = "Something";
var g4 = "Something";
async.series(
[
// Here we need to call next so that async can execute the next function.
// if an error (first parameter is not null) is passed to next, it will directly go to the final callback
function (next) {
taskFirst(g1, g2, next);
},
// runs this only if taskFirst finished without an error
function (next) {
taskSecond(g3, g4, next);
}
],
function(error, result){
}
);
}
A: It can be done as follows:
function taskFirst(k, v) {
console.log(k, v);
}
function taskSecond(k, v) {
console.log(k, v);
}
async.series([
function(callback) {
callback(null, taskFirst(g1, g2));
},
function(callback) {
callback(null, taskSecond(g3, g4));
}
],function(error, result){
});
A: Better way.
const a1 = (a, callback) => {
console.log(a, 'a1')
callback()
}
const a2 = (a, callback) => {
console.log(a, 'a2')
callback()
}
const run = () => {
async.series([
a1.bind(null, 'asd'),
a2.bind(null, 'asd2')
], () => {
console.log('finish')
})
}
run()
| |
doc_23535418
|
Traceback (most recent call last):
File "C:\Users\QuartzMiner6000\PycharmProjects\Test1\src\main.py", line 309, in <module>
screen.register_shape('player.gif')
File "C:\Users\QuartzMiner6000\AppData\Local\Programs\Python\Python36\lib\turtle.py", line 1133, in register_shape
shape = Shape("image", self._image(name))
File "C:\Users\QuartzMiner6000\AppData\Local\Programs\Python\Python36\lib\turtle.py", line 479, in _image
return TK.PhotoImage(file=filename)
File "C:\Users\QuartzMiner6000\AppData\Local\Programs\Python\Python36\lib\tkinter\__init__.py", line 3542, in __init__
Image.__init__(self, 'photo', name, cnf, master, **kw)
File "C:\Users\QuartzMiner6000\AppData\Local\Programs\Python\Python36\lib\tkinter\__init__.py", line 3498, in __init__
self.tk.call(('image', 'create', imgtype, name,) + options)
_tkinter.TclError: encountered an unsupported criticial chunk type "mkBF"
This is my code
import turtle
from src.variables import *
from math import ceil
TILESIZE = 20
# the number of inventory resources per row
INVWIDTH = 8
drawing = False
# create a new 'screen' object
screen = turtle.Screen()
# calculate the width and height
width = (TILESIZE * MAPWIDTH) + max(200, INVWIDTH * 50)
num_rows = int(ceil((len(resources) / INVWIDTH)))
inventory_height = num_rows * 120 + 40
height = (TILESIZE * MAPHEIGHT) + inventory_height
screen.setup(width, height)
screen.setworldcoordinates(0, 0, width, height)
screen.bgcolor(BACKGROUNDCOLOUR)
screen.listen()
I have tried giving it the full path to the image and converting it to a png but nothing worked.
| |
doc_23535419
|
PHP
private function _send_email($post_data) {
$data = array(
'email_from' => 'website@XXXXX.co.id',
'email_to' => \Config::get('config_basic.contact_us_email_to'),
'email_subject' => 'Contact Us message from ' . $post_data['name'],
'mail_subject' => 'Enquiry Confirmation from XXXXX',
'email_reply_to' => array(
'email' => $post_data['email'],
'name' => $post_data['name']
), // Optional
'email_data' => array(
'base_url' => \Uri::base(),
'name' => $post_data['name'],
'email' => $post_data['email'],
'message' => $post_data['message']
),
'email_view' => 'pages::email/contact_us.twig',
'email_respond_view' => 'pages::email/email_responder.twig',
);
\Util_Email::queue_send_email($data);
$email = \Email::forge();
// Set the from address
$email->from($data['email_from'], 'XXXXX');
// Set the to address
$email->to($data['email_to'],'Web Admin of XXXXXX');
// Set a subject
$email->subject($data['email_subject']);
// And set the body.
$email->html_body(\View::forge($data['email_view'], $data['email_data']));
$email->send();
$mail = \Email::forge();
$mail->from($data['email_from'], 'XXXXXX');
$mail->to($data['email_data']['email'],$data['email_data']['name']);
$mail->subject($data['mail_subject']);
$mail->html_body(\View::forge($data['email_respond_view'], $data['email_data']));
$mail->send();
}
TWIG (email_responder.twig)
{% block frontend_content %}
Dear {{ name }}, <br/><br/>
Thank you for contacting us,<br/>
our sales team would be contacting you as soon as possible<br/><br/>
Best Regards,<br/><br/><br/>
XXXXXX
<div><img src="https://postimg.org/image/ynhb45hrx/" alt="city"/></div>
{% endblock %}
I only get the text, and the image shows up as a broken link! Could you please show me the right way to do it?
Thanks
| |
doc_23535420
|
That was the first question.
Second question:
Should I use ONE channel and ONE receiver for all that addresses? Or better to have channel and receiver for each mail address? I don't understand Spring so deeply to feel the difference.
p.s. this question is continuation of Spring multiple imapAdapter
A: In each child context, you can add a header enricher to set a custom header to the URL from the adapter; with the output channel being the shared channel to the shared service.
In the service, use void foo(Message emailMessage, @Header("myHeader") String url)
I would generally recommend using a single service unless the service needs to do radically different things based on the source.
EDIT:
I modified my answer to your previous question to enhance the original message with the url in a header; each instance has its own header enricher and they all route the enriched message to the common emailChannel.
@Configuration
@EnableIntegration
public class GeneralImapAdapter {
@Value("${imap.url}")
String imapUrl;
@Bean
public static PropertySourcesPlaceholderConfigurer pspc() {
return new PropertySourcesPlaceholderConfigurer();
}
@Bean
@InboundChannelAdapter(value = "enrichHeadersChannel", poller = @Poller(fixedDelay = "10000") )
public MessageSource<javax.mail.Message> mailMessageSource(MailReceiver imapMailReceiver) {
return new MailReceivingMessageSource(imapMailReceiver);
}
@Bean
public MessageChannel enrichHeadersChannel() {
return new DirectChannel();
}
@Bean
@Transformer(inputChannel="enrichHeadersChannel", outputChannel="emailChannel")
public HeaderEnricher enrichHeaders() {
Map<String, ? extends HeaderValueMessageProcessor<?>> headersToAdd =
Collections.singletonMap("emailUrl", new StaticHeaderValueMessageProcessor<>(this.imapUrl));
HeaderEnricher enricher = new HeaderEnricher(headersToAdd);
return enricher;
}
@Bean
public MailReceiver imapMailReceiver() {
MailReceiver receiver = mock(MailReceiver.class);
Message message = mock(Message.class);
when(message.toString()).thenReturn("Message from " + this.imapUrl);
Message[] messages = new Message[] {message};
try {
when(receiver.receive()).thenReturn(messages);
}
catch (MessagingException e) {
e.printStackTrace();
}
return receiver;
}
}
...and I modified the receiving service so it gets access to the header...
@MessageEndpoint
public class EmailReceiverService {
@ServiceActivator(inputChannel="emailChannel")
public void handleMessage(Message message, @Header("emailUrl") String url) {
System.out.println(message + " header:" + url);
}
}
...hope that helps.
EDIT 2:
And this one's a bit more sophisticated; it pulls the from from the payload and puts it in a header; not needed for your use case since you have the full message, but it illustrates the technique...
@Bean
@Transformer(inputChannel="enrichHeadersChannel", outputChannel="emailChannel")
public HeaderEnricher enrichHeaders() {
Map<String, HeaderValueMessageProcessor<?>> headersToAdd = new HashMap<>();
headersToAdd.put("emailUrl", new StaticHeaderValueMessageProcessor<String>(this.imapUrl));
Expression expression = new SpelExpressionParser().parseExpression("payload.from[0].toString()");
headersToAdd.put("from", new ExpressionEvaluatingHeaderValueMessageProcessor<>(expression, String.class));
HeaderEnricher enricher = new HeaderEnricher(headersToAdd);
return enricher;
}
and
@ServiceActivator(inputChannel="emailChannel")
public void handleMessage(Message message, @Header("emailUrl") String url,
@Header("from") String from) {
System.out.println(message + " header:" + url + " from:" + from);
}
| |
doc_23535421
|
A: Sounds like you're trying to create a list of the user's favorite Activities in a String list? To do this...
*
*get the name of the Activity using String mActivityName = this.getClass().getSimpleName();
*create an ArrayList using ArrayList<String> mFavList = new ArrayList<>(); which will collect the Activity names.
*Set an OnClickListener on the button which will add the Activity name to the ArrayList mFavList.add(mActivityName)
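A minimal sketch of those steps in plain Java (class and method names here are illustrative, and an ordinary object stands in for an Android Activity, so this is not a drop-in implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class FavoritesDemo {
    // Step 2: collects the names of "favorited" screens.
    private final List<String> mFavList = new ArrayList<>();

    // Step 3: what the button's OnClickListener body would do.
    public void onFavoriteClicked(Object activity) {
        // Step 1: get the class's simple name.
        String mActivityName = activity.getClass().getSimpleName();
        mFavList.add(mActivityName);
    }

    public List<String> getFavorites() {
        return mFavList;
    }

    public static void main(String[] args) {
        FavoritesDemo demo = new FavoritesDemo();
        demo.onFavoriteClicked("pretend this is an Activity");
        if (!demo.getFavorites().contains("String")) {
            throw new AssertionError("expected the class name to be recorded");
        }
        System.out.println(demo.getFavorites()); // [String]
    }
}
```

In a real Activity you would pass `this`, so `getSimpleName()` yields the Activity's class name.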
| |
doc_23535422
|
https://code.google.com/p/ics-openvpn/source/checkout
and it compiles successfully, but when I create a profile and try to connect, it gives me the error "Error writing minivpn binary".
the readme file says
Optional: Copy minivpn from lib/ to assets (if you want your own compiled version)
but I haven't found any minivpn binary or lib/ directory like this in the package.
Please let me know if someone has worked on this.
A: You should read the README:
Do ./build-native.sh in the root directory of the project.
The minivpn part is obsolete. Early version used to ship a compiled version in the repository.
| |
doc_23535423
|
A: There are no standard C++ methods for this. The standard library does define realloc(), but on most platforms all it does is call another malloc() and memcpy() to copy the memory. You may want to use standard library containers, which hide that mechanism (that's the most common approach), or use a memory pool object (allocate all possibly required memory, then "allocate" objects within it); that's less common, usually applied for FEMA or image-processing or in OpenGL render engines.
A: I would suggest taking the array's length and inserting the data at the next index (which is arraylength itself, since indexes are zero-based).
eg :
int arraylength = abc.length;
abc[arraylength].value = "Your Value";
| |
doc_23535424
|
Could you please help me with how to establish a private channel for messaging in sockjs-tornado? (I mean a private conversation / one-to-one.)
Below is the on_message function in my server side code -
def on_message(self, message):
    mg = message.split('#:#')
    sender = 1    # This is the sender user id
    receiver = 2  # This is the receiver user id - I need to implement session variables to hold these ids so I can use them here this way
    ts = r.expr(datetime.now(r.make_timezone('00:00')))
    connection = r.connect(host="192.x.x.x")
    r.db("djrechat").table('events').insert({"usrs": mg[0], "msg": mg[1], "tstamp": ts, "snder": sender, "rcver": receiver}).run(connection)
    log.info(message)
    self.broadcast(self.participants, '{} - {}'.format(self.stamp(), message))
Currently this is broadcasting to all the clients connected.
Maybe I should have a channel id and send messages only to the two clients which have the same channel id, but how do I implement that, or is there a better solution for this?
At client side, I have below javascript -
function connect() {
disconnect();
conn = new SockJS('http://localhost:8080/chat', ['websocket','xhr-streaming','iframe-eventsource','iframe-htmlfile','xhr-polling','iframe-xhr-polling','jsonp-polling']);
//log('Connecting.....');
conn.onopen = function() {
// log('Connected. (' + conn.protocol + ')');
log('Connected.');
};
conn.onmessage = function(e) {
log(e.data);
};
conn.onclose = function() {
log('Disconnected.');
conn = null;
};
}
Am using python 3.4 - Django 1.8.4 and Rethinkdb
A: My answer makes the assumption that all clients in the current solution connect to a SockJS server with a channel name that is used for broadcasts of the chat messages (or something close to this scenario). I further assume that the round-trip when a user sends a message from the client is:
[Sender (Client)] ------- Message (POST) --> [App Server] --- Message --> [Database]
|
Message
v
[Receiver(s) (Client)] <-- Message (WS) -- [SockJS Server]
There are multiple solutions to this problem. I will only outline the one I think is simplest and most robust here:
*
*Let every user subscribe to a per-user (private) channel in addition to the broadcast channel in the chat. If you implement this solution, the only additional change is to make the app server aware of private messages, which must be sent only to the receiver's private channel.
Another, slightly more complex, and inelegant solution would be to create ad hoc channels for each private pair when needed. However, how would you get User B to subscribe to the (newly created) private channel when User A wants to chat with User B? One way would be to broadcast a request for User B to connect to that channel, but that would tell all the other clients (at the code level) about this private chat that will take place; and if any of those clients are compromised, that information can be misused for, for example, eavesdropping or crashing the private party.
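A rough sketch of the first solution's routing core, independent of SockJS itself (the class and method names are illustrative; in sockjs-tornado the connection objects would come from each client's on_open):

```python
class ChatRouter(object):
    """Keeps one private 'channel' per user: a user_id -> connection map."""

    def __init__(self):
        self.connections = {}  # user_id -> connection object

    def register(self, user_id, conn):
        # Called once a client has authenticated; this is its private channel.
        self.connections[user_id] = conn

    def send_private(self, receiver_id, message):
        # Deliver to exactly one recipient instead of broadcasting.
        conn = self.connections.get(receiver_id)
        if conn is not None:
            conn.send(message)
            return True
        return False  # receiver is offline


# Tiny stand-in for a SockJS connection, just to exercise the router.
class FakeConn(object):
    def __init__(self):
        self.sent = []

    def send(self, msg):
        self.sent.append(msg)


router = ChatRouter()
alice, bob = FakeConn(), FakeConn()
router.register(1, alice)
router.register(2, bob)
router.send_private(2, "hi bob")  # only bob receives this
assert bob.sent == ["hi bob"] and alice.sent == []
```

The on_message handler from the question would call send_private with the receiver id it already stores in the database, instead of self.broadcast.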
Additional Advice
I would also like to mention that since I wrote my sockjs - example of implementing rooms answer, I have changed my own architecture to (still) use SockJS in the front-end, but with RabbitMQ using the Web-STOMP plugin on the back-end. That way,
*
*the client still uses sockjs-client with stomp-websocket in the browser,
*all of the back-end code to handle front-end multiplexing could be removed,
*the (RabbitMQ-based) back-end is fast and really robust,
*the client POSTs a message to the back-end app server, which in turn
*
*persists the message in the database, and
*acts as a client of the RabbitMQ server and either (1) broadcasts the message to all connected users, or if the message is private (2) sends the message to a single recipient over the recipient's own private channel, as sketched out in the above suggested solution.
The whole new solution is placed behind HAProxy which terminates HTTPS and SSL/TLS-encrypted WebSocket connections.
A: Although sockjs-tornado does not implement anything like channels or rooms, there is a multiplex example showing how one could implement that. Also look at sockjs - example of implementing rooms. The solution is based on structured messages: additional info (the channel name) is sent within each message.
| |
doc_23535425
|
*
*call shell commands (for example 'sleep' below) in parallel,
*report on their individual starts and completions and
*be able to kill them with 'kill -9 parent_process_pid'.
There is already a lot written on these kinds of things already but I feel like I haven't quite found the elegant pythonic solution I'm looking for. I'm also trying to keep things relatively readable (and short) for someone completely unfamiliar with python.
My approach so far (see code below) has been:
*
*put subprocess.call(unix_command) in a wrapper function that reports the start and completion of the command.
*call the wrapper function with multiprocess.Process.
*track the appropriate pids, store them globally, and kill them in the signal_handler.
I was trying to avoid a solution that periodically polled the processes but I'm not sure why.
Is there a better approach?
import subprocess,multiprocessing,signal
import sys,os,time
def sigterm_handler(signal, frame):
print 'You killed me!'
for p in pids:
os.kill(p,9)
sys.exit(0)
def sigint_handler(signal, frame):
print 'You pressed Ctrl+C!'
sys.exit(0)
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigterm_handler)
def f_wrapper(d):
print str(d) + " start"
p=subprocess.call(["sleep","100"])
pids.append(p.pid)
print str(d) + " done"
print "Starting to run things."
pids=[]
for i in range(5):
p=multiprocessing.Process(target=f_wrapper,args=(i,))
p.daemon=True
p.start()
print "Got things running ..."
while pids:
print "Still working ..."
time.sleep(1)
A: Once subprocess.call returns, the sub-process is done -- and call's return value is the sub-process's returncode. So accumulating those return codes in the list pids (which, by the way, is not shared between the multiprocessing children appending to it and the "main" process) and sending signal 9 to them "as if" they were process ids instead of return codes is definitely wrong.
Another thing with the question that's definitely wrong is the spec:
be able to kill them with 'kill -9
parent_process_pid'.
since the -9 means the parent process can't possibly intercept the signal (that's the purpose of explicitly specifying -9) -- I imagine the -9 is therefore spurious here.
You should be using threading instead of multiprocessing (each "babysitter" thread, or process, does essentially nothing but wait for its sub-process, so why waste processes on such a lightweight task?-); you should also call subprocess.Process in the main thread (to get the sub-process started and be able to obtain its .pid to put in the list) and pass the resulting process object to the babysitter thread which waits for it (and when it's done reports and removes it from the list). The list of subprocess ids should be guarded by a lock, since the main thread and several babysitter threads can all access it, and a set would probably be a better choice than a list (faster removals) since you don't care about ordering nor about avoiding duplicates.
So, roughly (no testing, so there might be bugs;-) I'd change your code to something like:
import subprocess, threading, signal
import sys, time
pobs = set()
pobslock = threading.Lock()
def numpobs():
with pobslock:
return len(pobs)
def sigterm_handler(signal, frame):
print 'You killed me!'
with pobslock:
for p in pobs: p.kill()
sys.exit(0)
def sigint_handler(signal, frame):
print 'You pressed Ctrl+C!'
sys.exit(0)
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigterm_handler)
def f_wrapper(d, p):
print d, 'start', p.pid
rc = p.wait()
with pobslock:
pobs.remove(p)
print d, 'done, rc =', rc
print "Starting to run things."
for i in range(5):
p = subprocess.Popen(['sleep', '100'])
with pobslock:
pobs.add(p)
t = threading.Thread(target=f_wrapper, args=(i, p))
t.daemon=True
t.start()
print "Got things running ..."
while numpobs():
print "Still working ..."
time.sleep(1)
A: This code (code below) seems to work for me, killing from "top" or ctrl-c from the command line. The only real change from Alex's suggestions was to replace subprocess.Process with a subprocess.Popen call (I don't think subprocess.Process exists).
The code here could also be improved by somehow locking stdout so that there is no chance of printing overlap between processes.
import subprocess, threading, signal
import sys, time
pobs = set() # set to hold the active-process objects
pobslock = threading.Lock() # a Lock object to make sure only one at a time can modify pobs
def numpobs():
with pobslock:
return len(pobs)
# signal handlers
def sigterm_handler(signal, frame):
print 'You killed me! I will take care of the children.'
with pobslock:
for p in pobs: p.kill()
sys.exit(0)
def sigint_handler(signal, frame):
print 'You pressed Ctrl+C! The children will be dealt with automatically.'
sys.exit(0)
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigterm_handler)
# a function to watch processes
def p_watch(d, p):
print d, 'start', p.pid
rc = p.wait()
with pobslock:
pobs.remove(p)
print d, 'done, rc =', rc
# the main code
print "Starting to run things ..."
for i in range(5):
p = subprocess.Popen(['sleep', '4'])
with pobslock:
pobs.add(p)
# create and start a "daemon" to watch and report the process p.
t = threading.Thread(target=p_watch, args=(i, p))
t.daemon=True
t.start()
print "Got things running ..."
while numpobs():
print "Still working ..."
time.sleep(1)
| |
doc_23535426
|
I have the following code:
$urlRouterProvider.otherwise("/products");
$stateProvider
.state('root', {
url: "/stores",
abstract: true,
template: '<ui-view/>'
})
.state('products', {
url: "/products",
templateUrl: "<%= asset_path('store/templates/productList.html') %>",
controller: "productListCtrl as plCtrl"
})
.state('checkout', {
url: "/checkout",
templateUrl: "<%= asset_path('store/templates/checkoutSummary.html') %>"
})
When I go to localhost:3000 my url automatically goes to /products and displays the correct page. However, when I hit reload in my browser I get a Rails error page saying missing template products/index.
routes.rb
resources :stores, only: :index
get 'stores/*app', to: 'stores#index'
resources :products, only: :index
namespace :api, constraints: {format: :json}, defaults: {format: :json} do
namespace :v1 do
resources :products
end
end
root 'stores#index'
app/views/stores/index.html.slim
base href="/stores"
div ng-app="storeApp"
ui-view autoscroll="top"
A: Your server is trying to handle the routing, but you have deferred that task to the client.
So you should always render the base template from the server and let the client handle the routing itself.
At the end of your routes.rb, add:
get '*a', to: 'stores#index'
where stores#index matches the controller and action you use for root_path
| |
doc_23535427
|
How can I make a for loop to take 1 subject out for testing, and 32 for training, 33 times?
I also want to know the R^2 (R-squared) of each of my cross validated values.
This is the code I tried to make:
%% cross validation Leave 1 out
for i = 1:33
xcv = Predicted{1,i};
ycv = Original{1,i};
sse = 0;
N = 33;
for i = 1:7
[train,test] = crossvalind('LeaveMOut',N,1);
%crossval = xcv(train), ycv(train),xcv(test);
sum = sse + sum ((crossval - ycv(test)).^2);
end
end
| |
doc_23535428
|
Unfortunately the source of the data is noisy and thus there are multiple zero crossings.
If I filter the data before checking for zero crossings, aspects of the filter (gain-phase margin) will need to be justified, while averaging the zero-crossing points is slightly easier to justify.
[123,125,127,1045,1049,1050,2147,2147,2151,2155]
consider the above list. what would be an appropriate way to create:
[125, 1048, 2149]
The aim is to find the phase shift between two sine waves
A: This code takes a simplistic approach of looking for a gap THRESHOLD between the transitions - exceeding this marks the end of a signal transition.
xings = [123,125,127,1045,1049,1050,2147,2147,2151,2155]
THRESHOLD = 100
xlast = -1000000
tot = 0
n = 0
results = []
i = 0
while i < len(xings):
x = xings[i]
if x-xlast > THRESHOLD:
# emit a transition, averaged over the preceding cluster of crossings
if n > 0:
results.append(tot/n)
tot = 0
n = 0
tot += x
n += 1
xlast = x
i += 1
if n > 0:
results.append(tot/n)
print results
prints:
[125, 1048, 2150]
A: I was hoping for a more elegant solution than just iterating over the list of zero crossings, but it seems that is the only solution.
I settled on:
def zero_crossing_avg(data):
output = []
running_total = data[0]
count = 1
for i in range(1,data.size):
val = data[i]
if val - data[i-1] < TOL:
running_total += val
count += 1
else:
output.append(round(running_total/count))
running_total = val
count = 1
return output
with example code of it in-use:
#!/usr/bin/env python
import numpy as np
from matplotlib import pyplot as plt
dt = 5e-6
TOL = 50
class DCfilt():
def __init__(self,dt,freq):
self.alpha = dt/(dt + 1/(2*np.pi*freq))
self.y = [0,0]
def step(self,x):
y = self.y[-1] + self.alpha*(x - self.y[-1])
self.y[-1] = y
return y
def zero_crossing_avg(data):
output = []
running_total = data[0]
count = 1
for i in range(1,data.size):
val = data[i]
if val - data[i-1] < TOL:
running_total += val
count += 1
else:
output.append(round(running_total/count))
running_total = val
count = 1
return output
t = np.arange(0,2,dt)
print(t.size)
rng = (np.random.random_sample(t.size) - 0.5)*0.1
s = 10*np.sin(2*np.pi*t*10 + np.pi/12)+rng
c = 10*np.cos(2*np.pi*t*10)+rng
filt_s = DCfilt(dt,16000)
filt_s.y[-1] =s[0]
filt_c = DCfilt(dt,1600)
filt_c.y[-1] =c[0]
# filter the RAW data first
for i in range(s.size):
s[i] = filt_s.step(s[i])
c[i] = filt_c.step(c[i])
# determine the zero crossings
s_z = np.where(np.diff(np.sign(s)))[0]
c_z = np.where(np.diff(np.sign(c)))[0]
sin_zc = zero_crossing_avg( np.where(np.diff(np.sign(s)))[0] )
cos_zc = zero_crossing_avg( np.where(np.diff(np.sign(c)))[0] )
HALF_PERIOD = (sin_zc[1] - sin_zc[0])
for i in range(min(len(sin_zc), len(cos_zc))):
delta = abs(cos_zc[i]-sin_zc[i])
print(90 - (delta/HALF_PERIOD)*180)
plt.hold(True)
plt.grid(True)
plt.plot(s)
plt.plot(c)
plt.show()
This works well enough.
| |
doc_23535429
|
I am using WebKitX by mobilefx.
I have not found this event or any way to get this event.
How can I achieve this using WebKitX, or can WebKitX not do this?
A: The event does not exist, and the developer has not added it.
I was able to use VB6 Webview2 instead:
https://www.vbforums.com/showthread.php?889202-VB6-WebView2-Binding-(Edge-Chromium)&p=5597424#post5597424
The good thing is that this also resolved occasional crashes that I experienced with the above control.
| |
doc_23535430
|
private void bindPhoto(final PhotoViewHolder holder, int position) {
Picasso.with(context)
.load(photos.get(position))
.resize(cellSize, cellSize)
.centerCrop()
.into(holder.ivPhoto, new Callback() {
@Override
public void onSuccess() {
animatePhoto(holder);
}
@Override
public void onError() {
}
});
}
private void animatePhoto(PhotoViewHolder viewHolder) {
if (!lockedAnimations) {
if (lastAnimatedItem == viewHolder.getPosition()) {
setLockedAnimations(true);
}
long animationDelay = PHOTO_ANIMATION_DELAY + viewHolder.getPosition() * 30;
viewHolder.flRoot.setScaleY(0);
viewHolder.flRoot.setScaleX(0);
viewHolder.flRoot.animate()
.scaleY(1)
.scaleX(1)
.setDuration(200)
.setInterpolator(INTERPOLATOR)
.setStartDelay(animationDelay)
.start();
}
}
Now I can't find any documentation about using Picasso features with Volley, or how to merge it with Volley.
A: You should instead use Glide, a comparable image loading library to Picasso that optionally uses Volley integration. A good comparison of the two APIs can be found in this blog post.
If you include Glide and the volley-integration library, then every use of Glide will automatically use Volley without any specific work on your part. Therefore you only need to worry about using Glide. In your case, the load would look similar to:
private void bindPhoto(final PhotoViewHolder holder, int position) {
Glide.with(context)
.load(photos.get(position))
// By default, Glide automatically resizes the image to fit
// but you could .override(cellSize, cellSize)
.centerCrop()
.into(new ImageViewTarget<GlideDrawable>(yourViewObject) {
@Override
public void onResourceReady(
GlideDrawable resource, GlideAnimation anim) {
animatePhoto(holder);
}
});
}
| |
doc_23535431
|
If Folder(folderName) does not exist in EITHER parent folder, then I want that folder to be created in Parent Folder 1.
So far my script is managing to check Parent Folder 1 and create in Parent Folder 1, but is not checking Parent Folder 2.
This is the script I have so far:
function myfunction() {
var parent = DriveApp.getFolderById("parent Folder 1")
SpreadsheetApp.getActive().getSheetByName('Letter History').getRange('S2:S').getValues()
.forEach(function (r) {
if (r[0]) checkIfFolderExistsElseCreate(parent, r[0]);
})
}
function checkIfFolderExistsElseCreate(parent, folderName) {
var folder;
try {
folder = parent.getFoldersByName(folderName).next();
} catch (e) {
folder = parent.createFolder(folderName);
  }
}
A: I believe your goal is as follows.
*
*You want to expand your showing script for using 2 parent folders.
In this case, how about the following sample script?
Sample script:
Please set your parent folder IDs to parentFolderIDs and save the script.
function myfunction() {
// Please set your parent folder IDs you want to use.
const parentFolderIDs = ["###parent folder ID1###", "###parent folder ID2###"];
// 1. Retrieve subfolders from folders of parentFolderIDs.
const parentFolders = parentFolderIDs.map(id => DriveApp.getFolderById(id));
const subFolders = parentFolders.map(folder => {
const temp = [];
const folders = folder.getFolders();
while (folders.hasNext()) {
temp.push(folders.next().getName());
}
return { parent: folder, subFolderNames: temp };
});
// 2. Create new folders when the folder names are not existing.
const sheet = SpreadsheetApp.getActive().getSheetByName('Letter History');
const values = sheet.getRange('S2:S' + sheet.getLastRow()).getDisplayValues();
values.forEach(([s]) => {
if (s) {
subFolders.forEach(({ parent, subFolderNames }) => {
if (!subFolderNames.includes(s)) {
parent.createFolder(s);
}
});
}
});
}
*
*When this script is run, the subfolders in parentFolderIDs are retrieved. And, using the retrieved subfolders, the new subfolders are created in the parent folders of parentFolderIDs.
References:
*
*map()
*forEach()
A: function myFunction() {
var parentFolder1 = DriveApp.getFolderById('<ParentFolder1 ID>').getFolders(); //Get ID for Folder 1
var parentFolder2 = DriveApp.getFolderById('<ParentFolder2 ID>').getFolders(); //Get ID for Folder 2
var sheet = SpreadsheetApp.getActive().getSheetByName('Letter History');
//Iterating to read all the subfolders in ParentFolder 1 and 2
var childNames = [];
while (parentFolder1.hasNext()) {
var child = parentFolder1.next();
childNames.push([child.getName()]);
}
while (parentFolder2.hasNext()) {
var child = parentFolder2.next();
childNames.push([child.getName()]);
}
//Checking if the subfolder/s is existing or not, then creating in folder 1 if not yet existing
var data = sheet.getRange(2, 19, sheet.getLastRow() - 1, 1).getValues(); //S = 19, Change 19 according to column.
for (var i = 0; i < data.length; i++) {
if (childNames.flat().includes(data[i][0])) {
console.log(data[i][0] + "- already exist")
} else {
DriveApp.getFolderById('<ParentFolder1 ID>').createFolder(data[i][0])
console.log(data[i][0] + "- has been created")
}
};
}
The above code will create subfolders in Folder 1 based on the S2:S range in the spreadsheet, after checking that each subfolder name does not already exist in either Parent Folder 1 or 2.
Please take note to change the following:
*
*ParentFolder 1 ID
*ParentFolder 2 ID
Reference: How to create a folder if it doesn't exist?
| |
doc_23535432
|
from libc.stdlib cimport malloc, calloc, realloc, free
from optv.tracking_framebuf cimport TargetArray
The first one is not highlighted by PyCharm (2016.2.3 Professional on Ubuntu 14.04) as an unresolved reference, but the second line is underlined in red as an unresolved reference.
My TargetArray class is located in tracking_framebuf.pxd file which is located in /usr/local/lib/python2.7/dist-packages/optv/ along with .c, .pyx, .so files with the same name.
I inserted /usr/local/lib/python2.7/dist-packages/optv/ and /usr/local/lib/python2.7/dist-packages/ as paths associated with the Python interpreter, but the error messages are still appearing in the editor.
Despite the error messages the file (along with others) is cythonized successfully using this setup.py script:
# -*- coding: utf-8 -*-
from distutils.core import setup
from Cython.Distutils import build_ext
from Cython.Distutils.extension import Extension
import numpy as np
import os
inc_dirs = [np.get_include(), '.']
def mk_ext(name, files):
return Extension(name, files, libraries=['optv'], include_dirs=inc_dirs,
pyrex_include_dirs=['.'])
ext_mods = [
mk_ext("optv.tracking_framebuf", ["optv/tracking_framebuf.pyx"]),
mk_ext("optv.parameters", ["optv/parameters.pyx"]),
mk_ext("optv.calibration", ["optv/calibration.pyx"]),
mk_ext("optv.transforms", ["optv/transforms.pyx"]),
mk_ext("optv.imgcoord", ["optv/imgcoord.pyx"]),
mk_ext("optv.image_processing", ["optv/image_processing.pyx"]),
mk_ext("optv.segmentation", ["optv/segmentation.pyx"]),
mk_ext("optv.orientation", ["optv/orientation.pyx"])
]
setup(
name="optv",
cmdclass = {'build_ext': build_ext},
packages=['optv'],
ext_modules = ext_mods,
package_data = {'optv': ['*.pxd']}
)
Am I missing something on my way to getting rid of these error messages and being able to view the contents of the .pxd files I place on the path?
A: The problem was solved by adding /usr/local/lib/python2.7/dist-packages/ to PYTHONPATH by:
File --> Settings --> Project --> Project Structure --> Add Content Root .
A: By default, PyCharm will ignore Cython imports unless they are part of a search path. If the module folder is white, this is a smoking gun:
If the folder is white, then add it to the search path:
Update 2017-09-18
For some reason sometimes PyCharm does not actually add directories marked as "Sources Root" to the Python path. Fix this by switching this on.
Notice the "Starting Script" in the image below. I assume that manually adding these lines to your Python script would also achieve the same result.
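Under that assumption, the manual equivalent would be a couple of lines at the top of the script, for example (using the dist-packages path from this question):

```python
import sys

# Make sure the package directory is importable, mirroring what
# PyCharm's "Add Content Root" / starting script does.
pkg_dir = '/usr/local/lib/python2.7/dist-packages'
if pkg_dir not in sys.path:
    sys.path.append(pkg_dir)

assert pkg_dir in sys.path
```

This only affects the current interpreter process; exporting PYTHONPATH in the shell achieves the same thing for every run.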
| |
doc_23535433
|
def process_request(self, request, spider):
original_url = request.url
new_url= original_url + "hello%20world"
print request.url # This prints the original request url
request=request.replace(url=new_url)
print request.url # This prints the modified url
def process_response(self, request, response, spider):
print request.url # This prints the original request url
print response.url # This prints the original request url
return response
Can anyone please tell me what I'm missing here ?
A: Since you are modifying the request object in process_request() - you need to return it:
def process_request(self, request, spider):
    # avoid an infinite loop: leave the request alone if it already contains the desired part
    if "hello%20world" in request.url:
        return None
new_url = request.url + "hello%20world"
request = request.replace(url=new_url)
return request
| |
doc_23535434
|
SELECT
*
FROM
"SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY"
I also tried another query according to this link and found the running queries, but the limit (10,000 rows) is so low that I may miss information (my team uses Snowflake heavily nowadays).
Is there any way to handle this problem?
A: I get to see the queries in execution with this SQL:
select * from table(information_schema.query_history()) order by start_time
| |
doc_23535435
|
Here is the relevant line:
response.write "<a href='search.htm?supplier_name= & request.querystring('supplier_name') & "&aircraft_type=" & request.querystring('aircraft_type') & "&state=" & request.querystring('state') & "&order=state'>State</a>"
I believe the problem is associated with this section:
& "&order=state'>State</a>"
The ASP code in full:
<%
if request.querystring("order") = "state" then
response.write("State")
else
response.write "<a href='search.htm?supplier_name= & request.querystring('supplier_name') & "&aircraft_type=" & request.querystring('aircraft_type') & "&state=" & request.querystring('state') & "&order=state'>State</a>"
end if
%>
A: Missing a double quote after supplier_name=. Note also that VBScript string literals use double quotes; a single quote starts a comment, so the request.querystring arguments must use double quotes as well:
response.write "<a href='search.htm?supplier_name=" & request.querystring("supplier_name") & "&aircraft_type=" & request.querystring("aircraft_type") & "&state=" & request.querystring("state") & "&order=state'>State</a>"
| |
doc_23535436
|
It seems I can do myPeerConnection.getStats() as described here, and measure the bytes sent or received. If they increase, that means we are still connected and the disconnected ICE state can be treated as temporary; otherwise, it is permanent.
But now I am confused about which bytes I should measure. There are inbound-rtp, outbound-rtp, remote-inbound-rtp and remote-outbound-rtp.
I want to make sure that both sides are actually receiving data from each other. So what should I measure from the above four?
ORIGINAL
Sometimes on unstable networks ICE state can change to 'Disconnected' and will normally try to recover on its own. 'Failed' state will need ICE renegotiated. But there will be cases when the other peer has just lost connection or died and in that case I will get 'Disconnected' and then after sometime 'Failed' states. I need to know when the peer connection is still alive and when it is dead so that I can take appropriate action.
function handleICEConnectionStateChangeEvent(event) {
log("*** ICE connection state changed to " + myPeerConnection.iceConnectionState);
switch(myPeerConnection.iceConnectionState) {
case "closed": // This means connection is shut down and no longer handling requests.
hangUpCall(); //Hangup instead of closevideo() because we want to record call end in db
break;
case "failed": // This will not restart ICE negotiation on its own and must be restarted/
myPeerConnection.restartIce();
break;
case "disconnected":
//This will resolve on its own. No need to close connection.
//But in case the other peer connection is dead we want to call the below function.
//hangUpCall(); //Hangup instead of closevideo() because we want to record call end in db
//break;
}
}
I would like something like
case "disconnected":
if(!otherPeerConnected){
hangUpCall();
}
Is there anyway to do this?
Thank you
A: from MDN I got this
inbound-rtp:
An RTCInboundRtpStreamStats object providing statistics about inbound data being received from remote peers. Since this only provides statistics related to inbound data, without considering the local peer's state, any values that require knowledge of both, such as round-trip time, is not included. This report isn't available if there are no connected peers
I am now going to use that as shown below, in case anyone else wants it in future.
function handleICEConnectionStateChangeEvent(event) {
log("*** ICE connection state changed to " + myPeerConnection.iceConnectionState);
switch(myPeerConnection.iceConnectionState) {
case "closed": // This means connection is shut down and no longer handling requests.
hangUpCall(); //Hangup instead of closevideo() because we want to record call end in db
break;
case "failed":
checkStatePermanent('failed');
break;
case "disconnected":
checkStatePermanent('disconnected');
break;
}
}
const customdelay = ms => new Promise(res => setTimeout(res, ms));
async function checkStatePermanent (iceState) {
videoReceivedBytetCount = 0;
audioReceivedByteCount = 0;
let firstFlag = await isPermanentDisconnect();
await customdelay(2000);
let secondFlag = await isPermanentDisconnect(); //Call this func again after 2 seconds to check whether data is still coming in.
if(secondFlag){ //If permanent disconnect then we hangup i.e no audio/video is flowing
if (iceState == 'disconnected'){
hangUpCall(); //Hangup instead of closevideo() because we want to record call end in db
}
}
if(!secondFlag){//If temp failure then restart ice i.e audio/video is still flowing
if(iceState == 'failed') {
myPeerConnection.restartIce();
}
}
};
var videoReceivedBytetCount = 0;
var audioReceivedByteCount = 0;
async function isPermanentDisconnect (){
var isPermanentDisconnectFlag = false;
var videoIsAlive = false;
var audioIsAlive = false;
await myPeerConnection.getStats(null).then(stats => {
stats.forEach(report => {
if(report.type === 'inbound-rtp' && (report.kind === 'audio' || report.kind === 'video')){ //check for inbound data only
if(report.kind === 'audio'){
//Here we must compare previous data count with current
if(report.bytesReceived > audioReceivedByteCount){
// If current count is greater than previous then that means data is flowing to other peer. So this disconnected or failed ICE state is temporary
audioIsAlive = true;
} else {
audioIsAlive = false;
}
audioReceivedByteCount = report.bytesReceived;
}
if(report.kind === 'video'){
if(report.bytesReceived > videoReceivedBytetCount){
// If current count is greater than previous then that means data is flowing to other peer. So this disconnected or failed ICE state is temporary
videoIsAlive = true;
} else{
videoIsAlive = false;
}
videoReceivedBytetCount = report.bytesReceived;
}
if(audioIsAlive || videoIsAlive){ //either audio or video is being received.
isPermanentDisconnectFlag = false; //Disconnected is temp
} else {
isPermanentDisconnectFlag = true;
}
}
})
});
return isPermanentDisconnectFlag;
}
| |
doc_23535437
|
UPDATE: The term is called melt
I have a data frame for countries and data for each year
Country 2001 2002 2003
Nigeria 1 2 3
UK 2 NA 1
And I want to have something like
Country Year Value
Nigeria 2001 1
Nigeria 2002 2
Nigeria 2003 3
UK 2001 2
UK 2002 NA
UK 2003 1
A: The base R reshape approach for this problem is pretty ugly, particularly since the names aren't in a form that reshape likes. It would be something like the following, where the first setNames line modifies the column names into something that reshape can make use of.
reshape(
setNames(mydf, c("Country", paste0("val.", c(2001, 2002, 2003)))),
direction = "long", idvar = "Country", varying = 2:ncol(mydf),
sep = ".", new.row.names = seq_len(prod(dim(mydf[-1]))))
A better alternative in base R is to use stack, like this:
cbind(mydf[1], stack(mydf[-1]))
# Country values ind
# 1 Nigeria 1 2001
# 2 UK 2 2001
# 3 Nigeria 2 2002
# 4 UK NA 2002
# 5 Nigeria 3 2003
# 6 UK 1 2003
There are also new tools for reshaping data now available, like the "tidyr" package, which gives us gather. Of course, the tidyr:::gather_.data.frame method just calls reshape2::melt, so this part of my answer doesn't necessarily add much except introduce the newer syntax that you might be encountering in the Hadleyverse.
library(tidyr)
gather(mydf, year, value, `2001`:`2003`) ## Note the backticks
# Country year value
# 1 Nigeria 2001 1
# 2 UK 2001 2
# 3 Nigeria 2002 2
# 4 UK 2002 NA
# 5 Nigeria 2003 3
# 6 UK 2003 1
All three options here would need reordering of rows if you want the row order you showed in your question.
A fourth option would be to use merged.stack from my "splitstackshape" package. Like base R's reshape, you'll need to modify the column names to something that includes a "variable" and "time" indicator.
library(splitstackshape)
merged.stack(
setNames(mydf, c("Country", paste0("V.", 2001:2003))),
var.stubs = "V", sep = ".")
# Country .time_1 V
# 1: Nigeria 2001 1
# 2: Nigeria 2002 2
# 3: Nigeria 2003 3
# 4: UK 2001 2
# 5: UK 2002 NA
# 6: UK 2003 1
Sample data
mydf <- structure(list(Country = c("Nigeria", "UK"), `2001` = 1:2, `2002` = c(2L,
NA), `2003` = c(3L, 1L)), .Names = c("Country", "2001", "2002",
"2003"), row.names = 1:2, class = "data.frame")
A: I still can't believe I beat Andrie with an answer. :)
> library(reshape)
> my.df <- read.table(text = "Country 2001 2002 2003
+ Nigeria 1 2 3
+ UK 2 NA 1", header = TRUE)
> my.result <- melt(my.df, id = c("Country"))
> my.result[order(my.result$Country),]
Country variable value
1 Nigeria X2001 1
3 Nigeria X2002 2
5 Nigeria X2003 3
2 UK X2001 2
4 UK X2002 NA
6 UK X2003 1
A: You can use the melt command from the reshape package. See here: http://www.statmethods.net/management/reshape.html
Probably something like melt(myframe, id=c('Country'))
| |
doc_23535438
|
Error:
Argument of type '{ method: string; headers: { 'Accept': string; 'Content-Type': string; 'Authorization': string; }...' is not assignable to parameter of type 'RequestInit'.
Types of property 'headers' are incompatible.
Type '{ 'Accept': string; 'Content-Type': string; 'Authorization': string; }' is not assignable to type 'Headers | string[][]'.
Object literal may only specify known properties, and ''Accept'' does not exist in type 'Headers | string[][]'.
The terminal process terminated with exit code: 1
fetch(URL, {
method: 'GET',
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json',
'Authorization': "Bearer " + token
}
})
.then((responseJson) => {
return responseJson;
})
.catch((error) => {
console.error(error);
});
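One way to satisfy the Headers | string[][] type from the error message is to build a Headers object instead of an object literal — a sketch, assuming URL and token are defined as in the question:

```typescript
const headers = new Headers();
headers.append('Accept', 'application/json');
headers.append('Content-Type', 'application/json');
headers.append('Authorization', 'Bearer ' + token);

fetch(URL, { method: 'GET', headers })
  .then((response) => response.json())
  .catch((error) => console.error(error));
```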
"dependencies": {
"expo": "^21.0.2",
"jquery": "^3.2.1",
"lodash": "^4.17.4",
"mobx": "^3.2.2",
"mobx-react": "^4.2.2",
"moment": "^2.19.1",
"react": "16.0.0-alpha.12",
"react-native": "https://github.com/expo/react-native/archive/sdk-21.0.2.tar.gz",
"react-native-azure-ad": "^0.2.4",
"react-native-sqlite-2": "^1.5.0",
"react-native-sqlite-storage": "^3.3.3",
"react-native-tabs": "^1.0.9",
"typescript": "^2.5.3"
}
| |
doc_23535439
|
So on my page i have several links in the menu bar where onclick a function is called upon. For instance
<li><a href="javascript:void(0);" onclick="myprofile()" ><span>my profile</span></a></li>
<li><a href="javascript:void(0);" onclick="mysettings()"><span>Settings<small>
A function would look something like this.
function myprofile() {
$('#content').load('members.php');
}
function mysettings() {
$('#content').load('settings.php');
}
I added some JavaScript for history.js (found on this forum), and although it does change the URLs inside my index.php, clicking back doesn't load the pages. How would I get history.js to work when functions are used? (For some links I really do need functions, so just putting the load inside the link would not be an option for me.)
<script>
$(function() {
// Prepare
var History = window.History; // Note: We are using a capital H instead of a lower h
if ( !History.enabled ) {
// History.js is disabled for this browser.
// This is because we can optionally choose to support HTML4 browsers or not.
return false;
}
// Bind to StateChange Event
History.Adapter.bind(window,'statechange',function() { // Note: We are using statechange instead of popstate
var State = History.getState();
$('#content').load(State.url);
/* Instead of the line above, you could run the code below if the url returns the whole page instead of just the content (assuming it has a `#content`):
$.get(State.url, function(response) {
$('#content').html($(response).find('#content').html()); });
*/
});
// Capture all the links to push their url to the history stack and trigger the StateChange Event
$('a').click(function(evt) {
evt.preventDefault();
History.pushState(null, $(this).text(), $(this).attr('onclick'));
});
});
</script>
A: The first parameter of the History.pushState is "data", which is for additional information.
So to save the History entry you could do this in your click event:
History.pushState({ href: 'members.php' }, $(this).text(), $(this).attr('onclick'));
And then when reading it back out you would do this in your History.Adapter.bind function:
$('#content').load(State.data.href);
| |
doc_23535440
|
Current script is something like this:
$appID = $appServers | ForEach-Object {
Get-Service "AppIDSvc" -ComputerName $_
}
Write-Host "Application Identity Service Status - App Servers"
$appID.Status
Text file just has a list of server names, so the output looks like this for the 4 servers:
Status   Name      DisplayName
Running  AppIDSvc  AppIDSvc
Running  AppIDSvc  AppIDSvc
Stopped  AppIDSvc  AppIDSvc
Running  AppIDSvc  AppIDSvc
I'd like to have $appID.Status go from looking like
Running
Running
Stopped
Running
to
Server1 - Running
Server2 - Running
Server3 - Stopped
Server4 - Running
Is this something I can do and have the correct server names match their results, so that we can properly identify the servers that are having issues?
I have tried to isolate the result set from the Status, Name, and DisplayName of the services to just the status, then enumerate the array next to the result set. I'm not having luck doing so and I don't feel confident I can match the server in the list to the output order of the results.
A: Following sample from your code uses Add-Member to add computer name to output
$appID = $appServers | ForEach-Object {
$service = Get-Service "AppIDSvc" -ComputerName $_
Add-Member -PassThru -InputObject $service -Type NoteProperty -Name Server -Value $_
}
Write-Host "Application Identity Service Status - App Servers"
$appID | Select-Object Server, Status
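An alternative sketch using Select-Object with a calculated property (on Windows PowerShell, Get-Service -ComputerName populates MachineName on each returned object):

```powershell
$appID = $appServers | ForEach-Object {
    Get-Service "AppIDSvc" -ComputerName $_
} | Select-Object @{Name = 'Server'; Expression = { $_.MachineName }}, Status

Write-Host "Application Identity Service Status - App Servers"
$appID
```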
See also:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/add-member?view=powershell-7.3
| |
doc_23535441
|
from django.http import JsonResponse

numbers = [10,15,20]
def get_data(request, *args,**kwargs):
data = {
'num': numbers,
}
return JsonResponse(data)
A: In flask I would use jsonify:
from flask import jsonify
def get_data(request, *args,**kwargs):
data = {
'num': numbers,
}
return jsonify(data)
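For Django (which the question's JsonResponse code suggests), the equivalent needs only the import — a minimal sketch:

```python
from django.http import JsonResponse

numbers = [10, 15, 20]

def get_data(request, *args, **kwargs):
    # JsonResponse serializes the dict and sets Content-Type: application/json
    return JsonResponse({'num': numbers})
```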
| |
doc_23535442
|
I've been working on this problem for 24 hours now and I can't seem to fix it.
Even a blank Xamarin.Forms Project causes the Black Screen. I guess the path of the Android SDK might be wrong? (but I don't know how to fix it...)?
A: I got it resolved by running VS as administrator
Try it out.
| |
doc_23535443
|
I've tried with many options like wireframe=false, but the edges are still drawn.
This is the code:
var container, stats;
var camera, scene, renderer;
var canvasWidth = 500;
var canvasHeight = 500;
var windowHalfX = 100;
var windowHalfY = 100;
container = document.createElement( 'div' );
document.body.appendChild( container );
// Camera
camera = new THREE.OrthographicCamera( canvasWidth / - 2, canvasWidth / 2, canvasHeight / 2, canvasHeight / - 2, - 500, 5000 );
// Scene
scene = new THREE.Scene();
camera.position.x = 200;
camera.position.y = 200;
camera.position.z = 200;
camera.lookAt( scene.position );
// Renderer
renderer = new THREE.CanvasRenderer();
renderer.setClearColor( "#fff" );
renderer.setSize( canvasWidth, canvasHeight );
container.appendChild( renderer.domElement );
var size = 100;
geometry = new THREE.BoxGeometry( size, size, size );
material = new THREE.MeshBasicMaterial({
color: "#0000ff",
side: THREE.DoubleSide,
wireframe: false
});
// Comment this line to paint a single color cube
material = new THREE.MeshLambertMaterial({ map: THREE.ImageUtils.loadTexture("http://upload.wikimedia.org/wikipedia/commons/8/81/Color_icon_black.png") });
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
var draw = function() {
requestAnimationFrame( draw );
renderer.render( scene, camera );
}
draw();
And a link to the example:
http://jsfiddle.net/gyss/qg4x9/
Cheers!
A: I've solved my problem changing this line
renderer = new THREE.CanvasRenderer();
for this other
renderer = new THREE.WebGLRenderer();
Other possible solution, as WestLangley commented above is to use this line with CanvasRenderer
material.overdraw = 0.5; // or some number between 0 and 1
A: When using renderer = new THREE.CanvasRenderer();
to remove the edges, a parameter (overdraw:true) needs to be added to the material definition like this:
var material = new THREE.MeshBasicMaterial( { map: texture, overdraw: true } );
Then you can apply the material to the 3D object:
mesh = new THREE.Mesh( geometry, material );
| |
doc_23535444
|
A: QTextStream lets you read line by line
QFile file(hugeFile);
QStringList strings;
if (file.open(QIODevice::ReadOnly | QIODevice::Text))
{
QTextStream in(&file);
while (!in.atEnd()) {
strings += in.readLine().split(";");
}
}
A: You can use file streams.
QFile file(hugeFile);
file.open(QIODevice::ReadOnly);
QDataStream inputStream(&file);
QStringList array;
QString temp;
while(!inputStream.atEnd()) {
    inputStream >> temp;
    array << temp.split(";");
}
Note that this is untested (pseudo) code, hope it helps.
A: You can always read a part of file:
QFile file( ... );
file.read(1000); // reads no more than 1000 bytes
Or you car read Your file line by line:
file.readLine();
but You'll have to handle cases when one string was splitted in two pieces.
A: If it's a really huge file then you can read with the file.read(an_appropriate_number) while file.atEnd() is false.
Read a chunk (with file.read()), add it to a temporary string buffer and search for a ',' (e.g. with QString's contains() method). If it contains a ',' then split it (with QString's split() method): the first X parts (the read 1000 characters may contain more than 1 tokens) will contain the found tokens and the last one is not a complete token yet. So switch the temporary string to the last part of the split and read another chunk (until you hit file.atEnd()) and append it to the temporary string buffer. This will work efficiently unless your tokens are huge. And don't forget to handle the last buffered text after you hit file.atEnd() :)
Or as an alternative you can read the file character-by-character and check for ',' manually, but it's always better to read more than 1 character (it's more efficient if you read more).
A: This won't capture whitespace after a comma. If that's not acceptable, feel free to optimize the regex. You can probably also reduce the amount of includes at the top. I was just being thorough. I tested this on a 1600 line file, and it seemed to handle it well in Qt 5.6
#include <QCoreApplication>
#include <QFile>
#include <QIODevice>
#include <QRegularExpression>
#include <QRegularExpressionMatch>
#include <QRegularExpressionMatchIterator>
#include <QString>
#include <QStringList>
#include <QTextStream>
int main(int argc, char * argv[])
{
QCoreApplication app(argc, argv);
QFile file("C:\\PathToFile\\bigFile.fileExt");
QStringList lines;
QStringList matches;
QString match;
file.open(QIODevice::ReadOnly | QIODevice::Text);
while(!file.atEnd())
{
lines << file.readLine();
}
file.close();
QRegularExpression regex("(^|\\s|,)\\K\\w.*?(?=(,|$))");
QRegularExpressionMatchIterator it;
foreach (QString element, lines)
{
it = regex.globalMatch(element);
while(it.hasNext())
{
QRegularExpressionMatch qre_match = it.next();
match = qre_match.captured(0);
matches << match;
}
}
return 0;
}
| |
doc_23535445
|
I have a GET call that looks something like this:
@Path("/status/{requestID}")
@GET
public Response getStatus(@PathParam("requestID") String requestId) {
MDC.put("myRequestID", requestId);
// Do some processing
return response;
}
I use MDC to add request ID and other information that I need in my logs. Now the response that I create is Multipart, and since Jersey internally converts this Multipart response to HTTP response, during this processing I have certain callbacks which logs messages and I want the MDC to be populated at that time, with whatever was put in my getStatus call.
Now, usually I would call MDC.clear() before returning status in my getStatus call. But, now that I want to wait for Jersey to complete processing, I cannot do that.
Is there any callback that Jersey provides once the HTTP response processing is complete, where I can do this cleanup?
| |
doc_23535446
|
I have tried doing it, but the only problem is that there are duplicate entries which are being added to it.
For Each Cell1 In Worksheets(1).Range("A1", Worksheets(1).Range("A1").End(xlDown))
For Each Cell3 In Worksheets(3).Range("A1", Worksheets(3).Range("A1").End(xlDown))
If Not Cell1 = Cell3 Then
Dim LastRow3 As Long
LastRow3 = Worksheets(3).Range("A1").SpecialCells(xlCellTypeLastCell).Row + 1
Worksheets(3).Range("A1:C1").Rows(LastRow3).Value = Worksheets(1).Range("A1:C1").Rows(Cell1.Row).Value
Else
Exit For
End If
Next Cell3
Next Cell1
A: Using Find() is easier to manage than using a nested loop:
Sub Test()
Dim c As Range, f As Range
Dim ws1, ws3
Set ws1 = Worksheets(1)
Set ws3 = Worksheets(3)
For Each c In ws1.Range(ws1.Range("A1"), ws1.Cells(Rows.Count, 1).End(xlUp)).Cells
Set f = ws3.Range(ws3.Range("A1"), _
ws3.Cells(Rows.Count, 1).End(xlUp)).Find( _
What:=c.Value, lookat:=xlWhole)
If f Is Nothing Then
ws3.Cells(Rows.Count, 1).End(xlUp).Offset(1, 0).Resize(1, 3).Value = _
c.Resize(1, 3).Value
End If
Next c
End Sub
| |
doc_23535447
|
import flask
import pickle
from sklearn.feature_extraction.text import CountVectorizer
with open('model/KTcategory-predictor.pkl', 'rb') as f:
model = pickle.load(f)
with open('model/KTtfidf.pkl', 'rb') as f:
tfidf = pickle.load(f)
app = flask.Flask(__name__, template_folder='templates')
@app.route('/', methods=['GET', 'POST'])
def main():
count_vectorizer = CountVectorizer()
if flask.request.method == 'GET':
return(flask.render_template('main.html'))
if flask.request.method == 'POST':
text = flask.request.form['text']
text = text.split(" ")
input_tc = count_vectorizer.transform(text)
input_tfidf = tfidf.transform(input_tc)
predictions = model.predict(input_tfidf)
return (predictions)
if __name__ == '__main__':
app.run(debug = True)
| |
doc_23535448
|
CallAgents
name
email
phone_number
CallLogs
call_id
description (corresponds to the name from the CallAgents table)
action (connected or missed)
date
I want my result to look like this:
name, email, phone_number, nr_of_connected_calls, nr_of_missed_calls
So far I've tried this query but the results for nr_of_connected_calls and nr_of_missed_calls are the same:
SELECT username
,email
,phone_number
,COUNT(Connected.description) as nr_of_connected_calls
,COUNT(Missed.description) as nr_of_missed_calls
FROM CallAgents
Inner Join CallLogs as Connected
ON username = Connected.description AND Connected.action = 'connected'
Inner Join CallLogs as Missed
ON username = Missed.description AND Missed.action = 'missed'
GROUP BY name, email, phone_number
My SQL skills are still rather basic so I'm not getting this. Would be great if someone could help me out. Thanks so much!
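A common pattern for this kind of count is a single LEFT JOIN with conditional aggregation, so each log row is counted once — a sketch, untested against the question's schema (SUM over a boolean is MySQL-style; use CASE WHEN ... THEN 1 ELSE 0 END elsewhere):

```sql
SELECT ca.name,
       ca.email,
       ca.phone_number,
       SUM(cl.action = 'connected') AS nr_of_connected_calls,
       SUM(cl.action = 'missed')    AS nr_of_missed_calls
FROM CallAgents ca
LEFT JOIN CallLogs cl ON cl.description = ca.name
GROUP BY ca.name, ca.email, ca.phone_number;
```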
| |
doc_23535449
|
Everything runs fine when the database starts empty. However, when the dummy data.sql file runs before launching the application and objects are then inserted through the application, the insertion doesn't take previous IDs into account, resulting in duplicate entries until the number of seeded lines is reached:
I'll try explaining better with an example :
*
*At application start, the data.sql file inserts 3 dummy users.
*Reaching registration page to add a new user and submitting, Hibernate returns an error 'Duplicate entry '1' for PRIMARY key', then 'Duplicate entry '2' for PRIMARY key' and 'Duplicate entry '3' for PRIMARY key' after retrying two times.
*At the fourth retry, the user is added.
The ID therefore does auto-increment, but doesn't take into account previously inserted rows through data.sql.
Note that I've tried playing around with the generation type (AUTO / IDENTITY) of the User class with no success, and my hibernate datasource is on create-drop mode.
Update :
User entity :
@Data
@Builder
@Getter
@Setter
@AllArgsConstructor
@NoArgsConstructor
@Entity
@Table(name = "users")
public class User {
public final static Role DEFAULT_ROLE = new Role();
static {
DEFAULT_ROLE.setId(2);
DEFAULT_ROLE.setRole("NORMAL");
}
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "user_id")
private int id;
@Column(name = "user_name")
@Length(min = 5, message = "*Your user name must have at least 5 characters")
@NotEmpty(message = "*Please provide a user name")
private String userName;
@Column(name = "email")
@Email(message = "*Please provide a valid Email")
@NotEmpty(message = "*Please provide an email")
private String email;
@Column(name = "password")
@Length(min = 5, message = "*Your password must have at least 5 characters")
@NotEmpty(message = "*Please provide your password")
private String password;
@Column(name = "name")
@NotEmpty(message = "*Please provide your name")
private String name;
@Column(name = "last_name")
@NotEmpty(message = "*Please provide your last name")
private String lastName;
@Column(name = "active")
private Boolean active;
@ManyToMany(cascade = CascadeType.MERGE, fetch = FetchType.EAGER)
@JoinTable(name = "user_role", joinColumns = @JoinColumn(name = "user_id"), inverseJoinColumns = @JoinColumn(name = "role_id"))
private Set<Role> roles;
@Column(name = "phone_number")
@NotEmpty(message = "*Please provide a phone number")
private String phoneNumber;
@OneToOne(cascade = CascadeType.ALL)
@JoinColumn(name = "address_id", referencedColumnName = "address_id")
private Address address;
}
Any idea what would solve it? :)
Cheers.
A: The problem is not in Hibernate, but in the statements in data.sql.
In those INSERT statements you probably give explicit values for the user_id column, hence MySQL does not increment its AUTO_INCREMENT counter. Hibernate's default generation strategy for MySQL is IDENTITY, which relies on the AUTO_INCREMENT feature.
The easiest solution is to omit the users_id column in the INSERT statements. You can however also set the initial AUTO_INCREMENT value for the table to a higher number:
ALTER TABLE users AUTO_INCREMENT=1001
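For the first (omit-the-id) option, a data.sql row would leave out user_id so MySQL assigns it — a sketch with column names taken from the @Column annotations above and placeholder values:

```sql
INSERT INTO users (user_name, email, password, name, last_name, active, phone_number)
VALUES ('dummyuser1', 'dummy1@example.com', 'secret123', 'Dummy', 'One', TRUE, '0000000001');
```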
Edit: Your identity field is declared as int id, therefore it can never be null. Change it to Integer, so that Hibernate can distinguish between persisted (id != null) and not persisted entities.
| |
doc_23535450
|
SELECT *
FROM article
WHERE online= 1
AND IF(ISNULL(dateHOnline), dateHCreation, dateHOnline) <= Convert_TZ(Now(), "SYSTEM", "Europe/Paris")
But Convert_TZ(Now(),"SYSTEM","Europe/Paris") returns null, probably because my MySQL server does not have the time zone tables loaded (putting +02:00 returns a datetime).
So how can I modify that request (it's a VIEW) to implement the daylight saving feature in the comparison. I guess that just putting +02:00 instead of Europe/Paris would not be enough.
PS : I do not have control over the mysql engine
A: What you need to do is to load timezone information into mysql.
mysql_tzinfo_to_sql tz_dir
| |
doc_23535451
|
On searching, one method I found was this (https://stackoverflow.com/a/820124/7995937):
from netaddr import IPNetwork, IPAddress
if IPAddress("192.168.0.1") in IPNetwork("192.168.0.0/24"):
print "Yay!"
But since I have to loop this over 200,000 IP Addresses, and for each address loop over 10,000 subnets, I am unsure if this is efficient.
My first doubt, is checking "IPAddress() in IPNetwork()" just a linear scan or is it optimized in some way?
The other solution I came up with was to make a list with all the IPs contained in the IP Subnets(which comes to about 13,000,000 IPs without duplicates), and then sorting it. If I do this, then in my loop over the 200,000 IP Addresses I only need to do a binary search for each IP, over a larger set of IP Addresses.
for ipMasked in ipsubnets: # Here ipsubnets is the list of all subnets
setUnmaskedIPs = [str(ip) for ip in IPNetwork(ipMasked)]
ip_list = ip_list + setUnmaskedIPs
ip_list = list(set(ip_list)) # To eliminate duplicates
ip_list.sort()
I could then just perform binary search in the following manner:
for ip in myIPList: # myIPList is the list of 200,000 IPs
if bin_search(ip,ip_list):
print('The ip is present')
Is this method more efficient than the other one? Or is there any other more efficient way to perform this task?
A: This is probably not the best possible solution, but I'd suggest using a set rather than a list. Sets are optimized for checking if any given value is present in the set, so you're replacing your binary search with a single operation. Instead of:
ip_list = list(set(ip_list))
just do:
ip_set = set(ip_list)
and then the other part of your code becomes:
for ip in myIPList: # myIPList is the list of 200,000 IPs
if ip in ip_set:
print('The ip is present')
Edit: and to make things a bit more memory-efficient you can skip creating an intermediate list as well:
ip_set = set()
for ipMasked in ipsubnets:
ip_set.update([str(ip) for ip in IPNetwork(ipMasked)])
A: Okay, So sorting takes O(nlogn), In case of 13,000,000 you end up doing O(13000000log(13000000)). Then You are iterating over 200000 IP and doing binary search O(logn) on that sorted list on 13000000.
I sincerely doubt that's best solution. I suggest you use map
from netaddr import IPNetwork, IPAddress
l_ip_address = list(map(IPAddress, list_of_ip_address))  # list(), because in Python 3 map is a one-shot iterator
l_ip_subnet = list(map(IPNetwork, list_of_subnets))
if any(x in y for x in l_ip_address for y in l_ip_subnet):
print "FOUND"
A: Your IP address in in a subnet if N leading bits of that address match N leading bits of one of the N-bit subnets. So, start by making a list of empty sets. Encode each subnet as a 32-bit integer with the trailing bits masked out. For example, 1.2.3.4/23 equals (0x01020304 & 0xfffffe00) equals 0x01020200. Add this number to the 23rd set in the list, ie subnets[23]. Continue for all the subnets.
To see if an IP address is in your subnets, encode the IP address in the same way as a 32-bit number ipaddr and then (something like, untested code)
for N in range(32, 0, -1):
    mask = (0xffffffff >> (32 - N)) << (32 - N)
    if (ipaddr & mask) in subnets[N]:
        # have found ipaddr in one of our subnets
        break  # or do whatever...
else:
    # have not found ipaddr (the loop's else runs only when no break occurred)
    pass
Looking up a number in a set at worst O(log N) where N in the number of elements in the set. This code does it at most 32 times for the worst case of an ip address that is not in the sets of subnets. If the majority of the addresses are expected to be present, there's an optimisation to test the sets with the most elemnnts first. That might be
for N in ( 24, 16, 8, 29, 23, 28, 27, 26, 25, 22, 15, 21 ... )
or you could calculate the optimal sequence at runtime.
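A runnable sketch of this prefix-bucket idea in plain Python (the helper names are mine; addresses are encoded as 32-bit integers exactly as described above):

```python
def build_subnet_sets(cidrs):
    # subnets[N] holds the masked network address of every /N subnet
    subnets = [set() for _ in range(33)]
    for cidr in cidrs:
        addr, n = cidr.split("/")
        n = int(n)
        a, b, c, d = (int(x) for x in addr.split("."))
        ip = (a << 24) | (b << 16) | (c << 8) | d
        mask = ((1 << n) - 1) << (32 - n) if n else 0
        subnets[n].add(ip & mask)
    return subnets

def in_any_subnet(ip_str, subnets):
    a, b, c, d = (int(x) for x in ip_str.split("."))
    ip = (a << 24) | (b << 16) | (c << 8) | d
    for n in range(32, 0, -1):
        if not subnets[n]:
            continue  # no subnets of this prefix length
        mask = ((1 << n) - 1) << (32 - n)
        if (ip & mask) in subnets[n]:
            return True
    return False

subnets = build_subnet_sets(["192.168.0.0/24", "10.0.0.0/8"])
print(in_any_subnet("192.168.0.55", subnets))  # True
print(in_any_subnet("192.169.0.1", subnets))   # False
```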
| |
doc_23535452
|
(this code is inside main.swf)
function loadImage(url:String):void {
imageLoader = new Loader();
imageLoader.load(new URLRequest(url));
imageLoader.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, imageLoading);
imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, imageLoaded);
}
loadImage("test1.swf");
function imageLoaded(e:Event):void {
MyMovieClip.addChild(imageLoader);
}
The thing is, test1.swf needs a value which is in the loader swf (main.swf). How can I pass the value? Thanks!
A: You could create a class for the loaded SWF ( test1.swf ) which you can assign with the Document class.
In this class create a setter for the values you need to pass to the loaded SWF.
//Here the choice of Array is arbitrary & depends on what values
//you need to pass to the SWF...
private var _params:Array;
public function set params( value:Array ):void
{
_params = value;
}
Then in your Main class, you can do the following:
//Assuming you'd like to pass a number of values...
function imageLoaded(e:Event):void {
var content:MovieClip = imageLoader.content as CustomClass;
content.params = [ value1 , value2... valueN];
MyMovieClip.addChild(content);
}
A: You need to access the content of the loader to do this. In your imageLoaded function do this.
( e.target.content as Object ).someParam = "whatever";
There are a number of security restrictions to consider, (This is called cross scripting), but if you are loading two swfs from the same domain, you shouldn't have a problem.
| |
doc_23535453
|
I hide it with opacity: 0 for a fadeIn animation, and after that I set display to none, but I am not sure whether the animation is rendered nevertheless and consumes resources.
An answer would be highly appreciated.
A: According to this w3c draft, no; the animation is terminated:
Setting the display property to none will terminate any running animation applied to the element and its descendants.
You can verify this by looking at which animation events are fired. We'll set up an infinitely alternating animation and then you can toggle the element's display with a button:
(d => {
const $ = d.querySelector.bind(d),
div = $("#test-div"),
button = $("#test-button"),
animationEvents = [
"animationstart",
"animationiteration",
"animationcancel",
"animationend",
],
animationEventHandler = e => console.log(e.type);
for (let animationEvent of animationEvents) {
div.addEventListener(animationEvent, animationEventHandler);
}
button.addEventListener("click", e => {
const lastDisplay = div.style.display || "block";
div.style.display = lastDisplay === "block" ? "none": "block";
e.target.textContent = "Toggle display: " + lastDisplay;
});
})(document);
#test-div {
padding: 30px;
animation: test-animation 1000ms alternate infinite steps(2, jump-none);
}
@keyframes test-animation {
from {
background-color: #0f0;
}
to {
background-color: #f00;
}
}
<button id="test-button" type="button">Toggle display: none</button>
<div id="test-div">test div</div>
You can see that animationiteration stops firing once the element is no longer displayed even though the animation is, "infinite." You could infer from this that the animation runs only while the element is displayed. Indeed, the animation is restarted from 0% when you show the element again.
| |
doc_23535454
|
1) Trying to broadcast ping and using arp -a only gives the Gateway address, since docker is running inside a subnet.
2) Using nmap I cannot verify the MAC, I could only see if there is a live host in that address.
I tried the above by running docker in --privileged mode and still the results are the same.
| |
doc_23535455
|
import math
from datetime import datetime

def first_day_quarter_f(our_date):
    quarter = our_date.quarter
    quarter_start_date = datetime(our_date.year, 3*quarter - 2, 1)
    return quarter_start_date

def days_in_between_f(our_date):
    quarter_start_date = first_day_quarter_f(our_date)
    delta = our_date - quarter_start_date
    days = delta.days + 1
    return days

def first_day_name_f(our_date):
    quarter_start_date = first_day_quarter_f(our_date)
    first_day = quarter_start_date.strftime("%A")
    return first_day

def week_f(our_date):
    days = days_in_between_f(our_date)
    first_day = first_day_name_f(our_date)
    if first_day == 'Monday':
        if days < 7:
            week = 1
        else:
            if days % 7 == 0:
                week = math.floor(days/7)
            else:
                week = math.floor(days/7) + 1
    elif first_day == 'Tuesday':
        if days < 6:
            week = 1
        else:
            days = days - 6
            if days % 7 == 0:
                week = 1 + math.floor(days/7)
            else:
                week = 1 + math.floor(days/7) + 1
    elif first_day == 'Wednesday':
        if days < 5:
            week = 1
        else:
            days = days - 5
            if days % 7 == 0:
                week = 1 + math.floor(days/7)
            else:
                week = 1 + math.floor(days/7) + 1
    elif first_day == 'Thursday':
        if days < 4:
            week = 1
        else:
            days = days - 4
            if days % 7 == 0:
                week = 1 + math.floor(days/7)
            else:
                week = 1 + math.floor(days/7) + 1
    elif first_day == 'Friday':
        if days < 3:
            week = 1
        else:
            days = days - 3
            if days % 7 == 0:
                week = 1 + math.floor(days/7)
            else:
                week = 1 + math.floor(days/7) + 1
    elif first_day == 'Saturday':
        if days < 3:
            week = 1
        else:
            days = days - 2
            if days % 7 == 0:
                week = 1 + math.floor(days/7)
            else:
                week = 1 + math.floor(days/7) + 1
    else:
        if days < 3:
            week = 1
        else:
            days = days - 1
            if days % 7 == 0:
                week = 1 + math.floor(days/7)
            else:
                week = 1 + math.floor(days/7) + 1
    return week
I have tried to get the first day of the quarter, then calculated the difference between the given date and that first day, and then calculated the week number based on the name of the first day of the quarter. Is there any other way to do this? For example, 20-Aug-2020 falls in which week of its quarter? The answer should be week 8.
A: from datetime import datetime
import math
date_format = "%m/%d/%Y"
a = datetime.strptime('1/1/2020', date_format) # First day in the year, 1st January 2020
b = datetime.strptime('8/20/2020', date_format) # Your date 20th August 2020
delta = b - a
num_days = delta.days
num_of_week = num_days / 7
week_in_quarter = num_of_week % 13
print(math.ceil(week_in_quarter))
Output:
8
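The answer above assumes every quarter is exactly 13 weeks, which works for this date but can drift at quarter boundaries. A sketch of an alternative (the helper name week_of_quarter is mine, not from the question) that counts Monday-to-Sunday weeks from the quarter's actual first day, matching the convention used in the question:

```python
from datetime import date

def week_of_quarter(d):
    # First day of the quarter containing d (months 1, 4, 7, 10).
    quarter_start = date(d.year, 3 * ((d.month - 1) // 3) + 1, 1)
    # Weeks run Monday-Sunday; the partial week containing the
    # quarter's first day counts as week 1.
    return ((d - quarter_start).days + quarter_start.weekday()) // 7 + 1

print(week_of_quarter(date(2020, 8, 20)))  # -> 8
```

For 20-Aug-2020 the quarter starts on Wednesday 1-Jul-2020, so the first (partial) week is Jul 1-5 and week 8 covers Aug 17-23.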
| |
doc_23535456
|
I think this will be the perfect method to help with my homework, but I'm having trouble understanding how to use it.
How do I detect when the sequence </p> comes up?
This is what i have so far:
String s = "</p>";
char c;
while(reads.hasNext()){
c = reads.next();
s = "" + c;
if (inputPath.contains(?)) {
// how do i make it return true if it contains < /p>
}
}
A: Assuming you are talking about java.lang.CharSequence, it is an interface, and is the parent for String, StringBuilder, and StringBuffer (and some other things).
It looks like you want to check whether the input you have so far contains a given string (</p>). To do that you could make use of the Pattern class, or the String.contains or String.indexOf method (or the StringBuilder version).
A: You can test that one String contains another using indexOf():
if (inputPath.indexOf("< /p>") != -1) {
// inputPath contains "< /p>" somewhere in it
}
| |
doc_23535457
|
Here is how terminal responds when I have a git command
git log
Not sure how to fix it.
this is how .bash_profile loks like
"$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" # Load RVM function
if [ -f /usr/local/etc/bash_completion.d/git-completion.bash ]; then
. /usr/local/etc/bash_completion.d/git-completion.bash
fi
export PS1='\[\033[01;32m\]\u@\h\[\033[00m\] \[\033[01;34m\]\w\[\033[00m\]$(git branch &>/dev/null; if [ $? -eq 0 ]; then echo "\[\033[01;33m\]($(git branch | grep ^*|sed s/\*\ //))\[\033[00m\]"; fi)$ '
A: Change the git color configuration with the following command:
git config --global color.ui [always|auto|never]
| |
doc_23535458
|
I keep getting the error "-bash p4: command not found".
I followed the top 7 steps here and got the same error:
http://www.endlesslycurious.com/2008/11/11/configuring-p4-command-line-client-on-mac-os-x/
I couldn't find anything else useful when searching.
Has anyone else encountered a similar issue and resolved it?
A: Just drop the "p4" executable in /usr/local/bin or even /usr/bin if you prefer. ;-)
A: How-To steps:
(1) Download p4 file for macOS from:
https://www.perforce.com/downloads/helix-command-line-client-p4
(2) Copy the item to any local folder under any custom folder. For ex: '/Users//perforce'
(3) Run the following commands in terminal.
chmod +x /Users/<yourname>/perforce/p4
export PATH=/Users/<yourname>/perforce:$PATH
(4) Now run 'p4' in terminal.
This should not fail!
A: To add to the existing answers, on my macOS Mojave, downloading Perforce 2019.1/1796703 for OSX 10.10+ using Safari gives me a p4.dms file.
You must rename it to p4 first before using it. Any attempt to unarchive the .dms file will fail. It is not a valid DMS archive.
| |
doc_23535459
|
A: I've solved the problem: you can use it without a server; however, you will have to set the site correctly. If you can't be sure what the URL will be (it can start with file, but you don't know the middle), then hosting online is the only solution, even if no callback is needed.
A: Reading the Javascript SDK documentation and browsing the SDK on github, you can see that you only need your server to host the html page that includes the facebook JS library almost just like @Anywhere!!
No need to contact your server for any Facebook processing, if you don't want to!
| |
doc_23535460
|
I want to show only one element (h4) when I click a button.
For example, I have 5 plans and each plan has one button and one h4 tag.
When I click the third button, only the third h4 tag should show.
With my current code, when any button is clicked it displays every element's h4 tag in the map. Is there any way I could activate/deactivate the component for just one element of the map in this case?
Thank you in advance.
import React, { useState } from "react"
const TourPage = ({ plans }) => {
  const [isOpen, setOpen] = React.useState(false)
  const toggleOpen = () => {
    setOpen(!isOpen)
  }
  return (
    <article>
      {plans.map((plan, index) => {
        return (
          <div key={index}>
            <button
              className="btn"
              onClick={toggleOpen}
            > button </button>
            <div
              className={`${isOpen ? "active" : "inactive"}`}
            >
              <h4>{plan.iternary}</h4>
            </div>
          </div>
        )
      })}
    </article>
  )
}
A: You need to have multiple states, state for each plan.
Here is a simple example, you can achieve it however you like, but the concept is the same: manage multiple states within the component.
const plans = [{ id: `id1`, iternary: "a1" }, { id: `id2`, iternary: "a2" }];
const INITIAL = {
id1: false,
id2: false
};
const TourPage = () => {
const [openManager, setOpenManager] = React.useState(INITIAL);
return (
<article>
{plans.map(plan => {
const onClick = () => {
setOpenManager(prev => ({ ...prev, [plan.id]: !prev[plan.id] }));
};
return (
<div key={plan.id}>
<button className="btn" onClick={onClick}>
button
</button>
<div className={`${openManager[plan.id] ? "active" : "inactive"}`}>
<h4>{plan.iternary}</h4>
</div>
</div>
);
})}
</article>
);
};
| |
doc_23535461
|
I cannot seem to find any information/documentation on how to open this dialog in code.
I have an app written in Swift that does not use xib/storyboards, so there is no default "About" menu item. While writing a custom one isn't exactly hard, I am still wondering how to just open the standard one.
Any pointers to documentation or code snippets? I failed to find any.
A: Call orderFrontStandardAboutPanel(_:) on the application instance
NSApp.orderFrontStandardAboutPanel(nil)
| |
doc_23535462
|
<table>
<a href="screener.ashx?v=111&f=targetprice_a5&r=21"/>
<a href="screener.ashx?v=111&f=targetprice_a5&r=41"/>
<a href="screener.ashx?v=111&f=targetprice_a5&r=61"/>
</table>
Take note that only one new parameter is added to the URL, "r=21"; the rest are consistent throughout the different result pages.
Is this even possible with google sheets?
Here's what I have. The goal of this idea is to build out stock market screeners that are updated every 3 mins, which allows an integration/view from Notion.
=QUERY(IMPORTHTML("https://finviz.com/screener.ashx?v=111&f=cap_smallover,earningsdate_thismonth,fa_epsqoq_o15,fa_grossmargin_o20,sh_avgvol_o750,sh_curvol_o1000,ta_perf_52w10o,ta_rsi_nob50&ft=4&o=perfytd&ar=180","Table","19"),"SELECT Col1,Col2,Col7,Col8,Col9,Col10,Col11")
A: try:
=QUERY({
IMPORTHTML("https://finviz.com/screener.ashx?v=111&f=cap_smallover,earningsdate_thismonth,fa_epsqoq_o15,fa_grossmargin_o20,sh_avgvol_o750,sh_curvol_o1000,ta_perf_52w10o,ta_rsi_nob50&ft=4&o=perfytd&ar=180","Table","19");
IMPORTHTML("https://finviz.com/screener.ashx?v=111&f=cap_smallover,earningsdate_thismonth,fa_epsqoq_o15,fa_grossmargin_o20,sh_avgvol_o750,sh_curvol_o1000,ta_perf_52w10o,ta_rsi_nob50&ft=4&o=perfytd&r=21&ar=180","Table","19");
IMPORTHTML("https://finviz.com/screener.ashx?v=111&f=cap_smallover,earningsdate_thismonth,fa_epsqoq_o15,fa_grossmargin_o20,sh_avgvol_o750,sh_curvol_o1000,ta_perf_52w10o,ta_rsi_nob50&ft=4&o=perfytd&r=41&ar=180","Table","19");
IMPORTHTML("https://finviz.com/screener.ashx?v=111&f=cap_smallover,earningsdate_thismonth,fa_epsqoq_o15,fa_grossmargin_o20,sh_avgvol_o750,sh_curvol_o1000,ta_perf_52w10o,ta_rsi_nob50&ft=4&o=perfytd&r=61&ar=180","Table","19")},
"select Col1,Col2,Col7,Col8,Col9,Col10,Col11 where Col1 matches '\d+'", 1)
| |
doc_23535463
|
I think I'm negotiating the proxy OK (because the other installs worked), so does this indicate that there is something different at the end server giving me the Grails content, or is my proxy objecting to the Grails content?
Any advice on how to debug / fix would be appreciated
Windows XP, cygwin (very recent download), Corporate proxy
$ sdk current
Using:
gradle: 2.7
groovy: 2.4.4
lazybones: 0.8.1
vertx: 3.0.0
$ sdk install grails
Downloading: grails 3.0.7
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (56) Received HTTP code 403 from proxy after CONNECT
A: After opening your Cygwin command prompt window, type in this:
export http_proxy=http://yourusername:yourpassword@host:port/
For example:
export http_proxy=http://superman:batman@111.112.113.114:8080/
Hope this will work for you (only worked for me with half of the available candidates, sad).
A: I added 'verbose' to my .curlrc and spotted the following:
*
*Issue another request to this URL: 'http://dl.bintray.com/aalmiray/Griffon/griffon-1.5.0-bin.zip' - paste URL into my browser and it downloads (but throws away the download... love life inside a corporate ;-)
*Issue another request to this URL: 'https://github.com/grails/grails-core/releases/download/v3.0.8/grails-3.0.8.zip' - paste URL into my browser and it's blocked - no corporate access to github is allowed... grrrr
I wonder if the Grails team would consider using bintray instead of github?
A: In my case, to get it working behind a corporate proxy:
*
*set up proxy env variables
export http_proxy="http://user:pwd@10.xxx.xxx.xxx:yy"
export https_proxy="http://user:pwd@10.xxx.xxx.xxx:yy"
*
*some corporate proxies use certificates signed by their own CAs; you should install the root CAs. Another, not recommended, option is adding sdkman_insecure_ssl=true to ~/.sdkman/etc/config
| |
doc_23535464
|
Cloth Issue
| |
doc_23535465
|
(browser.find_element_by_id ('formComp: buttonBack').
When this element appears, I want the loop to stop and go to the next block of code.
I tested it that way, but it raised an error: Python reported that the element "formComp:buttonBack" was not found. But that is exactly the point: if it is not found, the loop should continue:
while browser.find_element_by_id('formComp:repeatCompromissoLista:0:tableRealizacao:0:subtableVinculacoes:0:vinculacao_input'):
    vinc = wait.until(EC.presence_of_element_located((By.ID, 'formComp:repeatCompromissoLista:0:tableRealizacao:0:subtableVinculacoes:0:vinculacao_input')))
    vinc = browser.find_element_by_id('formComp:repeatCompromissoLista:0:tableRealizacao:0:subtableVinculacoes:0:vinculacao_input')
    vinc.send_keys('400')
    enterElem5 = wait.until(EC.element_to_be_clickable((By.ID, 'formComp:buttonConfirmar')))
    enterElem5 = browser.find_element_by_id('formComp:buttonConfirmar')
    enterElem5.send_keys(Keys.ENTER)
    time.sleep(int(segundosv))
    if browser.find_element_by_id('formComp:buttonRetornar') == True:
        break
    else:
        continue
A: Try like this hope this helps.Check the length count of the button more than 0.
if len(browser.find_elements_by_id('formComp:buttonRetornar')) > 0:
    break
else:
    continue
A: find_element_by_id() does not return False when an element is not found. Instead, it raises selenium.common.exceptions.NoSuchElementException. You can handle the exception to get the flow control you are looking for:
from selenium.common.exceptions import NoSuchElementException

try:
    browser.find_element_by_id('formComp:buttonRetornar')
    break
except NoSuchElementException:
    continue
| |
doc_23535466
|
Now, I have a bunch of git repositories, for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command..
Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup.
The problem is, since it's a private repository, and git doesn't allow checking out of a specific folder (that I could push to github as a separate project, but have the changes appear in both the master-repo, and the sub-repos)
I could use the git submodule system, but it doesn't act how I want it too (submodules are pointers to other repositories, and don't really contain the actual code, so it's useless for backup)
Currently I have a folder of git-repos (for example, ~/code_projects/proj1/.git/ ~/code_projects/proj2/.git/), and after doing changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then do git push backupdrive1, git push mymemorystick etc
So, the question: how do you manage your personal code and projects with git repositories, and keep them synced and backed up?
A: I would strongly advise against putting unrelated data in a given
Git repository. The overhead of creating new repositories is quite
low, and that is a feature that makes it possible to keep
different lineages completely separate.
Fighting that idea means ending up with unnecessarily tangled history,
which renders administration more difficult and--more
importantly--"archeology" tools less useful because of the resulting
dilution. Also, as you mentioned, Git assumes that the "unit of
cloning" is the repository, and practically has to do so because of
its distributed nature.
One solution is to keep every project/package/etc. as its own bare
repository (i.e., without working tree) under a blessed hierarchy,
like:
/repos/a.git
/repos/b.git
/repos/c.git
Once a few conventions have been established, it becomes trivial to
apply administrative operations (backup, packing, web publishing) to
the complete hierarchy, which serves a role not entirely dissimilar to
"monolithic" SVN repositories. Working with these repositories also
becomes somewhat similar to SVN workflows, with the addition that one
can use local commits and branches:
svn checkout --> git clone
svn update --> git pull
svn commit --> git push
You can have multiple remotes in each working clone, for the ease of
synchronizing between the multiple parties:
$ cd ~/dev
$ git clone /repos/foo.git # or the one from github, ...
$ cd foo
$ git remote add github ...
$ git remote add memorystick ...
You can then fetch/pull from each of the "sources", work and commit
locally, and then push ("backup") to each of these remotes when you
are ready with something like (note how that pushes the same commits
and history to each of the remotes!):
$ for remote in origin github memorystick; do git push $remote; done
The easiest way to turn an existing working repository ~/dev/foo
into such a bare repository is probably:
$ cd ~/dev
$ git clone --bare foo /repos/foo.git
$ mv foo foo.old
$ git clone /repos/foo.git
which is mostly equivalent to a svn import--but does not throw the
existing, "local" history away.
Note: submodules are a mechanism to include shared related
lineages, so I indeed wouldn't consider them an appropriate tool for
the problem you are trying to solve.
A: ,I haven't tried nesting git repositories yet because I haven't run into a situation where I need to. As I've read on the #git channel git seems to get confused by nesting the repositories, i.e. you're trying to git-init inside a git repository. The only way to manage a nested git structure is to either use git-submodule or Android's repo utility.
As for that backup responsibility you're describing I say delegate it... For me I usually put the "origin" repository for each project at a network drive at work that is backed up regularly by the IT-techs by their backup strategy of choice. It is simple and I don't have to worry about it. ;)
A: I also am curious about suggested ways to handle this and will describe the current setup that I use (with SVN). I have basically created a repository that contains a mini-filesystem hierarchy including its own bin and lib dirs. There is script in the root of this tree that will setup your environment to add these bin, lib, etc... other dirs to the proper environment variables. So the root directory essentially looks like:
./bin/ # prepended to $PATH
./lib/ # prepended to $LD_LIBRARY_PATH
./lib/python/ # prepended to $PYTHONPATH
./setup_env.bash # sets up the environment
Now inside /bin and /lib there are the multiple projects and and their corresponding libraries. I know this isn't a standard project, but it is very easy for someone else in my group to checkout the repo, run the 'setup_env.bash' script and have the most up to date versions of all of the projects locally in their checkout. They don't have to worry about installing/updating /usr/bin or /usr/lib and it keeps it simple to have multiple checkouts and a very localized environment per checkout. Someone can also just rm the entire repository and not worry about uninstalling any programs.
This is working fine for us, and I'm not sure if we'll change it. The problem with this is that there are many projects in this one big repository. Is there a git/Hg/bzr standard way of creating an environment like this and breaking out the projects into their own repositories?
A: I want to add to Damien's answer where he recommends:
$ for remote in origin github memorystick; do git push $remote; done
You can set up a special remote to push to all the individual real remotes with 1 command; I found it at http://marc.info/?l=git&m=116231242118202&w=2:
So for "git push" (where it makes
sense to push the same branches
multiple times), you can actually do
what I do:
*
*.git/config contains:
[remote "all"]
url = master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6
url = login.osdl.org:linux-2.6.git
*and now git push all master will push the "master" branch to both
of those remote repositories.
You can also save yourself typing the URLs twice by using the construction:
[url "<actual url base>"]
insteadOf = <other url base>
A: What about using mr for managing your multiple Git repos at once:
The mr(1) command can checkout, update, or perform other actions on a
set of repositories as if they were one combined repository. It
supports any combination of subversion, git, cvs, mercurial, bzr,
darcs, cvs, vcsh, fossil and veracity repositories, and support for
other revision control systems can easily be added. [...]
It is extremely configurable via simple shell scripting. Some examples
of things it can do include:
[...]
*
*When updating a git repository, pull from two different upstreams and merge the two together.
*Run several repository updates in parallel, greatly speeding up the update process.
*Remember actions that failed due to a laptop being offline, so they can be retried when it comes back online.
A: There is another method for having nested git repos, but it doesn't solve the problem you're after. Still, for others who are looking for the solution I was:
In the top level git repo just hide the folder in .gitignore containing the nested git repo. This makes it easy to have two separate (but nested!) git repos.
| |
doc_23535467
|
npm install tabler-ui
I add the tabler package to resources/sass/tabler.scss and then add the SASS file to webpack.mix.js.
mix.js('resources/js/app.js', 'public/js')
.sass('resources/sass/app.scss', 'public/css')
.sass('resources/sass/tabler.scss', 'public/css');
But after trying to npm run watch, I got the following error.
File to import not found or unreadable:
../../../node_modules/bootstrap/scss/bootstrap.scss.
When I check the node_modules/tabler-ui/src/assets/scss/bundle I find a line which imports the bootstrap:
@import '../../../node_modules/bootstrap/scss/bootstrap.scss';
The error is for this line; I think this line should be like this:
@import '~bootstrap/scss/bootstrap.scss';
I cannot change this line because it is in the /node_modules folder. What should I do to solve this problem?
A: In your app.scss
replace this:
// Bootstrap
@import '~bootstrap/scss/bootstrap';
with this:
// Tabler
@import '~tabler-ui/src/assets/scss/variables';
@import '~bootstrap/scss/bootstrap';
@import '~tabler-ui/src/assets/scss/dashboard/dashboard';
A: Looks like bootstrap is a peer dependency.
Run npm install bootstrap to install it
| |
doc_23535468
|
a = navigator.language;
b = navigator.userLanguage;
language = (a && !b)? a.toLowerCase() : b;
pl = 'pl';
ru = 'ru';
en = 'en'||'us'||'au'||'bz'||'ca'||'gb'||'ie'||'jm'||'nz'||'tt'||'za';
switch (language) {
case pl: window.location.replace('polish.html'); break;
case ru: window.location.replace('russian.html'); break;
case en: window.location.replace('english.html'); break;
}
In general, the above script works, except for one problem: the browser continually refreshes the page. How can I fix this problem?
A: Your problem is that you are continually reloading the page, regardless of your current state. If your user's language is English and you're on english.html, there's no reason to reload the page.
var language = (navigator.language || navigator.userLanguage).toLowerCase(),
// simpler way of getting the user's language (take advantage of || in JS)
en = [ "en", "us", "au", "bz", "ca", "gb", "ie", "jm", "nz", "tt", "za" ],
// we'll need an array of the languages associated with English
currentPage = window.location.href.toLowerCase();
if (language == "pl" && currentPage.indexOf("polish") == -1) {
// if the user's language is polish and the page's link doesn't contain polish
// in it, redirect the user to polish.html
window.location.replace("polish.html");
} else if (language == "ru" && currentPage.indexOf("russian") == -1) {
// same concept as above
window.location.replace("russian.html");
} else if (en.indexOf(language) > -1 && currentPage.indexOf("english") == -1) {
// here, we're checking whether their language is in the array
// of user languages that should default to english
window.location.replace("english.html");
}
You can even simplify the above logic by removing en.indexOf(language) > -1 from the last else if statement. This'll make the default landing page english.html.
| |
doc_23535469
|
SELECT YEAR(o.date_added) AS YEAR, MONTH(o.date_added) AS MONTH, Count(*) AS COUNT FROM `order` o WHERE (YEAR(o.date_added) =2012) GROUP BY year, month;
There are no records for Months 1 and 2 but I want the Count to return 0 for these months
| |
doc_23535470
|
function pp(r)
{
console.log("In Function", r);
console.log("In Function", q);
console.log("In Function", m);
}
If I execute this, I am able to use the variable q inside the function, even though it is not defined as a function parameter. However, if I use another variable in its place, it throws an error.
pp('op', q='rr')
In Function op
In Function rr
Uncaught ReferenceError: m is not defined
Now if I remove the line console.log("In Function", q); I still get the error Uncaught ReferenceError: m is not defined which is acceptable as q is a named variable.
How am I able to use a variable passed to a function that is not defined in the function parameter?
A: pp('op', q='rr') - this is not the right approach. It will create a global variable 'q' with the value 'rr'. That's the reason you are getting the value of q.
one approach is,
function pp(r, q='rr', m)
{
console.log("In Function", r);
console.log("In Function", q);
console.log("In Function", m);
}
pp('op') // r -> op , q -> rr and m is undefined
[or] use arguments
function pp()
{
if(arguments.length > 0) {
console.log("In Function", arguments[0]);//op1
console.log("In Function", arguments[1]);//undefined
console.log("In Function", arguments[2]);//undefined
}
}
pp('op1')
A: You can use Javascript Rest Parameters. ES6 provides a new kind of parameter called Rest Parameter that has a prefix of three dots (...)
Let's see how to use it:
function pp(r, ...args) { // ...args contains all rest parameters
console.log("In Function", r); // op
console.log("All other passed values", args) // [rr, tt]
console.log(args[0]) // 0th index value => rr
console.log(args[1]) // 1st index value => tt
}
pp('op', 'rr', 'tt');
| |
doc_23535471
|
on parent html:
<cl-sort-n-reverse flip-reverse="flipReverse()" set-key="setKey(key)" sort-key="sortKey" rev="reverse" key="body" title="Content"></cl-sort-n-reverse>
in directive js:
.directive('clSortNReverse', function () {
return {
restrict: 'E',
scope:{
innerKey : '@key',
title : '@title',
inReverse: '=rev',
sortKey:'=',
flipReverse:'&',
setKey:'&'
},
transclude :true,
templateUrl:'../devhtml/common/sortNReverse.html',
link: function (scope, element, attrs) {
}
};
});
in template:
<span>
<div class="btn" ng-if="sortKey!=innerKey" data-ng-click="setKey({key:innerKey});">
{{title}}
</div>
<div class="btn" ng-if="sortKey==innerKey" data-ng-click="flipReverse();">
<b>{{title}}</b>
<i ng-if="inReverse===true" class="fa fa-caret-up"></i>
<i ng-show="inReverse===false" class="fa fa-caret-down"></i>
</div>
</span>
I would really like to be able to pass both functions and scope-level vars like sortKey and reverse as attributes inside one object I can declare in my controller. The answers I found here regarding adding objects to directive attributes all dealt with simpler cases in which the object encapsulates only simple strings or numbers. I'm stumped at how to define that options object to include $scope-level stuff.
A: Is this what you meant?
In your controller:
$scope.sortReverseConfig = {
flipReverse: function() {...},
setKey: function(key) {...},
sortKey: '',
innerKey: $scope.body,
inReverse: $scope.reverse,
title: $scope.content
}
In your html:
<cl-sort-n-reverse config="sortReverseConfig"></cl-sort-n-reverse>
Directive:
.directive('clSortNReverse', function () {
return {
restrict: 'E',
scope:{
config: '='
},
transclude :true,
templateUrl:'../devhtml/common/sortNReverse.html',
link: function (scope, element, attrs) {
}
};
});
Directive template:
<span>
<div class="btn" ng-if="config.sortKey != config.innerKey" data-ng-click="config.setKey({key:config.innerKey});">
{{config.title}}
</div>
<div class="btn" ng-if="config.sortKey == config.innerKey" data-ng-click="config.flipReverse();">
<b>{{config.title}}</b>
<i ng-if="config.inReverse === true" class="fa fa-caret-up"></i>
<i ng-show="config.inReverse === false" class="fa fa-caret-down"></i>
</div>
</span>
| |
doc_23535472
|
*
*I have an imploded array (1, 5, 3, 4, ..).
*Two tables where I need to use UNION to merge column atribute.
*And return one result where atributes are equal or greater with my array order by rand().
Is there some simple way to approach this?
I tried using the IN() clause, but it will not return results equal to or greater than the values in my array.
SELECT av.sentence
FROM atributes_workers AS a
LEFT JOIN atributes_workers_sentences AS av ON a.id = av.parent
LEFT JOIN atributes_workers_sentences_other_atributes AS avd ON av.id = avd.sentence_id
UNION
WHERE atribute IN (1,5,3,4)
ORDER BY RAND()
But it doesn't work.
atributes_workers_sentences does have one column with atribute too, that's why I need to search in two tables and merge these cols.
| |
doc_23535473
|
from tkinter import Tk, Label, RAISED, Button, Entry
self.window = Tk()
#Keyboard
labels = [['q','w','e','r','t','y','u','i','o','p'],
['a','s','d','f','g','h','j','k','l'],
['z','x','c','v','b','n','m','<']]
n = 10
for r in range(3):
    for c in range(n):
        n -= 1
        label = Label(self.window,
                      relief=RAISED,
                      text=labels[r][c])
        label.grid(row=r, column=c)
        continue
This gives me the first row, but it does not return anything else. I tried simply using 10 as the range, which created the first two rows of the keyboard, but it still did not continue onto the last row.
A: Your issue is in the line n -= 1. Every time a label is created, you make n one less; after the first whole row, n == 0, so the inner loop becomes for c in range(0), which is empty and is skipped entirely (there are no contents to loop over).
A better solution involves iterating through the lists instead of through the indexes- for loops take any iterable (list, dictionary, range, generator, set, &c.);
for lyst in labels:
    # lyst is each list in labels
    for char in lyst:
        # char is the character in that list
        label = Label(... text=char)  # everything else in the Label() looks good.
        label.grid(...)  # You could use counters for this or use enumerate() - ask if you need.
        # The continue here was entirely irrelevant.
A: Is this what you want it to do? Let me know if you need me to explain it further, but basically what I'm doing is first filling the columns in each row: row stays 0 while I loop through the columns (the inner list) and fill in each of the keys, then I move on to the next row, and so on.
from tkinter import Tk, Label, RAISED, Button, Entry
window = Tk()
#Keyboard
labels = [['q','w','e','r','t','y','u','i','o','p'],
['a','s','d','f','g','h','j','k','l'],
['z','x','c','v','b','n','m','<']]
for r in labels:
    for c in r:
        label = Label(window, relief=RAISED, text=str(c))
        label.grid(row=labels.index(r), column=r.index(c))
window.mainloop()
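As a side note, enumerate() gives you both the index and the item directly, which avoids the .index() lookups above (those rescan the list on every iteration and would misplace duplicate characters). A small sketch of the (row, column) grid positions it produces, independent of tkinter:

```python
labels = [['q','w','e','r','t','y','u','i','o','p'],
          ['a','s','d','f','g','h','j','k','l'],
          ['z','x','c','v','b','n','m','<']]

# In the tkinter version you would call label.grid(row=r, column=c) here.
positions = {}
for r, row in enumerate(labels):
    for c, char in enumerate(row):
        positions[char] = (r, c)

print(positions['q'])  # -> (0, 0)
print(positions['<'])  # -> (2, 7)
```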
| |
doc_23535474
|
A: Using pySerial:
Python 2.x:
import serial
byte = 42
out = serial.Serial("/dev/ttyS0") # "COM1" on Windows
out.write(chr(byte))
Python 3.x:
import serial
byte = 42
out = serial.Serial("/dev/ttyS0") # "COM1" on Windows
out.write(bytes([byte]))
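A note on bytes() in Python 3: calling it with a bare integer produces that many zero bytes, not a single byte with that value, so a single byte value must be wrapped in a list (or an equivalent iterable):

```python
# bytes(int) allocates that many zero-valued bytes:
print(bytes(3))     # -> b'\x00\x00\x00'
# bytes(iterable_of_ints) builds the bytes from the values:
print(bytes([42]))  # -> b'*'
```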
A: Google says:
*
*http://pyserial.sourceforge.net/
*http://balder.prohosting.com/ibarona/en/python/uspp/uspp_en.html
if you're using Uspp; to write on serial port documentation says
| |
doc_23535475
|
def to_do_count
  Task.where(status: 0).count
end

def in_progress_count
  Task.where(status: 1).count
end

def paused_count
  Task.where(status: 2).count
end

def completed_count
  Task.where(status: 3).count
end
I need help optimizing this code, as there is a lot of repetition.
A: def count_of(type)
  Task.where(status: get_status_type(type)).count
end

def get_status_type(type)
  {to_do: 0, in_progress: 1, paused_count: 2, completed_count: 3}[type]
end
Now call:
count_of(:to_do)
instead of
to_do_count
A: Option 1
def task_count(status)
  Task
    .where(status: { to_do: 0, in_progress: 1, paused_count: 2, completed_count: 3 }[status])
    .count
end
task_count(:to_do)
task_count(:in_progress)
Option 2
You can simplify it by using scopes
class Task
  scope :to_do, -> { where(status: 0) }
  scope :in_progress, -> { where(status: 1) }
  scope :paused_count, -> { where(status: 2) }
  scope :completed_count, -> { where(status: 3) }
end
Then helper can look like this:
def task_count(status)
Task.send(status).count
end
task_count(:to_do)
A: Use hash constant
TASK = {
to_do: 0,
in_progress: 1,
paused_count: 2,
completed_count: 3
}
def self.count_of(type)
Task.where(status: TASK[type]).count
end
Call it on class
Task.count_of(:to_do)
A: You can define a STATUSES constant and then define those methods using runtime method definition in Ruby. The code will look like:
STATUSES = {
to_do: 0,
in_progress: 1,
paused: 2,
completed: 3
}
def count_for(status)
Task.where(status: status).count
end
STATUSES.each do |k, v|
define_method("#{k}_count"){ count_for(v) }
end
Now you can call all of these methods, since they were defined dynamically:
*
*to_do_count
*in_progress_count
*paused_count
*completed_count
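Outside Rails, the same define_method pattern can be sketched with a plain-Ruby stand-in for Task (the TaskCounter class and its in-memory status list are hypothetical, just to show the metaprogramming; in Rails the count would come from Task.where as above):

```ruby
class TaskCounter
  STATUSES = { to_do: 0, in_progress: 1, paused: 2, completed: 3 }.freeze

  # statuses is a plain array standing in for the status column values
  def initialize(statuses)
    @statuses = statuses
  end

  def count_for(value)
    @statuses.count(value)
  end

  # one <name>_count method per status, generated at class-definition time
  STATUSES.each do |name, value|
    define_method("#{name}_count") { count_for(value) }
  end
end

counter = TaskCounter.new([0, 0, 1, 3])
puts counter.to_do_count # 2
```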
| |
doc_23535476
|
Error signing output with public key from file 'redacted.snk' -- The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)
The closest related SO question I could find is this one here. I have attempted all of the recommendations with no success, including:
*
*Cleaning my temp folder
*Editing the csproj with the UseSharedCompilation tag
*Set the "maximum number of parallel project builds" to 1
*Restarted my computer
I'm running out of ideas on this one. Not sure what to try next. My hunch is that the error isn't 100% accurate (i.e. the file is not being used by another process), and that this has something to do with the upgrade to VS2017.
Any help would be greatly appreciated. Thank you.
| |
doc_23535477
|
import { Directive } from '@angular/core';
import {AbstractControl, AsyncValidator, NG_ASYNC_VALIDATORS, ValidationErrors} from "@angular/forms";
import {Observable} from "rxjs";
import {IUser} from "app/core/user/user.model";
import {UserService} from "app/core/user/user.service";
@Directive({
"selector": '[uniqueUsername][ngModel],[uniqueUsername][FormControl]',
providers: [
{provide: NG_ASYNC_VALIDATORS, useExisting: UniqueUsernameDirective, multi: true}
]
})
export class UniqueUsernameDirective implements AsyncValidator{
constructor(private userService: UserService) { }
validate(control: AbstractControl, currentUser?: IUser): Promise<ValidationErrors | null> | Observable<ValidationErrors | null>{
const promise = new Promise<any>((resolve, reject) => {
if (currentUser?.login !== control.value) {
this.userService.userExits(control.value).subscribe(user => {
if (user?.login !== '') {
resolve(null);
}
},
response => {
resolve({'loginAlreadyExits': true});
});
}
});
return promise;
}
}
Here is the error I am getting:
/Users/aygunozdemir/IdeaProjects/inuka-ng/src/main/webapp/app/shared/validators/unique-username.directive.ts
10:49 error 'UniqueUsernameDirective' was used before it was defined @typescript-eslint/no-use-before-define
✖ 1 problem (1 error, 0 warnings)
Here is the document I followed:
https://weblog.west-wind.com/posts/2019/Nov/18/Creating-Angular-Synchronous-and-Asynchronous-Validators-for-Template-Validation
Also, how can I pass an input into the directive? Basically:
<input type="text" class="form-control" id="login" name="login" formControlName="login" uniqueUsername[???? shall I put 'currentUser' here]>
A: To use your uniqueUsername validation directive within your form input control apply it as follows:
<input type="text" class="form-control" id="login" name="login"
formControlName="login" uniqueUsername={{currentUser}}>
Where currentUser is a variable within your component that you wish to validate.
Also remember to declare your validation directive within your application module:
@NgModule({
declarations: [
. . .
UniqueUsernameDirective
],
...
| |
doc_23535478
|
I need to pass objects of different struct types from SV to C. And I want to have a single DPI function to handle any kind of supported struct.
On SystemVerilog side I have many types:
typedef struct { ... } stype1;
typedef struct { ... } stype2;
typedef struct { ... } stype3;
typedef struct { ... } stype4;
stype1 var1;
stype2 var2;
stype3 var3;
stype4 var4;
On C side I have a single function that accepts any type:
void send_sv_data(const char* type_name, void *ptr);
I've hoped to use it like this on SV side:
send_sv_data("stype1", var1);
send_sv_data("stype2", var2);
And it works in general, the only problem being that SystemVerilog does not support overloading. So I can only declare it for one type:
import "DPI" function void send_sv_data(string port_name, stype1 data);
I've tried to have chandle as argument:
import "DPI" function void send_sv_data(string port_name, chandle data);
But I have not found a way to cast a variable of a struct type to chandle.
A: There are no pointers, and no casting of pointers in SystemVerilog. The easiest thing to do is pack your SystemVerilog structs into an array of bytes, and unpack them to your C structs.
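A minimal sketch of the pack/unpack idea on the C side (the struct layout and function names here are hypothetical; in a real DPI setup the bytes would arrive via an open array argument from SV):

```c
#include <stdint.h>
#include <string.h>

/* C mirror of the SV struct; SV packs its fields into a byte array */
typedef struct {
    int32_t addr;
    int32_t data;
} stype1;

/* Dispatch on the type name and unpack the bytes into the matching struct.
   Returns 0 on success, -1 for an unknown type name. */
int unpack_sv_data(const char *type_name, const uint8_t *bytes, stype1 *out) {
    if (strcmp(type_name, "stype1") == 0) {
        memcpy(out, bytes, sizeof *out);
        return 0;
    }
    return -1;
}
```

The same dispatch can be extended with one branch per supported struct type, keeping a single DPI entry point.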
| |
doc_23535479
|
Private Sub DownloadFile(ByVal fname As String, ByVal forcedownload As Boolean)
Dim flpth As String
Dim fnm As String
Dim ext As String
Dim tp As String
flpth = System.IO.Path.GetFullPath(Server.MapPath(fname))
fnm = System.IO.Path.GetFileName(flpth)
ext = System.IO.Path.GetExtension(fnm)
tp = ""
If Not IsDBNull(ext) Then
ext = LCase(ext)
End If
Select Case ext
Case ".doc", ".rtf"
tp = "application/msword"
Case ".pdf"
tp = "application/pdf"
Case ".zip"
tp = "application/zip"
End Select
If (forcedownload) Then
Response.Clear()
Response.ClearContent()
Response.ClearHeaders()
Response.ContentType = tp
Response.AppendHeader("Content-Disposition", "attachment; filename=" + fnm) ' fnm already includes the extension
Response.TransmitFile(flpth)
Response.Flush()
Response.End()
End If
End Sub
I am trying to call the above code in the line below, but it's not working. Can anybody please help?
<a id="Click" runat="server" href="#" onclick="DownloadFile('files/Notes.doc',True)">Click here</a>
A: The correct way is to add a LinkButton by dragging and dropping it onto the page, then go to the button's properties and add the OnClick method; that is, create an event handler in the code-behind that calls DownloadFile.
A: 1) Try using a LinkButton.
2) You can't pass arguments back, you'll have to get the data at the server.
A: If you are flexible with the fact that it has to be an anchor tag directly, and don't mind creating it as a 'button', you could use the following:
ASP.Net Button in codebehind that calls codebehind function
If not, here is a solution that uses JavaScript and a postback to achieve a similar functionality. Personally I think using the above button solution would be more flexible and close to what you're wanting.
Can I call a code behind function from a div onclick event?
A: If you use the anchor tag as a server control you would need to set the onServerClick event. You can add custom tags to your anchor in order to be used.
<a id="hypDownload" href="javascript:void(0);" runat="server" onserverclick="hypDownload_ServerClick" filename="files/Notes.doc" forcedownload="true"></a>
Private Sub hypDownload_ServerClick(sender As Object, e As EventArgs) Handles hypDownload.ServerClick
Dim filename As String = hypDownload.Attributes("filename") 'Also: CType(sender, HtmlAnchor).Attributes("filename")
Dim forcedownload As Boolean = hypDownload.Attributes("forcedownload").ToString().ToLower() = "true"
DownloadFile(filename, forcedownload)
End Sub
A: place this javascript on the ASPX page.
<script type="text/javascript">
function DownloadFile() {
document.getElementById('<%= DownloadFile.ClientID %>').click();
}
</script>
place the button within a div tag whose display style is none. DO NOT set the display style of the button itself to none, as the javascript will not be able to find the button on the page.
<div style="display: none;">
<asp:button id="DownloadFile" runat="server" />
</div>
then setup you <a> tag as shown below:
<a href="javascript:DownloadFile();">link text</a>
then use your sub routine as the click event for the asp:button
having said all that, the asp:linkbutton option would result in a lot less code.
| |
doc_23535480
|
I want to integrate my app with a database. This database has a table with a field called 'status' which can change frequently (every couple of minutes or so).
I want my web app to know what the value of the status field is in real time.
My Current Method:
I currently have an AJAX function in JavaScript which sends an HTTP request to my Node.js server. The Node.js server selects the record from the database and then returns the result to the AJAX function, which displays the result on the screen. I have set up an interval to execute the function every second.
This doesn't seem like a great option and it is causing a LOT of traffic on my server.
What is the proper way to monitor the SQL field in real time?
A: If all your clients are interested in the same value in the database and your database situation prevents any non-polling scenario, then you will want to move the polling from the client to the server so the server can poll ONCE for all the clients. This will dramatically limit the load on both your server and on the database server.
Then, you can have each client connect to the server with a webSocket (or socket.io) connection, and when the value in the database changes (and only then), you can send an update message to each client over the webSocket connection. This will use far less server CPU and bandwidth, and far less client battery.
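The change-detection part of that server-side poll can be sketched as a small factory (the readStatus and notify callbacks are placeholders for the real database query and the webSocket broadcast):

```javascript
// Returns a function to run on each poll tick; it calls notify(value)
// only when the freshly read value differs from the last one seen.
function makeStatusWatcher(readStatus, notify) {
  let last;
  return function check() {
    const current = readStatus();
    if (current !== last) {
      last = current;
      notify(current); // e.g. io.emit('status', current) with socket.io
    }
  };
}

// Wire it up: poll the database once per interval, for ALL clients:
// setInterval(makeStatusWatcher(queryStatusFromDb, broadcastToClients), 1000);
```

This way one database read per interval serves every connected client, instead of one read per client per second.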
| |
doc_23535481
|
For example :
1 2 3 4 5 6 7 8 9 10 -> f(1)= 1 + 2 + .. + 8 + 9 + 1
1 2 3 .. 98 99 100 -> f(2)= 1 + 2 + .. + 8 + 9 + 1(1+0) + 11 + 12 + β¦ + 99 + 1
1 2 3 .. 98 99 100 β¦ 998 999 1000 -> f(3)= 1 + 2 + .. + 8 + 9 + 1 + 11 + 12 + β¦ + 99 + 1 + 2 + .. + 111 + β¦ + 1
NOTICE : 102 change to : 1 2 (1+0+2=3)
Rather be recursive!
A: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 1 + 11 + ... + n
=
(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 10 + 11 + ... + n)
- (10 + 20 + ... + n) { remove numbers ending in 0 }
+ (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 1 + 11 + ... + n/10) { add back in numbers with their 0's removed }
This gives a nice simple algorithm:
f :: Integer -> Integer
f 0 = 0
f n = n * (n +1) `div` 2 -- famous Euler formula
- 10 * n' * (n'+1) `div` 2 -- subtract the multiples of 10
+ f n'
where n' = n `div` 10
For example, we can see it run in ghci:
> mapM_ print [(n, f n) | n <- [9, 10, 11, 99, 100, 101]]
(9,45)
(10,46)
(11,57)
(99,4545)
(100,4546)
(101,4647)
It recurses only log(n) times, doing O(1) arithmetic operations at each recursion, so does O(log(n)) arithmetic operations total. (Indeed, it's a small enough computation that you could feasibly do it by hand for the first 1000 terms if you wanted.)
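The formula can be cross-checked against a brute-force version. Assuming the intended term rule is "replace every number ending in 0 by that number with its trailing zeros stripped" (which matches the ghci output above), a quick Python check:

```python
def strip_zeros(n):
    # 10 -> 1, 100 -> 1, 120 -> 12; other numbers are unchanged
    while n > 0 and n % 10 == 0:
        n //= 10
    return n

def f_naive(n):
    # O(n) brute force of the O(log n) formula above
    return sum(strip_zeros(k) for k in range(1, n + 1))

print([f_naive(n) for n in (9, 10, 11, 99, 100, 101)])
# [45, 46, 57, 4545, 4546, 4647]
```

The values agree with the ghci output for every n shown.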
| |
doc_23535482
|
I am able to create a standard survival curve without issue using the following code:
km <- svykm(Surv(rec_time,rec)~procedure, design=design.mnps)
plot(km)
I have looked online, and if it wasn't a svykmlist object, the following would work:
km <- svykm(Surv(rec_time,rec)~procedure, design=design.mnps, function(x) 1-x)
plot(km)
I get the following error:
1: In plot.window(...) : "FUN" is not a graphical parameter
, etc.
I have also tried FUN="event", which works fine with a traditional survey object but not the svykm object. I have also tried manipulating the survey object, which has been unsuccessful.
Is there anything I haven't tried yet that can work with the svykmlist object?
Thanks,
Matt
A: When you post on SO, please use a minimal reproducible example; the object design.mnps isn't usable by others. Please edit your question and clarify why you aren't simply inverting your function in the formula like this? Thanks.
example from ?svykm
library(survey)
data(pbc, package="survival")
pbc$randomized <- with(pbc, !is.na(trt) & trt>0)
biasmodel<-glm(randomized~age*edema,data=pbc)
pbc$randprob<-fitted(biasmodel)
dpbc<-svydesign(id=~1, prob=~randprob, strata=~edema, data=subset(pbc,randomized))
s2<-svykm(Surv(time,status>0)~I(bili>6), design=dpbc)
plot(s2)
sz<-svykm(Surv(time,status>0)~I( 1 - I(bili>6) ), design=dpbc)
plot(sz)
| |
doc_23535483
|
public static readonly DependencyProperty HtmlProperty = DependencyProperty.RegisterAttached(
"Html",
typeof(string),
typeof(Class1),
new FrameworkPropertyMetadata(OnHtmlChanged));
[AttachedPropertyBrowsableForType(typeof(WebBrowser))]
public static string GetHtml(WebBrowser d)
{
return (string)d.GetValue(HtmlProperty);
}
public static void SetHtml(WebBrowser d, string value)
{
d.SetValue(HtmlProperty, value);
}
static void OnHtmlChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
WebBrowser wb = d as WebBrowser;
if (wb != null)
wb.NavigateToString(e.NewValue as string);
}
and in my View my Property like that:
private string _html;
public string html1
{
get
{
string html = "html string here";
return _html + html;
}
set
{
_html = value;
NotifyPropertyChanged(nameof(html1));
}
}
and my XAML like that:
<WebBrowser local:Class1.Html="{Binding html}"></WebBrowser>
But it dosent show my html string. What am I doing wrong?
A: If it works when you set the attached property to a local value like this you know that your binding to html fails:
<WebBrowser local:Class1.Html="test..."></WebBrowser>
Also, you bind to a property called "html" but the one you have posted is named html1.
Ensure that you have set the DataContext of the control to a class that has a public html property that returns a string. Then your code should work.
| |
doc_23535484
|
That works fine with Chrome browser.
The CreateJS framework should work with iOS.
You can find my code here:
Link no more available
Do you have any idea?
Otherwise, do you have any other solution to create HTML5 animation for iPad from the web?
In addition, I need to add sound synchronized with the animation, so the animation must not lag; I need perfect performance.
Thanks.
A: Sorry, my first answer was not helpful. I took a deeper look here, and the issue is probably because of the size of the PNG. Your PNG is currently 4096x2048, which is double the acceptable size for iOS.
Ensure your images are less than 2048x2048 - you can use Zoë to export them from Flash, which will give you multiple images.
http://createjs.com/zoe
Cheers.
| |
doc_23535485
|
dateTextBox.Click();
dateTextBox.SendKeys(date);
driver.FindElement(By.Id("btnSave")).Click();
I have a page where I have a Telerik date control with a textbox. The user inputs a date into the text box and clicks the save button.
For some reason, when I run the test it enters the date into the text box, but when it goes to save, the date control's textbox is cleared, so nothing is present in it.
A: Finally figured out the answer to this.
Really the only way to input a value into the Telerik RadDatePicker is through a JavaScript command:
public void runScript(IWebDriver driver, String script)
{
IJavaScriptExecutor js = driver as IJavaScriptExecutor;
js.ExecuteScript(script);
}
public void setDatePicker(IWebDriver driver, String date, String Id)
{
String script = string.Format("$find('{0}').set_selectedDate(new Date('{1}'))", Id, date);
runScript(driver, script);
}
First I format a string for the JavaScript command, then the runScript method executes it. Hope this helps any future BAT developers who run into this.
| |
doc_23535486
|
The columns are created in the FacadeTableComponent based on someComponent, and passed as an input to the NgxDataTable.
But now I want to add a tree view to the datatable (that works) but I want the FacadeTableComponent to set a default first column with a button that does the tree collapsing/expanding. So I thought that the default column could be defined as a ngx-datatable-column in the FacadeTableComponent html template. The problem seems that the input columns seem to overwrite my default column.
Now I tried to somehow grab the ngx-datatable-column with a @ViewChild and unshift it to the generated columns in the FacadeTableComponent, but I am not able to grab it as a directive, rather as a ElementRef.
I could of course just define a default column in the .ts file and generate it when I generate the rest of the columns, but I want to know if it is possible to do it the way I described it in the title.
The reason for this default column is that when I set one of the existing columns as a treeView column, the ngx datatable adds a default button, even if I provide my own template to the cell, and I want to provide my own button in the template.
<ngx-datatable #ngxDataTable class="datatable" [rows]="collection" [columnMode]="columnMode"
[footerHeight]="footerHeight" [selectionType]="selectionType" [sortType]="sortType" [sorts]="sorts"
[rowIdentity]="rowIdentity" [messages]="tableMessages" [selected]="selectedRows" [columns]="columns"
[headerHeight]="headerHeight" [rowHeight]="rowHeight" [scrollbarV]="true" [scrollbarH]="true"
(select)="onSelect($event)" (activate)="onActive($event)" [externalSorting]="true" [rowClass]="rowCssClass"
[sortType]="'single'" [treeFromRelation]="'MasterId'" [treeToRelation]="'id'"
(treeAction)="onTreeAction($event)" (sort)="sortRows($event)" (resize)="resizeTriggered($event)">
<!-- Datatable tree view column -->
<ngx-datatable-column #column_tree_view name="Tree" [isTreeColumn]="true" [width]="150"
[treeLevelIndent]="20">
<ng-template ngx-datatable-tree-toggle let-tree="cellContext">
<button [disabled]="tree.treeStatus==='disabled'" (click)="tree.onTreeAction()">
<span *ngIf="tree.treeStatus==='loading'">
...
</span>
<span *ngIf="tree.treeStatus==='collapsed'">
β
</span>
<span *ngIf="tree.treeStatus==='expanded'">
β
</span>
<span *ngIf="tree.treeStatus==='disabled'">
β
</span>
</button>
</ng-template>
</ngx-datatable-column>
</ngx-datatable>
A: I already solved it, in case someone stumbles upon this question:
First we define an ng-template in the HTML of the FacadeTableComponent:
<ng-template #treeTemplate let-row="row">
<div *ngIf="row.IsTreeColumn" class="tree-cell">
<ng-container *ngTemplateOutlet="treeColumnTemplate; context: {row : row}"></ng-container>
<p-button (click)="onTreeAction(row)">
<span *ngIf="row.treeStatus==='collapsed'">
β
</span>
<span *ngIf="row.treeStatus==='expanded'">
β
</span>
</p-button>
    </div>
</ng-template>
After that I just grab the template with the @ViewChild decorator, define a column with the prop value of '_treeColumnName', which is provided as an input of the facade, and unshift it onto the columns array:
@ViewChild('treeTemplate') private _treeViewTemplate: TemplateRef<any>;
private createTreeViewColumn(): void {
const column: TableColumn= {
name: this._treeColumnName,
prop: this._treeColumnName,
} as TableColumn;
if (this.columns.findIndex((tablecolumn: TableColumn) => tablecolumn.prop === this._treeColumnName) === -1) {
// the isTreeColumn is not included in the TableColumn, therefore a cast is needed
const tablecolumn = column as TableColumn;
tablecolumn.isTreeColumn = true;
this.columns.unshift(tablecolumn);
}
}
private removeTreeViewColumn(): void {
const treecolumnindex = this.columns.findIndex((tablecolumn: TableColumn) => tablecolumn.prop === this._treeColumnName);
if (treecolumnindex !== -1) {
this.columns.splice(treecolumnindex, 1);
}
}
Finally I just add the implementation to the expand/collapse feature:
private readonly _treeStatusCollapsed: string = 'collapsed';
private readonly _treeStatusExpanded: string = 'expanded';
...
/** Expands or collapses a tree view row
*/
public onTreeAction(event: any): void {
const row = event;
if (row.treeStatus === this._treeStatusCollapsed) {
row.treeStatus = this._treeStatusExpanded;
} else {
row.treeStatus = this._treeStatusCollapsed;
}
// trigger a change detection
this.rows= [...this.rows];
this._changeDetectorRef.detectChanges();
}
And that is it, it works like a charm. Of course one could then provide whatever template they want, or change the icons via CSS classes.
| |
doc_23535487
|
In this page there is a button to export the table in excel.
When I click this button, the script prepare the string and send a POST request to PHP.
The sent string has a total of about 60000 characters.
When I do $_POST in the php page, the php server crashes.
I have already changed the php.ini file in the following way:
max_input_vars = 15000
memory_limit = -1
upload_max_filesize = 100M
max_file_uploads = 20
This still does not work.
Is there a method or workaround to resolve this issue?
I have tried modifying my code many times.
This is the code:
JAVASCRIPT AJAX REQUEST
function esporta(tipo,id,header,dati){
var dataString = "id="+id;
if(tipo == "excel"){
if(header != ""){
dataString += "&header="+header;
}else{
dataString += "&header=vuoto";
}
dataString += "&dati="+dati;
}
if($.trim(id).length>0){
$.ajax({
type: 'POST',
url: 'lib/excel-export.php',
data: dataString,
cache: false,
beforeSend: function(){
$('.se-pre-con').fadeIn('slow');
},
success: function(data){
$('.se-pre-con').fadeOut('slow');
if(data){
if(data == 'errore'){
alert('Errore!');
}else{
document.location = "lib/download.php?nomefile="+data;
}
}else{
alert('Errore! Mancata restituzione dei dati!');
}
}
});
}
}
PHP
...
$sheets = $wkb->Worksheets(1); #Select the sheet
$sheets->activate; #Activate it
$postheader = $_POST['header'];
$postdati = urldecode($_POST['dati']);
$header = explode(";",$postheader);
$dati = explode(";",$postdati);
...
The following code
$postdati = urldecode($_POST['dati']);
crashes the server (if I comment out this line, the server doesn't crash).
The Excel export contains only about 30000 characters.
Please note that I can't perform a database query in the PHP page; I have to take the data from that table.
I use the appserver.io web server on my local PC.
Thank you in advance.
A: 60,000 characters is merely 60KB, or a little bit more depending on character encoding.
Have you checked the post_max_size directive of your php.ini?
From php.ini's comments:
Maximum size of POST data that PHP will accept.
Its value may be 0 to disable the limit. It is ignored if POST data reading is disabled
through enable_post_data_reading.
http://php.net/post-max-size
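As an example, a php.ini sketch raising that limit (the values are illustrative; pick sizes that fit your data and restart the web server afterwards):

```ini
; php.ini
post_max_size = 16M        ; total POST body PHP will accept
upload_max_filesize = 8M   ; per-file limit; keep it <= post_max_size
```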
| |
doc_23535488
|
public class Customer
{
public string Name { get; set; }
public DateTime DOB { get; set; }
[System.ComponentModel.DataAnnotations.Schema.NotMapped] // <- this is what I want to do, but can't in PCL
public AccountCollection Accounts { get; set; }
}
The above has the "NotMapped" attribute, which is what I want - but it's not available in a portable class library (PCL). The thing is, the class I need is defined in an assembly that WILL be used on the portable device but it will be filled from entity framework on the web, which DOES have access to the NotMapped attribute. If I could find a way to add the property to EF's "NotMapped" list, that would be ideal.
Is there a way to get this to work? That is, a way to do what "NotMapped" does programmatically?
I've considered other workarounds, but none of them are ideal:
*
*Could create a DAL separate from my domain layer and translate
between the two (but requires mapping and two models instead of one)
*Could write custom EF queries and updates to ignore the property (but means writing all the linq/SQL/procs myself)
A: Found the answer in the context's OnModelCreating() override. Through the modelBuilder parameter it's possible to find the entity and ignore specific properties. This works even when the POCO is defined in a PCL.
For example:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
// Ignore Customer.Accounts
modelBuilder.Entity<Customer>().Ignore(c => c.Accounts);
}
| |
doc_23535489
|
SELECT
InventoryItem.ItemName as 'Part Number',
InventoryItemDescription.ItemDescription as 'Description',
InventoryStockTotal.UnitsInStock as 'In Stock',
InventoryStockTotal.WarehouseCode AS 'Warehouse',
InventoryItem.Status,
InventoryItem.AverageCost AS 'Average Cost',
(InventoryItem.AverageCost * UnitsInStock) AS 'Stock Value'
FROM dbo.InventoryItemDescription
INNER JOIN dbo.InventoryItem
ON InventoryItemDescription.ItemCode = InventoryItem.ItemCode
INNER JOIN dbo.InventoryStockTotal
ON InventoryStockTotal.ItemCode = InventoryItem.ItemCode
This gives me a result like this
| Part Number | Description | In Stock | Warehouse         | Status | Average Cost | Stock Value |
|-------------|-------------|----------|-------------------|--------|--------------|-------------|
| 555         | FILTER      | 0        | BRISBANE          | A      | 8.761043     | 0           |
| 555         | FILTER      | 187      | MAIN              | A      | 8.761043     | 1638.315041 |
| 555         | FILTER      | 0        | MELBOURNE         | A      | 8.761043     | 0           |
| 555         | FILTER      | 21       | PERTH             | A      | 8.761043     | 183.981903  |
| 555         | FILTER      | 0        | PATTISONS         | A      | 8.761043     | 0           |
| 555         | FILTER      | 12       | QLD Warehouse (1) | A      | 8.761043     | 105.132516  |
| 555         | FILTER      | 22       | SYDNEY            | A      | 8.761043     | 192.742946  |
However, I'm trying to write a query that will give me the TOTAL for each part number, as follows (obviously the warehouse code becomes redundant if I'm showing total for all warehouses)
This query groups by the part number.
| Part Number | Description    | In Stock | Status | Average Cost | Stock Value |
|-------------|----------------|----------|--------|--------------|-------------|
| 555         | WIX AIR FILTER | 242      | A      | 8.761043     | 2120.172406 |
The only query I've been able to get to work is this
SELECT
InventoryItem.ItemName as 'Part Number',
SUM(InventoryStockTotal.UnitsInStock) as 'In Stock',
AVG(InventoryItem.AverageCost) AS 'Average Cost',
(AVG(InventoryItem.AverageCost) * SUM(InventoryStockTotal.UnitsInStock)) AS 'Stock Value'
FROM dbo.InventoryItemDescription
INNER JOIN dbo.InventoryItem
ON InventoryItemDescription.ItemCode = InventoryItem.ItemCode
INNER JOIN dbo.InventoryStockTotal
ON InventoryStockTotal.ItemCode = InventoryItem.ItemCode
GROUP BY InventoryItem.ItemName
Which gives me this
| Part Number | In Stock   | Average Cost | Stock Value |
|-------------|------------|--------------|-------------|
| 555         | 242.000000 | 8.761043     | 2120.172406 |
THE PROBLEM
I need to include Item Description and Status code etc in the results table as well, except when I try and add them to the select statement it returns an error
Column 'dbo.InventoryItemDescription.ItemDescription' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I understand the cause of this error, because it's trying to select a column that isn't aggregated, but how can I get around this?
THE IDEAL SOLUTION
The description for the part numbers will be the same for every instance; is there some way I can instruct SQL to select only the first instance of the description?
** EDIT 2 **
Could I possibly utilise the SELECT DISTINCT function?
A: If the description is always the same for each part number, there's no harm in adding it to the GROUP BY clause; it won't split the groups.
SELECT
InventoryItem.ItemName as 'Part Number',
InventoryItemDescription.ItemDescription as 'Description',
    SUM(InventoryStockTotal.UnitsInStock) as 'In Stock'
FROM ...
GROUP BY InventoryItem.ItemName, InventoryItemDescription.ItemDescription
You can use SELECT DISTINCT, which is the same as GROUP BY.
Or you can leverage window function to calculate avg/total while still keeping every description or warehouse value.
WITH CTE AS
(
SELECT
InventoryItem.ItemName as 'Part Number',
InventoryItemDescription.ItemDescription as 'Description',
InventoryStockTotal.WarehouseCode AS 'Warehouse',
InventoryItem.Status,
SUM(InventoryStockTotal.UnitsInStock) OVER (PARTITION BY InventoryItem.ItemName) as 'In Stock',
        ROW_NUMBER() OVER (PARTITION BY InventoryItem.ItemName ORDER BY InventoryStockTotal.WarehouseCode) AS RowNumber
FROM
....
    -- NO GROUP BY
)
SELECT columns
FROM CTE
WHERE RowNumber = 1
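The first pattern (grouping by both the name and the description) can be sanity-checked with an in-memory SQLite sketch (schema and sample rows simplified from the tables above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stock (part TEXT, descr TEXT, warehouse TEXT,
                        units INTEGER, avg_cost REAL);
    INSERT INTO stock VALUES
        ('555', 'FILTER', 'MAIN',   187, 8.761043),
        ('555', 'FILTER', 'PERTH',   21, 8.761043),
        ('555', 'FILTER', 'SYDNEY',  22, 8.761043);
""")

# descr is functionally dependent on part, so adding it to GROUP BY
# does not split the groups; it just makes the column selectable
row = conn.execute("""
    SELECT part, descr, SUM(units) AS in_stock
    FROM stock
    GROUP BY part, descr
""").fetchone()
print(row)  # ('555', 'FILTER', 230)
```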
| |
doc_23535490
|
The .py file has the following:
from flask import Flask, render_template, flash, session, redirect, url_for
from flask_wtf import FlaskForm
from wtforms import StringField,SubmitField
from wtforms.validators import DataRequired
app = Flask(__name__)
app.config['SECRET_KEY'] = 'kmykey'
class SimpleForm(FlaskForm):
    username = StringField("Username:", validators=[DataRequired()])
    password = StringField("Password:", validators=[DataRequired()])
    submit = SubmitField("Submit")

@app.route('/', methods=['GET', 'POST'])
def index():
    form = SimpleForm()
    if form.validate_on_submit():
        session['username'] = form.username.data
        session['password'] = form.password.data
        return redirect(url_for('index'))
    return render_template('Login_Page.html', form=form)

if __name__ == '__main__':
    app.run()
The corresponding jinja template consists of:
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
<meta charset="utf-8">
<title>Ticket Booking</title>
<link rel= "stylesheet" type= "text/css" href= "{{ url_for('static',filename='Login_Page.css') }}">
<link href="https://fonts.googleapis.com/css2?family=Anton&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-9aIt2nRpC12Uk9gS9baDl411NQApFmC26EwAOH8WgZl5MYYxFfc+NcPb1dKGj7Sk" crossorigin="anonymous">
<!-- JS, Popper.js, and jQuery -->
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6QL9UvYjZE3Ipu6Tp75j7Bh/kR0JKI" crossorigin="anonymous"></script>
</head>
<body>
<div align="center" class="main1">
<form method="POST">
<h1>Railway Booking Portal</h1>
<h2>Welcome!</h2>
<br>
{# This hidden_tag is a CSRF security feature. #}
{{ form.hidden_tag() }}
{{ form.username.label(class_='uname') }} {{ form.username(placeholder='email here') }}
<br>
{{ form.password.label(class_='passwd') }} {{ form.password() }}
<br>
<a class="abc" href="Sign_Up.html"><u>SignUp</u></a>
<br>
<a class="abc1" href="Password_Reset.html"><u>ForgotPassword</u></a>
<br>
<br>
{{ form.submit() }}
<br>
<p>"One's destination is never a place, but a new way of seeing things." - Henry Miller</p>
</form>
    </div><!--main1-->
</body>
</html>
I'm successfully able to connect the HTML with the CSS, and the background image can be seen when the app is run through Flask; however, the username & password labels/fields are not getting formatted. The CSS file consists of the following:
body{
background: url(railway-tracks.jpeg);
background-repeat: no-repeat;
}
h1{
color: black;
}
p{
font-family: 'Anton', sans-serif;
font-size: 200%;
color: black;
}
.uname{
display: inline-block;
min-width: 90px;
color: red;
}
.passwd{
display: inline-block;
min-width: 90px;
color: red;
}
Kindly guide me on which part I'm missing or using incorrectly.
Thank you
(Updated)
A: You have:
class_='uname'
class_='passwd'
but style for:
.username{}
.password{}
The class names should match
UPDATE after OP edit
You havenβt added the class declarations to the labels. You added them to the fields. Try:
{{ form.username.label(class_='uname') }} {{ form.username(placeholder='email here') }}
{{ form.password.label(class_='passwd') }} {{ form.password() }}
or try:
{{ form.username.label, extra_classes='uname' }} {{ form.username(placeholder='email here') }}
{{ form.password.label, extra_classes='passwd' }} {{ form.password() }}
It works:
| |
doc_23535491
|
On the search page "/share/page/dp/ws/faceted-search" I have included an "AlfDynamicPayloadButton" button.
When the button is clicked it opens a dialog via "ALF_CREATE_DIALOG_REQUEST", with my custom widget inside of it.
I need the current search parameters in that widget to create a special visualisation.
My code in the file "faceted-search.get.js":
var myWidget = {
name : "alfresco/buttons/AlfDynamicPayloadButton",
config : {
label : "My Button",
useHash : true,
hashDataMapping : {
"hashFragmentParameterName" : "buttonPayloadProperty"
},
publishPayloadSubscriptions : [ {
topic : "ALF_SEARCH_REQUEST"
}],
publishTopic : "ALF_CREATE_DIALOG_REQUEST",
publishPayloadType : "PROCESS",
publishPayloadModifiers : [ "processDataBindings" ],
publishPayload : {
dialogTitle : "My Title",
widgetsContent : [ {
name : "myPackage/Example",
config : {
width : 400,
height : 500
// other configurations
}
}]
}
}
};
var widget = widgetUtils.findObject(model.jsonModel.widgets, "id",
"FCTSRCH_TOP_MENU_BAR");
widget.config.widgets.push(myWidget);
My Widget:
define(
[
"dojo/_base/declare",
"dijit/_WidgetBase",
"alfresco/core/Core",
"dijit/_TemplatedMixin",
"dojo/_base/lang",
"dojo/text!./html/Example.html"
],
function(declare, _Widget, Core, _Templated, lang, template) {
return declare([ _Widget, Core, _Templated ], {
templateString : template,
i18nRequirements : [ {
i18nFile : "./i18n/Example.properties"
} ],
cssRequirements : [ {
cssFile : "./css/Example.css"
} ],
constructor : function example__constructor() {
// the widget is created each time the button is pressed
this.alfSubscribe("ALF_SEARCH_REQUEST",
lang.hitch(this, this.upgradeSearchParameter));
this.inherited(arguments);
},
upgradeSearchParameter:
function example__upgradeSearchParameter(args){
// this line never run
this.searchParameter = args;
}
});
});
So far I have tried:
*
*Subscribe inside the widget to ALF_SEARCH_REQUEST. The problem with that is the widget hasn't been created yet when the topic is published.
*Include "alfresco/services/SearchService" as a dependency of my widget. With that I can access some information about the query like the "sort", "site", "sortAscending", etc., but not the "term".
*Include "alfresco/search/AlfSearchList" as a dependency of my widget. I have "searchTerm", but it is always empty "".
*Using "publishPayloadSubscriptions" in my button. All the information about the query parameters is inside the dialog, but there is no option to populate my widget with that information:
publishPayloadSubscriptions : [ {
topic : "ALF_SEARCH_REQUEST"
}],
Is there some way to get all the parameters of the last query in my custom widget?
A: I think you're on the right path with subscribing to the ALF_SEARCH_REQUEST topic. If you're finding that the initial ALF_SEARCH_REQUEST topic (on loading the page) is being published before your AlfDynamicPayloadButton has registered its subscription then you might want to consider upgrading to a more recent Aikau release.
We fixed an issue in the 1.0.68 release to ensure that no publications are fired until all widgets have completed loading on the page (this was actually to address the scenario where there are multiple Surf components on the page which shouldn't be the case on the faceted search page!). We work around this problem by having a shared singleton PubQueue that doesn't release any publications until after all widgets and services have been created.
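The idea behind that shared PubQueue singleton can be sketched outside Aikau: publications made before startup completes are buffered, and only flushed to subscribers once all widgets have registered. A simplified Python model (class and method names are mine, not Aikau's):

```python
# Simplified model of a "publish queue": publications are held back until
# startup is complete, so late-registering subscribers don't miss them.
class PubQueue:
    def __init__(self):
        self._subs = {}        # topic -> list of callbacks
        self._pending = []     # (topic, payload) buffered before release
        self._released = False

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        if self._released:
            for cb in self._subs.get(topic, []):
                cb(payload)
        else:
            self._pending.append((topic, payload))

    def release(self):
        # Called once all widgets/services have been created.
        self._released = True
        for topic, payload in self._pending:
            for cb in self._subs.get(topic, []):
                cb(payload)
        self._pending.clear()

queue = PubQueue()
received = []
queue.publish("ALF_SEARCH_REQUEST", {"term": "demo"})   # fired "too early"
queue.subscribe("ALF_SEARCH_REQUEST", received.append)  # widget registers late
queue.release()                                          # startup complete
print(received)  # the late subscriber still sees the buffered publication
```

This is the behavior the 1.0.68 fix provides: the subscription registered in the widget's constructor still receives the initial search request.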
I suppose this could depend on where you're creating your AlfDynamicPayloadButton - for example, if it's in the search results then it will be created after the ALF_SEARCH_REQUEST topic has been published.
Have you checked the DebugLog to ensure that the subscription in your button is being setup correctly (for example that there are no scoping issues?).
Have you verified that a fixed payload (with hard-coded data) will result in the dialog being displayed as you require? Have you checked whether the button is successfully subscribing to the topic but just not building the payload as required?
Could you also update your question to show the configuration for your AlfDynamicPayloadButton as this might help me figure out what the problem is.
A: OK, now that you've added the model I think I might be able to provide a better answer...
It looks like you've just copied the example from the JSDoc. Where you've set the hashDataMapping configuration you can actually reconfigure this to get the search term.
Each time you search, the search text is set on the URL as the searchTerm hash parameter. This means that with useHash configured to be true you can configure the AlfDynamicPayloadButton like this:
{
name: "alfresco/buttons/AlfDynamicPayloadButton",
config : {
label : "My Button",
useHash : true,
hashDataMapping : {
searchTerm: "widgetsContent.0.config.searchTerm"
},
publishPayloadSubscriptions: [],
publishTopic: "ALF_CREATE_DIALOG_REQUEST",
publishPayload: {
dialogTitle: "My Title",
widgetsContent: [
{
name: "myPackage/Example",
config: {
width : 400,
height : 500
// other configurations
}
}
]
}
}
};
The key thing here is that you are mapping the searchTerm URL hash value to the widgetsContent.0.config.searchTerm property of your publishPayload. This means that your custom widget within your dialog will be assigned the last search term that was used (to an attribute also called searchTerm that your widget can reference).
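The `widgetsContent.0.config.searchTerm` string is a dotted path into the payload object, where numeric segments index into arrays. A small Python sketch of how such a path-based assignment works (illustrative only, not Aikau's actual processDataBindings implementation):

```python
# Set a value inside a nested structure using a dotted path, where numeric
# segments index into lists -- the same shape as
# "widgetsContent.0.config.searchTerm" in the hashDataMapping above.
def set_by_path(obj, path, value):
    parts = path.split(".")
    for part in parts[:-1]:
        obj = obj[int(part)] if part.isdigit() else obj[part]
    last = parts[-1]
    if last.isdigit():
        obj[int(last)] = value
    else:
        obj[last] = value

payload = {
    "dialogTitle": "My Title",
    "widgetsContent": [{"name": "myPackage/Example", "config": {}}],
}
set_by_path(payload, "widgetsContent.0.config.searchTerm", "alfresco")
print(payload["widgetsContent"][0]["config"]["searchTerm"])
```

So the mapping walks into the first widget's `config` object and sets `searchTerm` there before the dialog request is published.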
| |
doc_23535492
|
I would prefer to publish the common things in a maven repo, but any other way is also welcome.
Clarification: What I am basically looking for is something like
apply from: "foo"
and foo is not a local file, but in a jar on the classpath.
A: *
*Beginner - Shared Gradle scripts:
One way to start could be to use a shared gradle package which
could sit in your version control system. This could contain the
common settings for all projects in .gradle files that can include
IDE, server, test coverage etc. settings - the boilerplate code.
Add these to your project and then refer/apply these settings via your
project's build script, eg:
apply from: <path to the shared gradle file>.
*Custom Gradle plugin:
You could write your own plugin and reuse it across projects. Here's a link for writing a custom gradle plugin for your enterprise needs.
http://www.gradle.org/docs/current/userguide/custom_plugins.html
| |
doc_23535493
|
I am not sure what could be advantages and disadvantages of providing .apk file?
Questions are :
*
*Will Google Play count a direct .apk installation as a download, when connected to the internet?
*Will users with a direct .apk installation get any update published later?
A: To answer your questions:
*
*Yes, you will get a download prompt if you click on an .apk in Android. When you go to open the completed download, it will offer it up for install (see caveats below)
*If you offer your .apk up for direct download outside of Google Play you get no "update checking" -- you have to do that yourself. Not entirely sure what happens if the .apk is available in the play store and via direct download.
It is easier to talk about the disadvantages for the approach of distributing the .apk yourself.
*
*You have to do all the tracking yourself, if you publish to the play store you get some statistics
*Similarly, you have to do all "update checking" on your own (either via writing it in your app or some other way.)
*No secure way of distributing your application. The built-in Android browser does not support downloads over HTTPS streams that require authentication **
*Easier for users to get the source code of your app. They can download the .apk from your site, open it in 7zip (or similar) and have at the underlying class files. Whether or not this is a concern is really for you to decide.
The most important reason
Your users will have to check "Allow installation of packages from unknown sources". Your average person might not know how to do this, and may be hesitant to do so. So, it may limit your ability to gain a wide market share.
So, in summary, ask yourself if not being in Google Play/Android Market is really worth the hassle that comes for both you and your users.
** Not sure if this is true with Chrome on Android -- it is certainly true with the older default browser
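On the "update checking on your own" point: the usual approach is to expose the latest versionCode from your own server and compare it with the installed one at startup. A minimal sketch of the comparison logic (the JSON shape and field names here are a made-up convention, not any standard):

```python
import json

# Minimal self-update check: compare the installed versionCode with the
# latest one advertised by your own server. The response shape
# ({"versionCode": ..., "apkUrl": ...}) is a hypothetical convention.
def update_available(installed_version_code, server_response_json):
    latest = json.loads(server_response_json)
    return latest["versionCode"] > installed_version_code

server_says = '{"versionCode": 42, "apkUrl": "https://example.com/app-42.apk"}'
print(update_available(41, server_says))  # True  -> prompt the user
print(update_available(42, server_says))  # False -> up to date
```

On Android the installed versionCode would come from PackageManager; the point here is just that the comparison and the hosting of the "latest version" metadata are entirely your responsibility outside the Play Store.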
A: *
*Google Play collects statistics only for apps installed through the Play Store, with a Google account logged in. Read the documentation on app statistics.
*Newer versions of the Play Store app can auto-detect if any installed app is also available on the Play Store, and will notify for the update.
Also, there are numerous third party app markets other than Play Store. You can upload your app there too (auto update is not available with all of them).
A: *
*Seems no, correct me if I am wrong :)
*Yes, provided that the package name is the same and the version code of the apk file you've uploaded to google play is larger than the one installed in the device.
| |
doc_23535494
|
The Upload view contains following
<div class="jumbotron">
<form action="/Upload"
class="dropzone"
id="dropzoneJsForm"
style="background-color:#00BFFF"></form>
</div>
And the Upload controller contains this
[HttpPost]
public async Task<IActionResult> Upload(IFormFile file, IHostingEnvironment _environment)
{
var uploads = Path.Combine(_environment.WebRootPath, "uploads");
if (file.Length > 0)
{
using (var fileStream = new FileStream(Path.Combine(uploads, file.FileName), FileMode.Create))
{
await file.CopyToAsync(fileStream);
}
}
return RedirectToAction("Index");
}
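Independent of why the action isn't being hit, one caveat in the controller above: `file.FileName` comes from the client, so combining it into a path unchecked can allow writing outside the uploads folder. The usual guard is to keep only the base name and verify the final path stays under the target directory, sketched here in Python for brevity (the same idea applies to Path.Combine in C#):

```python
import os

# Guard against path traversal when saving a client-supplied filename:
# keep only the base name, then verify the resolved path stays inside
# the uploads directory.
def safe_upload_path(uploads_dir, client_filename):
    name = os.path.basename(client_filename)  # strips "../" and directories
    full = os.path.abspath(os.path.join(uploads_dir, name))
    if not full.startswith(os.path.abspath(uploads_dir) + os.sep):
        raise ValueError("unsafe filename")
    return full

# The traversal attempt is neutralized: only the base name survives.
print(safe_upload_path("uploads", "../../etc/passwd"))
```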
When I run the application in debug mode and add files I see the progress bar and the checkmark indicating success. But no files have been uploaded to the upload folder in wwwroot.
I have app.UseStaticFiles() in the Configure method.
Target framework: .Net Core 2.0
I am using Visual Studio 2017 Professional latest version
A: In web.config:
<configuration>
<system.web>
<httpRuntime maxRequestLength="15360" requestLengthDiskThreshold="15360"/>
</system.web>
</configuration>
Also consider using the multiple attribute on the file upload control.
A: I found the mistake. A stupid beginner mistake. When I changed the name of the controller method to Index it worked.
| |
doc_23535495
|
At the very bottom of my Storyboard file, I see a block named "inferredMetricsTieBreakers", with a bunch of "segue" tags contained within. It seems that some segue in my local repo conflicts with another segue in the remote repo. To be safe, I could just "choose both". But since this happened once before, I'm afraid that it will keep happening, and I'll eventually have a long list of these segue references at the end of my storyboard file.
Just wondering if anyone is that intimately knowledgeable about these tags in the file, or whether I should just blindly continue to just "choose both" and ignore the issue.
Thanks,
-Dan.
A: Each view controller in a storyboard has "Simulated Metrics" which you can see in the attributes inspector:
Some of these metrics are inferred (thus, inferred metrics).
As mentioned by @thesystem, if a given view controller is the destination of multiple segues, there could be differences between the simulated metrics of the source view controllers of the segues. To address these differences IB picks a segue to break the tie when resolving the inferred metrics for the destination view controller.
@rick-pastoor's conclusion that it's safe to remove the entire inferredMetricsTieBreakers section is correct in that IB can just choose different tie-breaking segues. However, there is no guarantee that the new tie-breakers will lead to the same layout results in IB.*
For example, I had a situation in which, depending on the tie-breaking segue, a view controller was shown in IB either with or without a status bar. Its view maintained a height of 568pts in both cases, such that the position of the top layout guide kept changing. This snowballed into other undesired (and largely meaningless) changes to the frames of views constrained to the top layout guide.
Based on these observations, choosing both the new and old sets of inferred metrics is not advisable. Instead, remove both sets and then open the storyboard in IB to allow the ties to be broken before committing the merge. To avoid undesired frame changes due to a change in tie-breaking segue, pick some value other than "Inferred" for the relevant simulated metrics of the destination view controller. This will ensure that IB generates a consistent layout result.
* The results at runtime should be the same unless there is any logic that relies on the initial layout immediately after the view is unarchived.
A: Same thing going on here. Got errors in my storyboard file after using your "choose both" method. Found that searching the storyboard for the segue references resulted in one segue that was mentioned inside the inferredMetricsTieBreakers section. Removing the segue from the list solved my breaking build.
To try and find out what this inferredMetricsTieBreakers does, I tried removing the whole section first. Breaking my build. Next I removed all the items. During the build, Xcode added 2 new and different segues to my list (before the merge I had 3). The application I'm building is working fine.
My conclusion: it's safe to remove all the items and perform a clean build. This will keep your storyboard clean.
A: It looks like the tie breakers occur when, in the storyboard, one view controller is connected from two or more other view controllers via segues and its simulated metrics setting is set to "inferred", but Xcode cannot make sure that "inferred" means exactly one metrics setting (landscape or portrait) in every case.
I fixed it by changing all controllers simulated metrics to "inferred" and all metrics are inferred from a controller that has fixed simulated metrics setting "landscape". After that I removed the tie breaker segue ids from the section (but not the section itself).
A: Changed the metrics from Inferred to Freeform (Xcode 8, Swift 3). Solved my problem.
| |
doc_23535496
|
and we have received numerous reports from users that upload fails starting that date. Direct upload ( < 4 MB ) method worked OK and still works OK.
It seems Microsoft did some migration to the new Azure portal around that date. Could it be related, and does Azure not (yet) support session-based uploads? Any other idea what's wrong + how to restore the session-based upload?
Here's the relevant code section
DriveItemUploadableProperties props = new DriveItemUploadableProperties();
List<Option> options = new ArrayList<>();
if (_fileInfo.isNew())
options.add(new QueryOption("@microsoft.graph.conflictBehavior", "rename"));
else
options.add(new QueryOption("@microsoft.graph.conflictBehavior", "replace"));
String encodedName = OneDrive.buildURLEncodedPath(_fileInfo.name());
// This excepts with 400, bad request
UploadSession session = OneDrive.instance().client().getMe().getDrive().getItems(_fileInfo.parentId()).getItemWithPath(encodedName).getCreateUploadSession(props).buildRequest(options).post();
// ** never reaches this**
if (session != null) {
}
Here's the detailed log:
E/DefaultHttpProvider[sendRequestInternal] - 333: OneDrive Service exception POST https://graph.microsoft.com/v1.0/me/drive/items/[removed]:/Direct
%20upload%20Conflict%20Copy.smmx:/microsoft.graph.createUploadSession?%40microsoft.graph.conflictBehavior=replace
SdkVersion : graph-android-v1.7.0
E/DefaultHttpProvider[sendRequestInternal] - 333: Authorization : bearer [removed]
{"item":{}}
E/DefaultHttpProvider[sendRequestInternal] - 333: 400 : Bad Request
E/DefaultHttpProvider[sendRequestInternal] - 333: X-Android-Selected-Protocol : http/1.1
E/DefaultHttpProvider[sendRequestInternal] - 333: Strict-Transport-Security : max-age=31536000
Cache-Control : private
x-ms-ags-diagnostic : {"ServerInfo":{"DataCenter":"West Europe","Slice":"SliceC","Ring":"5","ScaleUnit":"001","RoleInstance":"AGSFE_IN_21"}}
E/DefaultHttpProvider[sendRequestInternal] - 333: client-request-id : [removed]
X-Android-Response-Source : NETWORK 400
X-Android-Sent-Millis : 1573630667766
E/DefaultHttpProvider[sendRequestInternal] - 333: request-id : 22625e49-3b7f-466a-a75b-431a0b300c99
Content-Length : 212
E/DefaultHttpProvider[sendRequestInternal] - 333: X-Android-Received-Millis : 1573630668566
Date : Wed, 13 Nov 2019 07:37:53 GMT
Content-Type : application/json
E/DefaultHttpProvider[sendRequestInternal] - 333: {
"error": {
"code": "invalidRequest",
E/DefaultHttpProvider[sendRequestInternal] - 333: "message": "Bad Argument",
"innerError": {
E/DefaultHttpProvider[sendRequestInternal] - 333: "request-id": "[removed]",
"date": "2019-11-13T07:37:53"
E/DefaultHttpProvider[sendRequestInternal] - 333: }
}
E/DefaultHttpProvider[sendRequestInternal] - 333: }
E/DefaultHttpProvider[sendRequestInternal] - 333: Throwable detail:
com.microsoft.graph.http.GraphServiceException: POST https://graph.microsoft.com/v1.0/me/drive/items/[removed]:/Direct%20upload%20Conflict
%20Copy.smmx:/microsoft.graph.createUploadSession?%40microsoft.graph.conflictBehavior=replace
SdkVersion : graph-android-v1.7.0
Authorization : bearer [removed]
{"item":{}}
400 : Bad Request
[...]
[Some information was truncated for brevity, enable debug logging for more details]
at com.microsoft.graph.http.DefaultHttpProvider.handleErrorResponse(DefaultHttpProvider.java:357)
at com.microsoft.graph.http.DefaultHttpProvider.sendRequestInternal(DefaultHttpProvider.java:294)
at com.microsoft.graph.http.DefaultHttpProvider.send(DefaultHttpProvider.java:190)
at com.microsoft.graph.http.DefaultHttpProvider.send(DefaultHttpProvider.java:170)
at com.microsoft.graph.http.BaseRequest.send(BaseRequest.java:272)
at com.microsoft.graph.generated.BaseDriveItemCreateUploadSessionRequest.post(BaseDriveItemCreateUploadSessionRequest.java:43)
at [removed].onedrive.OneDriveFileUploader.doInBackground(OneDriveFileUploader.java:77)
at [removed].onedrive.OneDriveFileUploader.doInBackground(OneDriveFileUploader.java:23)
at android.os.AsyncTask$2.call(AsyncTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:246)
at java.util.concurrent.ThreadPoolExecutor.processTask(ThreadPoolExecutor.java:1187)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:784)
A: It's not immediately obvious why you're seeing a change in behavior, but you may be able to work around it by shifting the @microsoft.graph.conflictBehavior annotation from a query string parameter to being in the body instead:
POST https://graph.microsoft.com/v1.0/me/drive/root:/test.txt:/createUploadSession
{
"item": {
"@microsoft.graph.conflictBehavior": "replace"
}
}
See https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#request-body
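The raw request body the workaround describes is easy to sanity-check. A small stdlib-only sketch that builds exactly the JSON the createUploadSession endpoint expects (in the Java SDK shown in the question you would need to get this annotation into the properties object rather than a QueryOption; check your SDK version's additional-data mechanism for the exact call):

```python
import json

# Build the createUploadSession request body with the conflictBehavior
# annotation inside "item", as the Graph docs describe, instead of as a
# query-string parameter.
def upload_session_body(conflict_behavior):
    return json.dumps(
        {"item": {"@microsoft.graph.conflictBehavior": conflict_behavior}}
    )

print(upload_session_body("replace"))
```

Posting this body to `.../createUploadSession` (no `?@microsoft.graph.conflictBehavior=...` on the URL) is the shape the documentation link above specifies.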
| |
doc_23535497
|
So two questions: Why is this the case, and how can I prevent this?
Thank you.
A: MongoDB treats all number literals as floating point by default, and above a certain threshold (32 bits?) it switches to scientific notation when exporting to JSON.
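The underlying precision issue is easy to demonstrate: an IEEE-754 double (what plain number literals become) represents integers exactly only up to 2^53, after which distinct integers collapse onto the same value, and large values render in scientific notation by default. A quick Python check:

```python
# IEEE-754 doubles hold integers exactly only up to 2**53; beyond that,
# distinct integers collapse onto the same float value.
exact_limit = 2 ** 53

print(float(exact_limit) == float(exact_limit + 1))   # True: precision lost
print(float(exact_limit - 1) == float(exact_limit))   # False: still exact

# Large float values also render in scientific notation by default:
print(float(10 ** 17))
```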
A: For dumping data you can use the --eval option of the mongo command-line client together with the jq command. That does not give data in scientific notation.
For example:
mongo --quiet localhost:5010/user --username alok --password alok --authenticationDatabase admin --eval 'db.users.find({}, {_id: 1, username:1, name: 1}).limit(10).toArray().map(JSON.stringify).join("\n")' | while read item; do echo $(echo $item | jq ._id),$(echo $item | jq .username),$(echo $item | jq .name); done > data.csv
| |
doc_23535498
|
extension String {
func join<S : SequenceType where String == String>(elements: S) -> String
}
I would expect it to be
extension String {
func join<S : SequenceType where S.Element == String>(elements: S) -> String
}
because I think the intent should be to ensure the elements in a sequence are String when passed to the function join.
So my questions are:
*
*Is the function declaration in the Xcode 6 GM correct?
*If so, why does it make sense?
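For comparison, the intent described above - a join whose elements are constrained to be strings - can be sketched in Python, with type hints standing in for the Swift generic constraint (in released Swift 1.x the working spelling was `S.Generator.Element == String`, since a SequenceType exposes its element type through its generator):

```python
from typing import Iterable

# Equivalent intent to the Swift declaration
#   func join<S : SequenceType where S.Generator.Element == String>(...)
# -- "a separator string, and any sequence whose elements are strings".
def join(separator: str, elements: Iterable[str]) -> str:
    return separator.join(elements)

print(join(", ", ["a", "b", "c"]))  # a, b, c
```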
| |
doc_23535499
|
self.< IBOutlet property > = nil;
in -(void)viewDidUnload
If I'm using iOS 4 (and using ARC) and forced to use unsafe_unretained instead, does it mean I have to override viewDidUnload and set the property to nil manually?
EDIT:
This relates to my case: Should IBOutlets be strong or weak under ARC?
The exception being: I can't use the 'weak' keyword which creates the zeroing reference.
Hope my question is clearer.
Thanks
A: When using ARC, as I am sure you have realized, the weak attribute cannot be used pre-iOS 5. The other side of that coin would be to use unsafe_unretained. Weak attributes will automatically set your properties to nil. unsafe_unretained (aka "assign" in pre-iOS 5) will not, and you need to do this yourself.
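The difference described here - weak references that automatically become nil versus unsafe ones that dangle - has a close analogue in Python's zeroing weak references, which makes the behavior easy to see outside Objective-C:

```python
import weakref

# A zeroing weak reference behaves like ARC's `weak`: once the target is
# deallocated, the reference reads back as None instead of dangling.
class View:
    pass

view = View()
ref = weakref.ref(view)

print(ref() is view)  # True: target still alive
del view              # last strong reference gone -> target deallocated
print(ref() is None)  # True: the weak reference was zeroed automatically
```

unsafe_unretained is the case with no such zeroing: the pointer keeps its old value after deallocation, which is why you must nil it out manually in viewDidUnload.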
A: Without a property (in iOS) the IBOutlet ivar is set and retained by KVC. With a @property the ivar is set by setting the property.
On an ARC project, if one creates a nib and drags an item (say a UILabel) to the .h file, a strong @property will be added, as well as a line in the .m file setting the property to nil in the viewDidUnload method and a @synthesize statement for the property.
There are other ways to handle the retaining of nib IBOutlets that work and may even be better by some metric.
From the Apple document Resource Programming Guide - Managing the Lifetimes of Objects from Nib Files:
Because the behavior of outlets depends on the platform, the actual
declaration differs:
For iOS, you should use:
@property (nonatomic, retain) IBOutlet UserInterfaceElementClass *anOutlet;
For OS X, you should use:
@property (assign) IBOutlet UserInterfaceElementClass *anOutlet;
My belief is to not fight the way Apple does things; doing so tends to make things harder. Also consider that Apple has inside information on the future of the platform. :-)
|