| id | text | title |
|---|---|---|
doc_23533200
|
Thanks!
A: You can use a Data URI Scheme to convert the JSON to a "URL" which can then be downloaded, e.g.
/* Firstly you'll need a javascript library which can encode data to base 64
* http://archive.plugins.jquery.com/project/base64 - jQuery Plugin
* http://www.webtoolkit.info/javascript-base64.html - Collection of functions
* http://stackoverflow.com/a/6740027/451672 - '' '' "
*/
/* Now we must convert the JSON to text, which can be done with the
JSON.stringify() method
* http://msdn.microsoft.com/en-us/library/cc836459(v=vs.85).aspx
*/
var data = JSON.stringify(myObject);
/* Now we convert the data to a Data URI Scheme, which must be Base64 encoded
make sure you use the appropriate method to Base64 encode your data depending
on the library you chose to use.
* application/octet-stream simply tells your browser to treat the URL as
arbitrary binary data, and won't try to display it in the browser window when
opened.
*/
var url = "data:application/octet-stream;base64," + Base64.encode(data);
/* To force the browser to download a file we need to use a custom method which
creates a hidden iframe, which allows browsers to download any given file
* http://stackoverflow.com/a/3749395/451672
*/
var downloadURL = function(url)
{
var iframe;
iframe = document.getElementById("hiddenDownloader");
if (iframe === null)
{
iframe = document.createElement('iframe');
iframe.id = "hiddenDownloader";
iframe.style.display = "none";
document.body.appendChild(iframe);
}
iframe.src = url;
}
/* Now downloading is as simple as executing the following code; make sure that
the DOM is not modified until after the page has finished loading!
*/
window.onload = function()
{
var link = document.getElementById("downloadJSON");
link.onclick = function()
{
downloadURL(url);
}
}
jsFiddle: http://jsfiddle.net/cQV7X/
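For reference, the markup that goes with the snippet above only needs a clickable element with the downloadJSON id (everything else on the page is up to you):
<a href="#" id="downloadJSON">Download JSON</a>
<!-- the hidden iframe with id "hiddenDownloader" is created on demand by downloadURL() -->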
A: console.log(JSON.stringify([1,2,3,4,5]));
Open gedit or another text editor. Copy from the console. Paste into a file. Save as array.json.
EDIT
Chrome supports copy so you could do copy("foobar") and "foobar" will be on your clipboard.
A: There's a jQuery plugin jQuery.twFile that allows you to read and write to a local file.
| |
doc_23533201
|
<VBox fx:id="box" spacing="15" styleClass="sectionStyle">
<StackPane>
<TablePagination fx:id="pagination" StackPane.alignment="CENTER"/>
</StackPane>
</VBox>
nothing appears. But when I do it in code like this:
pagination = new TablePagination(itemTable,items);
StackPane pane = new StackPane();
pane.setAlignment(pagination, Pos.CENTER);
pane.getChildren().add(pagination);
box.getChildren().add(pane);
My control gets rendered but not in center. So what am I missing?
A: In your code version TablePagination is centered within StackPane, but nodes inside TablePagination are not. Call:
pagination.setAlignment(Pos.CENTER);
Note that StackPane.setAlignment method is static, and you sould call:
StackPane.setAlignment(pagination, Pos.CENTER);
| |
doc_23533202
|
private void doSomething(Object a, Object b){
var myLocalVar = deriveValFrom(a);
if (null == myLocalVar){
myLocalVar = deriveValFrom(b);
}
LOG.debug(() -> String.format("settled on value %s", myLocalVar));
}
The code above does not compile, since myLocalVar is neither final nor effectively final.
current ideas
As the answers to this question show, I might define a temp variable and thus bloat the code.
Otherwise I could implement a private method like
private void logToDebug(String formatStr, Object p0, Objects... objs){
LOG.debug(()->String.format(formatStr, p0, objs));
}
Which makes the compiler stop complaining but adds 'off-topic' code to the class.
actual Q
Is there a better way to achieve lazy evaluation and concise code?
relevant info
While logging is done under the hood by log4j I have to work through a custom facade which I may extend (for instance with the debug(String formatStr, p0, ...)-Methods) but I would like to keep the extension to a minimum.
A: I'd split logic into two methods:
private Object doSomething(Object a, Object b){
final var myLocalVar = deriveValFrom(a);
if (myLocalVar != null)
return myLocalVar;
return deriveValFrom(b);
}
private void doSomethingAndLog(Object a, Object b){
final var myLocalVar = doSomething(a, b);
LOG.debug(() -> String.format("settled on value %s", myLocalVar));
}
A: It depends a bit on which part of the execution you want to be lazy.
A final local would normally be initialized with a single expression, and Optional is very suitable for that.
But you can also do everything inside the log call itself.
private void doSomething(Object a, Object b) {
    // bind the value once; the local is effectively final and safe to capture
    final var myLocalVar = Optional.ofNullable(deriveValFrom(a))
            .orElseGet(() -> deriveValFrom(b));
    // ...or do everything inside the log call itself
    LOG.debug(() -> "settled on value %s".formatted(
            Optional.ofNullable(deriveValFrom(a))
                    .orElseGet(() -> deriveValFrom(b))));
}
| |
doc_23533203
|
I would like to have my web.config entries customizable per user/machine/environment.
I could have my configurable/changeable entries marked in the web.config and would like those entries overridden by the respective user/environment file and would like to have an order that decides which entries should trump the other if the entry is found in multiple files.
for eg: web.config has a $connectionstring entry and the customization files per user/environment could have the potential values to replace $connectionstring depending on the context/configuration the solution is built
which means, I could have a set of files like below:
user_joe.config
$connectionstring = db_where_joe_like_to_connect_to
staging.config
$connectionstring = db_where_staging_connect_to
production.config
$connectionstring = db_production
so if joe is compiling the solution from his Dev box, the web.config should have the value "db_where_joe_like_to_connect_to" for $connectionstring.
I am hoping there could be a solution that doesn't involve Nant.
hope someone can throw pointers.
A: You can use visual studio 2010's web.config transform settings.
http://weblogs.asp.net/gunnarpeipman/archive/2009/06/16/visual-studio-2010-web-config-transforms.aspx
This will allow each developer to have their portion of a web.config that can get merged in for their build settings.
Internally we use an event that was pieced together from various places on the net- since normally this happens during publishing and we wanted it to happen at compile time.
Add a BeforeBuild target
So - from the csproj file:
<Target Name="BeforeBuild">
<TransformXml Source="$(SolutionDir)Web.config" Transform="$(SolutionDir)Web.$(Configuration).config" Destination="$(SolutionDir)Web.$(Configuration).config.transformed" />
</Target>
<PropertyGroup>
<PostBuildEvent>xcopy "$(SolutionDir)Web.$(Configuration).config.transformed" "$(SolutionDir)Web.config" /R /Y</PostBuildEvent>
</PropertyGroup>
A: I would suggest using the configSource attribute in the web.config entries for debug builds. Then, in your test and release builds you can use data transformations to insert the testing and production entries.
You would do something like this:
<connectionStrings configSource="myLocalConnectionStrings.cfg" />
Then you have a local file called myLocalConnectionStrings that you don't check into source control. In your Web.config.Release you simply transform the connectionStrings section to include the production strings and remove the configSource attribute.
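That release transform might look roughly like this (the connection-string name and value are illustrative); xdt:Transform="Replace" swaps in the whole section and thereby drops the configSource attribute:
<connectionStrings xdt:Transform="Replace">
  <add name="Default" connectionString="db_production" providerName="System.Data.SqlClient" />
</connectionStrings>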
A: As Adam said in his answer, you can kind of do this using web.config transforms. Basically you'd have to create a new solution configuration for each environment. Note that having one for each developer will likely quickly become unmaintainable, as each configuration / platform combination can have its own build settings.
Also, the transforms are ONLY applied during the web site packaging (calling the Package target). So if you're trying to use this so that joe and sally can have different configs on their own machine, this won't do that for you.
In that case you're probably better off trying to get everyone on the same configuration, than allowing configs to fragment. The more differences between each environment, the harder time you'll have deploying.
A: Here is a T4 solution. This worked for my case because this was an internal tool that would only be used by developers and because I don't need further processing for the "included" files.
File name App.tt.
<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ import namespace="System" #>
<#@ import namespace="System.IO" #>
<#@ output extension=".config" #>
<#
string pathToConfigurations = Host.ResolvePath("Configurations");
string pathToMachine = Path.Combine(pathToConfigurations, Environment.MachineName + ".config");
if (File.Exists(pathToMachine))
{
Write(File.ReadAllText(pathToMachine));
}
else
{
Write(File.ReadAllText(Path.Combine(pathToConfigurations, "App.config")));
}
#>
| |
doc_23533204
|
TypeError: this.attrs.container is undefined
[Break On This Error]
this.attrs.container.appendChild(this.content);
Please help me out with this.
A: https://github.com/ericdrowell/KineticJS/wiki/Change-Log
What you really have to do is go through this change log and look at what you have in your code vs what's in the newer versions.
My guess, from looking at your code, is that Kinetic.Shape no longer supports drawing images; that's what Kinetic.Image is for.
| |
doc_23533205
|
if (currentToken) {
$('#fcm_id').val(currentToken);
sendTokenToServer(currentToken);
updateUIForPushEnabled(currentToken);
} else {
// Show permission request.
console.log('No Instance ID token available. Request permission to generate one.');
// Show permission UI.
updateUIForPushPermissionRequired();
setTokenSentToServer(false);
}
The code is working fine with PC browsers, but the fcm_id is not generated when I try to log in with a mobile browser. I have searched a lot but could not find anybody with that type of issue.
Can anyone help me to get out of this issue?
| |
doc_23533206
|
Sometimes I need to retrieve some rows ordered by date_time DESC, but I also need them sorted randomly if there are two or more rows with the exact same time.
This is my query:
SELECT * FROM ads ORDER BY created_at DESC, RAND()
But when I test it, it always gives the same order, even if I manually edit three rows to have the exact same date_time value. I need those rows randomly ordered if they have the same time.
Complete query:
$query = "SELECT items.*,
user_data.s_name,
user_data.s_email,
user_data.s_phone_mobile,
item_info.s_title,
item_info.s_description,
item_region.fk_i_region_id,
pictures.pk_i_id AS picture_name, pictures.s_extension, pictures.s_path,
(SELECT GROUP_CONCAT(meta.s_value SEPARATOR '|$|') FROM oc_t_item_meta meta WHERE meta.fk_i_item_id = items.pk_i_id) AS metadata
FROM " . DB_TABLE_PREFIX . "t_item items
JOIN " . DB_TABLE_PREFIX . "t_user user_data ON items.fk_i_user_id = user_data.pk_i_id
JOIN " . DB_TABLE_PREFIX . "t_item_description item_info ON items.pk_i_id = item_info.fk_i_item_id
JOIN " . DB_TABLE_PREFIX . "t_item_location item_region ON items.pk_i_id = item_region.fk_i_item_id
LEFT OUTER JOIN " . DB_TABLE_PREFIX . "t_item_resource pictures ON items.pk_i_id = pictures.fk_i_item_id
WHERE items.fk_i_category_id = " . $catId . "
AND items.dt_mod_date > '" . $week . " 00:00:00'
GROUP BY items.pk_i_id
ORDER BY items.dt_mod_date DESC, RAND()";
Thanks!
A: As RAND will make your query run extremely slow, you could instead sort in PHP. For example, the following code will sort the dates descending, or randomly if they are the same:
$data = [
['id' => 1, 'dt_mod_date' => '2017-01-01'],
['id' => 2, 'dt_mod_date' => '2017-01-01'],
['id' => 3, 'dt_mod_date' => '2017-01-01'],
['id' => 5, 'dt_mod_date' => '2017-01-03'],
['id' => 4, 'dt_mod_date' => '2017-01-02'],
];
usort($data, function ($a, $b) {
    if ($a['dt_mod_date'] === $b['dt_mod_date']) {
        return mt_rand(0, 1) ? 1 : -1; // random order for equal dates
    }
    // newest first (a comparator should return an int, not a bool)
    return strtotime($b['dt_mod_date']) <=> strtotime($a['dt_mod_date']);
});
var_dump($data);
// Result
// array(5) {
// [0] =>
// array(2) {
// 'id' =>
// int(5)
// 'dt_mod_date' =>
// string(10) "2017-01-03"
// }
// [1] =>
// array(2) {
// 'id' =>
// int(4)
// 'dt_mod_date' =>
// string(10) "2017-01-02"
// }
// [2] =>
// array(2) {
// 'id' =>
// int(1)
// 'dt_mod_date' =>
// string(10) "2017-01-01"
// }
// [3] =>
// array(2) {
// 'id' =>
// int(3)
// 'dt_mod_date' =>
// string(10) "2017-01-01"
// }
// [4] =>
// array(2) {
// 'id' =>
// int(2)
// 'dt_mod_date' =>
// string(10) "2017-01-01"
// }
// }
| |
doc_23533207
|
but I don't understand where it can be changed, as I don't have the xml files.
Can anyone please help me find this?
A: I had to do the same thing in Magento go, so what I did was changed the blocks' position using theme editor and increased the width of main column using css.
A: Go to \app\design\frontend\default\your theme\layout/catalogsearch.xml
At line 52:
<action method="setTemplate"><template>page/3columns.phtml</template></action>
change into:
<action method="setTemplate"><template>page/2columns-left.phtml</template></action>
| |
doc_23533208
|
Now, I have heard of the Service class that Android has, but since my class already extends another class, I cannot also extend Service. Do you know of any solution for this small problem?
Currently my code is as follows:
public class MainActivity extends DroidGap {
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
super.init();
super.setBooleanProperty("keepRunning", true);
super.setIntegerProperty("loadUrlTimeoutValue", 50000);
super.loadUrl("file:///android_asset/www/index.html",50000);
}
}
Thanks,
Keith Spiteri
A: Open your config.xml in "res/xml/config.xml" and locate for "preference" tags.
Add this line/tag before or after other preferences tags.
<preference name="keepRunning" value="true" />
This is a simple solution for your question: the app will keep running when your application is hidden, and it will close when you close the application.
That's it.
I hope I helped you.
Att. Rodrigo
| |
doc_23533209
|
fig, ax = plt.subplots(figsize=(16,8))
y1 = sns.lineplot('game_seconds_remaining', 'home_wp', data=vb, color='#4F2683',linewidth=2)
y2 = sns.lineplot('game_seconds_remaining', 'away_wp', data=vb, color='#FB4F14',linewidth=2)
x = plt.axhline(y=.50, color='white', alpha=0.7)
ax.fill_between(x, y1, y2, where=(y1 > x), color='C0', alpha=0.3, interpolate=True)
Output: TypeError: '>' not supported between instances of 'AxesSubplot' and 'AxesSubplot'
Why is this not working for me when it works in the documentation? What I want to do is shade any area that dips below the horizontal line (x) and any area above it. Any help is greatly appreciated. Thanks!
A: Using dummy data:
# imports assumed by this snippet
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# dummy dataframe
x = np.linspace(0, 2*np.pi, 100)
df = pd.DataFrame({
    'a': x,
    'b': np.cos(x),
    'c': np.sin(x),
})
fig = plt.figure()
ax=fig.add_subplot(111)
sns.lineplot('a', 'b', data=df, ax=ax, label='b')
sns.lineplot('a', 'c', data=df, ax=ax, label='c')
ax.fill_between(df['a'], 0.5, df['b'], where=df['b']>.5)
ax.fill_between(df['a'], 0.5, df['c'], where=df['c']>.5)
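To also shade the regions that dip below the line, the same calls with the condition reversed should work:
ax.fill_between(df['a'], 0.5, df['b'], where=df['b']<.5)
ax.fill_between(df['a'], 0.5, df['c'], where=df['c']<.5)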
| |
doc_23533210
|
Is it possible to call adb command to click on this view?
I know it's possible to make it click on a specific coordinate, but is it possible to use the ID instead ?
I ask this because I do know that the "Layout Inspector" tool (available via Android Studio) and the "View hierarchy" tool (available via "Android Device Monitor", previously used via DDMS) can show the ids of the views (and even their coordinates and bounding box), so maybe it can be a better way to simulate touches when performing some automatic tests.
I can use a rooted method if needed.
EDIT: I've set a bounty in case there is an easier/better way than what I've written in my own answer, which was to parse the result of "adb shell dumpsys activity top" .
I would like to know if it's possible to get the views coordinates (and sizes of course) that are shown on the screen, including as much information about them (to identify each).
This should be possible via the device too. Maybe something that has the same output data as what's available from the "monitor" tool:
Notice how it can get the basic information of the views, including the text, the id, and the bounds of each.
As I've read, this might be possible via AccessibilityService, but sadly I can't understand how it all works, what its capabilities are, how to trigger it, what its requirements are, etc...
A: Using what @pskink explained in the comments above, here's how I achieved this:
First, I ran this command:
adb shell dumpsys activity top
Then, I used this code to parse it:
public class ViewCoordsGetter {
public static Rect getViewBoundingBox(String viewIdStr) {
final List<String> viewHierarchyLog = ...; // the output lines of "adb shell dumpsys activity top"
for (int i = 0; i < viewHierarchyLog.size(); ++i) {
String line = viewHierarchyLog.get(i);
if (line.contains(":id/" + viewIdStr + "}")) {
Rect result = getBoundingBoxFromLine(line);
if (i == 0)
return result;
int currentLineStart = getStartOfViewDetailsInLine(line);
for (int j = i - 1; j >= 0; --j) {
line = viewHierarchyLog.get(j);
if ("View Hierarchy:".equals(line.trim()))
break;
int newLineStart = getStartOfViewDetailsInLine(line);
if (newLineStart < currentLineStart) {
final Rect boundingBoxFromLine = getBoundingBoxFromLine(line);
result.left += boundingBoxFromLine.left;
result.right += boundingBoxFromLine.left;
result.top += boundingBoxFromLine.top;
result.bottom += boundingBoxFromLine.top;
currentLineStart = newLineStart;
}
}
return result;
}
}
return null;
}
private static int getStartOfViewDetailsInLine(String s) {
int i = 0;
while (true)
if (s.charAt(i++) != ' ')
return --i;
}
private static Rect getBoundingBoxFromLine(String line) {
int endIndex = line.indexOf(',', 0);
int startIndex = endIndex - 1;
while (!Character.isSpaceChar(line.charAt(startIndex - 1)))
--startIndex;
int left = Integer.parseInt(line.substring(startIndex, endIndex));
startIndex = endIndex + 1;
endIndex = line.indexOf('-', startIndex);
endIndex = line.charAt(endIndex - 1) == ',' ? line.indexOf('-', endIndex + 1) : endIndex;
int top = Integer.parseInt(line.substring(startIndex, endIndex));
startIndex = endIndex + 1;
endIndex = line.indexOf(',', startIndex);
int right = Integer.parseInt(line.substring(startIndex, endIndex));
startIndex = endIndex + 1;
//noinspection StatementWithEmptyBody
for (endIndex = startIndex + 1; Character.isDigit(line.charAt(endIndex)); ++endIndex)
;
int bot = Integer.parseInt(line.substring(startIndex, endIndex));
return new Rect(left, top, right, bot);
}
}
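Once the bounding box is known, the tap itself can be simulated at its center, e.g. (a sketch; "button_ok" is just an example id):
Rect r = ViewCoordsGetter.getViewBoundingBox("button_ok");
if (r != null) {
    // run on the device: adb shell input tap <x> <y>
    String tapCommand = "input tap " + r.centerX() + " " + r.centerY();
}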
| |
doc_23533211
|
Here is the code snippet:
import {Chart as ChartJS} from 'chart.js';
import { Bar } from 'react-chartjs-2';
import React, { useEffect, useReducer } from 'react';
import Layout from '../../components/Layout';
import { getError } from '../../utils/error';
ChartJS.register(
CategoryScale,
LinearScale,
BarElement,
Title,
Tooltip,
Legend
);
A: Two things: you are not importing all the elements you are trying to register, and since Chart.register is not a function I suspect you are using V2 of Chart.js, which is not tree-shakable. So you can't and don't have to use register to show the charts, unless you are going to upgrade to V3 and use tree-shaking.
If you want to register any plugins in V2 you need to register them on the plugins of the chart like so:
Chart.plugins.register(plugins);
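If you do move to Chart.js V3+ (which is what the register call in the question is written for), the missing piece is importing every element you register; a sketch:
import {
  Chart as ChartJS,
  CategoryScale,
  LinearScale,
  BarElement,
  Title,
  Tooltip,
  Legend,
} from 'chart.js';
ChartJS.register(CategoryScale, LinearScale, BarElement, Title, Tooltip, Legend);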
| |
doc_23533212
|
I thought I'd give wkhtmltopdf (via node-wkhtmltopdf) a try, but it expects a URL.
My current thought (which isn't great) is to expose the HTML via express since I'm already exposing a REST API with this server. While doing this isn't rocket science, it seems pretty complicated to just hand something content from memory.
Does anyone have a good pattern for using wkhtmltopdf from node with HTML held in memory?
A: Apparently there are two npm packages for this. If you google 'node wkhtmltopdf' you are likely to run into this one first: node-wkhtmltopdf
...but if you look further you'll find: wkhtmltopdf, which seems more actively maintained and has documentation explaining how to use it directly, as mentioned by @Ben Fortune in the comments above.
Using the correct package, the documentation explains well how to use HTML directly.
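A rough sketch with that package (piping to a file; check the package README for the exact options):
var wkhtmltopdf = require('wkhtmltopdf');
var fs = require('fs');
// pass the HTML string held in memory instead of a URL
wkhtmltopdf('<h1>Report</h1><p>Generated from an in-memory string.</p>')
  .pipe(fs.createWriteStream('out.pdf'));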
| |
doc_23533213
|
I have installed IIS Express.
The win7 system is up-to-date.
when I go to tools -> options, I don't see the web options.
any idea?
A: You need VS 2010 sp1 to use 'IIS Express' with Visual Studio. Please check if you have VS 2010 SP1 installed or not.
| |
doc_23533214
|
SELECT field1, field2, ... fieldN FROM table_name1, table_name2 ...
[WHERE condition1 [AND [OR]] condition2 ...]
So I want this search query. Anybody know how to get the final sql query please ?
A: You can find all the files related to search in the folder app/code/core/Mage/CatalogSearch/ . Magento search saves queries and results for caching and statistics. Make your query join the product collection with the search result table. You can find more in the file app/code/core/Mage/CatalogSearch/Model/Resource/Fulltext/Engine.php
Hope it helps.
Thanks
A: Magento collects data with a lot of internal queries, models and checks, and possibly more than one table. So it is not possible to get a single query like the one I'm looking for.
But we can get the collection query using,
Mage::log((string)$collection->getSelect(),null,'test.log',true);
or just print,
$Collection->printLogQuery(true);
| |
doc_23533215
|
For example, if there were a list of 3 reds in the column and 2 blues in the same column, they would need to be incremented as red1, red2, red3 and blue1, blue2. I've been searching but so far have not found anything that works how I need it to.
I have sorted the column and just need to add a number that increments by 1 for the same cell entry.
The column I have is 5000 products (with many random duplicates) and if the products have the same name then they need to be incremented by 1.
Any help would be appreciated. Thank you and I hope this makes sense.
A: You can use COUNTIF using a little trick with the range.
=COUNTIF($A$1:A1, A1)
And if you want to directly concatenate...
=A1&COUNTIF($A$1:A1, A1)
And drag the formula down.
Adjust the range accordingly.
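For example, with the concatenating formula entered in B1 and dragged down:
A       B
red     red1
red     red2
blue    blue1
red     red3
blue    blue2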
| |
doc_23533216
|
I'm trying to create a column layout that has two parts, left and right.
The left side will show console output, the right will contain all the controls (joining channels, commands etc).
I'm having issues creating the two columns. I have a JPanel that is the whole width and height of the window and has a border of 10 pixels, and then I have two panels within that; left and right.
The left and right panels are for some reason taking the whole size of the window, and the right panel is overlapping everything.
Here's an example picture: http://i.imgur.com/lc4vHVH.png
The white is the right panel, it should only be half the size and have an identical but black panel on the left of it.
Here's my current code, sorry if it's messy, new to the whole Swing GUI.. Thanks for any help.
package tk.TaylerKing.GribbyBot;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.KeyEvent;
import java.util.ArrayList;
import java.util.HashMap;
import javax.swing.BorderFactory;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;
import com.alee.laf.WebLookAndFeel;
import com.alee.laf.menu.WebMenuBar;
import com.alee.laf.menu.WebMenuItem;
import com.alee.laf.panel.WebPanel;
import com.alee.laf.rootpane.WebFrame;
public class GribbyBot extends WebFrame {
private static final long serialVersionUID = 4641597667372956773L;
public static HashMap<String, ArrayList<String>> connections = new HashMap<String, ArrayList<String>>();
public static void main(String[] args) throws Exception {
UIManager.setLookAndFeel(WebLookAndFeel.class.getCanonicalName());
SwingUtilities.invokeLater(new Runnable(){
public void run(){
GribbyBot gb = new GribbyBot();
gb.setVisible(true);
}
});
}
public GribbyBot(){
WebPanel panel = new WebPanel();
panel.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10));
panel.setPreferredSize(new Dimension(780, 580));
WebMenuBar menu = new WebMenuBar();
WebMenuItem file = new WebMenuItem("Exit");
file.setMnemonic(KeyEvent.VK_E);
file.addActionListener(new ActionListener(){
public void actionPerformed(ActionEvent e) {
System.exit(0);
}
});
menu.add(file);
setJMenuBar(menu);
WebPanel left = new WebPanel();
left.setPreferredSize(new Dimension(380, 580));
left.setBackground(Color.BLACK);
WebPanel right = new WebPanel();
right.setPreferredSize(new Dimension(380, 580));
right.setBackground(Color.WHITE);
add(panel);
panel.add(left);
panel.add(right);
setTitle("GribbyBot");
setSize(800, 600);
setLocationRelativeTo(null);
setDefaultCloseOperation(EXIT_ON_CLOSE);
setResizable(false);
}
}
On a side note, all the variables that are prefixed with "Web" are the same as Swing, but it's a custom GUI.
A: Override JComponent#getPreferredSize() instead of using setPreferredSize()
Read more Should I avoid the use of set(Preferred|Maximum|Minimum)Size methods in Java Swing?
If it is really needed, e.g. when performing custom painting, then try it this way:
Sample code:
final JPanel panel = new JPanel(){
@Override
public void paintComponent(Graphics g){
super.paintComponent(g);
// your custom painting code here
}
@Override
public Dimension getPreferredSize() {
return new Dimension(40, 40);
}
};
Why are you using the setPreferredSize() method when you can achieve this design easily using a proper layout manager such as BoxLayout, GridLayout, BorderLayout, etc.?
Read more about layout How to Use Various Layout Managers
EDIT
try JPanel panel = new JPanel(new GridLayout(1,2));
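Applied to the code in the question, that might look like the following (using setLayout, which WebPanel inherits; the 10-pixel gap is just illustrative):
panel.setLayout(new GridLayout(1, 2, 10, 0)); // one row, two equal-width columns; needs import java.awt.GridLayout
panel.add(left);   // fills the left half
panel.add(right);  // fills the right half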
| |
doc_23533217
|
Version #1: my main thought was answering the question (what can/does a customer do?)
Version #2: here I was thinking that methods relating to orders such as(view order, cancel order...) should be defined in the class order too.
A: Cohesion is about related responsibilities and clear purpose. The more different, unrelated responsibilities, the less cohesive a class is (see also this other SO question). A pragmatic advice from OOP pioneer Rebecca Wirfs-Brock and Alan McKean is in their book Object Design (which they now made available for free):
A good test of whether an object is well formed is that its responsibilities form a cohesive unit. Does it stick to its purpose? Are its responsibilities clearly stated? Do they match its role?
Option #1 is not cohesive. The responsibilities of Customer are about profile management, authentication, and order management. One could claim that the first two are related, but the last one clearly does not belong there. By the way, option #1 does not comply with the single responsibility principle, nor even with the fundamentals of encapsulation (Customer can access Order's internals; worse: it has to). This last observation lets us conclude that Order needs the Customer to manage it, and Customer could not work with a different Order, which shows strong coupling in addition to low cohesion.
Option #2 is cohesive. Customer is only about managing the customer (including login and logout), and Order is only about managing the orders.
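A rough sketch of option #2 (member names are inferred from the description above, not from the original diagrams):
class Customer {
    private String name;
    public void login(String password) { /* authentication lives here */ }
    public void logout() { }
    public void updateProfile(String newName) { this.name = newName; }
}
class Order {
    private Customer customer;  // an order references its customer
    public void cancel() { /* order management lives here, not in Customer */ }
    public void view() { }
}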
As a side remark: your option #1 is not realistic from an UML point of view, since all the properties are private, but no operation exist to access them ;-)
| |
doc_23533218
|
I am doing something like:
sudo mkdir /mnt/postgresql
sudo mkdir /mnt/postgresql/space
sudo chown -R postgres:postgres /mnt/postgresql
Then, as the posgres superuser (sudo -u postgres psql), I am using the following command :
CREATE TABLESPACE myspace LOCATION '/mnt/postgresql/space';
I get the following error:
ERROR: could not set permissions on directory "/mnt/postgresql/space": Operation not permitted
I am using psql 9.2.24 with CentOS Linux release 7.7.1908.
Why can't the PostgreSQL system user set permissions on a directory that it owns?
| |
doc_23533219
|
dropbox.com/s/fpw3obrqcx8ld1q/GrandAverage.RData?dl=0
The part of the code I am using for this is given below:
set <- GrandAverage[, 5:7];
Beh.Parameters <- function (lambda, alpha, temp) {
u = 0.5 * set$Gain^alpha + 0.5 * lambda * set$Loss^alpha
GambleProbability <- 1 / (1 + exp(-temp * u))
loglike <- set$Decision*log(GambleProbability) +
(1- set$Decision)*log(1-GambleProbability)
return(-sum(loglike))
}
temp_s <- 0.1 #runif(1, 0.1, 1)
ML.estim1 <- mle(Beh.Parameters, start = list (lambda = 1, alpha = 1, temp = temp_s), nobs = length(set$Decision))
ML.estim2 <- mle(Beh.Parameters, start = list(lambda = 0.1, alpha = 0.1, temp = temp_s), nobs = length(set$Decision))
I use the mle function to estimate the 3 parameters (lambda, alpha and temp). Without alpha, I receive this output, for example:
ML.estim1
Call:
mle(minuslogl = Beh.Parameters, start = list(lambda = 1, temp = temp_s),
nobs = length(set$Decision))
Coefficients:
lambda temp
1.298023 1.041057
When I try to run it without the alpha parameter it works fine, but when I include it I receive these two errors:
For the first MLE:
Error in optim(start, f, method = method, hessian = TRUE, ...) :
  non-finite finite-difference value [2]
For the second MLE:
Error in optim(start, f, method = method, hessian = TRUE, ...) :
  initial value in 'vmmin' is not finite
I tried to recode the matrix, singular value decomposition, BFGS etc.
Any help is welcome...thanks in advance.
A: Your Loss variable is negative. In R, raising negative values to a fractional power (i.e. set$Loss^alpha where alpha is non-integer) returns NaN values. (The only general alternative is to return a complex-valued answer, which you probably don't want.) Did you mean to code Loss as positive rather than negative? Or maybe you want -abs(set$Loss)^alpha ?
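For instance, in an R console:
(-2)^0.5        # NaN  (negative base, fractional exponent)
(-2)^2          # 4    (integer exponents are fine)
abs(-2)^0.5     # 1.414214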
As a general purpose debugging tip, it helps to add
cat(lambda,alpha,temp,-sum(loglike),"\n")
as the second-to-last-line of your objective function so you can better see what's going on.
| |
doc_23533220
|
One of the three protocols to solve the critical section problem is Progress:
Progress is : If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
Why are the processes deciding which process should go next? Isn't this the job of a scheduling algorithm?
| |
doc_23533221
|
$old_agreement = new \PayPal\Api\Agreement();
$old_agreement->setId("I-G0JJ5A9KMR--");
$agreementStateDescriptor = new \PayPal\Api\AgreementStateDescriptor();
$agreementStateDescriptor->setNote("Cancel the agreement");
try {
$old_agreement->cancel($agreementStateDescriptor, $this->_apiContext);
$cancelAgreementDetails = Agreement::get($old_agreement->getId(), $this->_apiContext);
} catch (Exception $ex) {
Log::error($ex);
}
A: You're using a deprecated SDK and an old version of PayPal Subscriptions that is not compatible with the current version of PayPal Subscriptions.
The current version does not use Agreements.
It uses Products, Plans, and Subscriptions only. Subscriptions have the same I-########### format as those old agreements.
There is no SDK for the current version of PayPal Subscriptions. When implementing API calls for the current version, as desired, you must obtain an access token using the client_id and secret and do the HTTPS API calls yourself, with no server SDK.
(The current Checkout-PHP-SDK has some non-subscription PHP code that you might want to adapt and use as a base, but it has nothing specific to subscriptions.)
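For illustration, the raw calls might look roughly like this (the token endpoint is the documented one; the subscription-cancel path is my assumption and should be checked against PayPal's Subscriptions API reference):
curl -X POST https://api-m.paypal.com/v1/oauth2/token \
  -u "CLIENT_ID:CLIENT_SECRET" \
  -d "grant_type=client_credentials"
# assumed cancel call for a subscription id in the I-########### format
curl -X POST https://api-m.paypal.com/v1/billing/subscriptions/I-XXXXXXXXXXXX/cancel \
  -H "Authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "reason": "Cancel the agreement" }'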
| |
doc_23533222
|
{
"ok": true,
"result": {
"code": "694kyH",
"short_link": "shrtco.de\/694kyH",
"full_short_link": "https:\/\/shrtco.de\/694kyH",
"short_link2": "9qr.de\/694kyH",
"full_short_link2": "https:\/\/9qr.de\/694kyH",
"short_link3": "shiny.link\/694kyH",
"full_short_link3": "https:\/\/shiny.link\/694kyH",
"share_link": "shrtco.de\/share\/694kyH",
"full_share_link": "https:\/\/shrtco.de\/share\/694kyH",
"original_link": "http:\/\/google.com"
}
}
{
"ok": false,
"error_code": 2,
"error": "This is not a valid URL, for more infos see shrtco.de\/docs"
}
How can I parse this JSON? I have tried to build my class like the following, but it is not working:
struct ShortLinkData: Codable {
let ok: Bool
let result: Result?
let errorCode: Int?
let error: String?
private enum CodingKeys : String, CodingKey { case ok, result, errorCode = "error_code", error }
init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
ok = try container.decode(Bool.self, forKey: .ok)
result = try container.decode(Result.self, forKey: .result)
errorCode = try container.decodeIfPresent(Int.self, forKey: .errorCode)
error = try container.decodeIfPresent(String.self, forKey: .error)
}
}
// MARK: - Result
struct Result: Codable {
let code, shortLink: String
let fullShortLink: String
let shortLink2: String
let fullShortLink2: String
let shortLink3: String
let fullShortLink3: String
let shareLink: String
let fullShareLink: String
let originalLink: String
enum CodingKeys: String, CodingKey {
case code
case shortLink = "short_link"
case fullShortLink = "full_short_link"
case shortLink2 = "short_link2"
case fullShortLink2 = "full_short_link2"
case shortLink3 = "short_link3"
case fullShortLink3 = "full_short_link3"
case shareLink = "share_link"
case fullShareLink = "full_share_link"
case originalLink = "original_link"
}
init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
code = try container.decode(String.self, forKey: .code)
shortLink = try container.decode(String.self, forKey: .shortLink)
fullShortLink = try container.decode(String.self, forKey: .fullShortLink)
shortLink2 = try container.decode(String.self, forKey: .shortLink2)
fullShortLink2 = try container.decode(String.self, forKey: .fullShortLink2)
shortLink3 = try container.decode(String.self, forKey: .shortLink3)
fullShortLink3 = try container.decode(String.self, forKey: .fullShortLink3)
shareLink = try container.decode(String.self, forKey: .shareLink)
fullShareLink = try container.decode(String.self, forKey: .fullShareLink)
originalLink = try container.decode(String.self, forKey: .originalLink)
}
}
My parsing code:
let str = String(decoding: data, as: UTF8.self)
print(str)
let shortURL = try? JSONDecoder().decode(ShortLinkData.self, from: data)
return shortURL!
I am always getting nil in shortURL object.
A: You should split this into several steps in order to avoid having to handle all these optionals in your model.
First create a struct that has only those properties that are guaranteed to be there. ok in your case:
struct OKResult: Codable{
let ok: Bool
}
then create one for your error state and one for your success state:
struct ErrorResult: Codable{
let ok: Bool
let errorCode: Int
let error: String
private enum CodingKeys: String, CodingKey{
case ok, errorCode = "error_code", error
}
}
struct ShortLinkData: Codable {
let ok: Bool
let result: Result
}
struct Result: Codable {
let code, shortLink: String
let fullShortLink: String
let shortLink2: String
let fullShortLink2: String
let shortLink3: String
let fullShortLink3: String
let shareLink: String
let fullShareLink: String
let originalLink: String
enum CodingKeys: String, CodingKey {
case code
case shortLink = "short_link"
case fullShortLink = "full_short_link"
case shortLink2 = "short_link2"
case fullShortLink2 = "full_short_link2"
case shortLink3 = "short_link3"
case fullShortLink3 = "full_short_link3"
case shareLink = "share_link"
case fullShareLink = "full_share_link"
case originalLink = "original_link"
}
}
Then you can decode the data:
guard try JSONDecoder().decode(OKResult.self, from: data).ok else{
let errorResponse = try JSONDecoder().decode(ErrorResult.self, from: data)
//handle error scenario
fatalError(errorResponse.error) // or throw custom error or return nil etc...
}
let shortlinkData = try JSONDecoder().decode(ShortLinkData.self, from: data)
Remarks:
*
*Your inits are not necessary.
*Never use try?; it will hide all errors from you.
*You would need to wrap this either in a do/catch block or make your function throwing and handle errors further up the tree.
A: Actually there are no optional fields. The server sends two distinct JSON structures.
A suitable way to decode both JSON strings is an enum with associated values. It decodes the ok key, then it decodes either the result dictionary or errorCode and error
enum Response : Decodable {
case success(ShortLinkData), failure(Int, String)
private enum CodingKeys : String, CodingKey { case ok, result, errorCode, error }
init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
let ok = try container.decode(Bool.self, forKey: .ok)
if ok {
let result = try container.decode(ShortLinkData.self, forKey: .result)
self = .success(result)
} else {
let errorCode = try container.decode(Int.self, forKey: .errorCode)
let error = try container.decode(String.self, forKey: .error)
self = .failure(errorCode, error)
}
}
}
In ShortLinkData the init method and the CodingKeys are redundant if you specify the convertFromSnakeCase key decoding strategy
struct ShortLinkData: Decodable {
let code, shortLink: String
let fullShortLink: String
let shortLink2, fullShortLink2: String
let shortLink3, fullShortLink3: String
let shareLink, fullShareLink: String
let originalLink: String
}
do {
let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase
let result = try decoder.decode(Response.self, from: data)
switch result {
case .success(let linkData): print(linkData)
case .failure(let code, let message): print("An error occurred with code \(code) and message \(message)")
}
} catch {
print(error)
}
| |
doc_23533223
|
I have done so, but it says that the paragraph is missing, link is missing etc.
Image to be recreated is here
My code so far is this:
<!DOCTYPE html>
<html>
<head>
<title>This page definitely has a title</title>
</head>
<body>
<font face = "Times New Roman">
<h1> <font size="5">This is some text!A level 1 heading with a tooltip that says 'tooltip' (with the quotes)</h1> </font>
<p><font size="3"><b>This level 3 heading has a link to <a href="https://www.ww3schools.com">https://www.ww3schools.com</a></b></p></font>
<p><font size="3">This is just a regular paragraph.</p>
</font>
</body>
</html>
Any help will be much appreciated, thank you!
A: Your closing tags look to be in the wrong place.
Put the closing /h1 after /font.
Your second line should be an h3 instead of a p, and put your closing /h3 after /font.
On your third line, put your closing /p after /font.
Hope this helps!
A: @JakeBlack is correct, please accept his answer. My only extra input is, it helps to find these problems early, by properly indenting code so you can see what code blocks are encapsulating. Also no need for the extra HTML tag you had at the top.
<!DOCTYPE html>
<head>
<title>This page definitely has a title</title>
</head>
<body>
<font face="Times New Roman">
<h1>
<font size="5">
This is some text!A level 1 heading with a tooltip that says 'tooltip' (with the quotes)
</font>
</h1>
<p>
<font size="3">
<b>
This level 3 heading has a link to <a href="https://www.ww3schools.com">https://www.ww3schools.com</a>
</b>
</font>
</p>
<p>
<font size="3">
This is just a regular paragraph.
</font>
</p>
</font>
</body>
</html>
A: Try this
<!DOCTYPE html>
<head>
<title>This page definitely has a title</title>
<style>
body {
font-family: "Times New Roman", Times, serif;
}
</style>
</head>
<body>
<p>This is some text!A level 1 heading with a tooltip that says 'tooltip' (with the quotes)</p>
<p>This level 3 heading has a link to <a href="https://www.ww3schools.com">https://www.ww3schools.com</a></p>
<p>This is just a regular paragraph.</p>
</body>
</html>
| |
doc_23533224
|
#include <iostream>
#include <fstream>
using namespace std;
int main(){
ifstream data_base;
data_base.open("database.txt", ios::out);
string name, a;
int b, c, d, e, test=0;
system ("cls");
cout<<"enter name "<<endl;
cin>>name;
while (data_base >> a >> b >> c >> d >> e){
if (name == a) test=1;
}
if (test!=1)
cout<<"wrong name"<<endl;
return 0;
}
A: Try the following:
#include <iostream>
#include <fstream>
#include <string>   // std::string
#include <cstdlib>  // system()
using namespace std;
int main()
{
ifstream data_base;
data_base.open("database.txt", ios::in); // open the file for reading
string name, a;
int b, c, d, e, test = 0;
system("cls");
cout << "enter name ";
cin >> name;
while (data_base >> a >> b >> c >> d >> e)
{
if (name == a)
{
test = 1;
break;
}
}
if (test!=1)
cout << "wrong name" << endl;
return 0;
}
Let me know if it works.
| |
doc_23533225
|
This is my grid with some controls after it has been animated. Notice the improper rendering of the text and the rectangle. How can I solve this rendering?
UPDATE: Code request by Rachel:
<TextBlock Height="35.667" Margin="73.667,19,0,0" TextWrapping="Wrap" VerticalAlignment="Top" FontSize="32" Foreground="Black" Text="close" HorizontalAlignment="Left" Width="73.667" UseLayoutRounding="True"/>
<Rectangle x:Name="BS2" Fill="#FF0178D3" HorizontalAlignment="Left" Height="64.166" Margin="25,0,0,0" Stroke="Black" VerticalAlignment="Top" Width="30.667" StrokeThickness="0" UseLayoutRounding="True"/>
A: Have you tried testing it on several different machines? WPF can be sensitive to differences in graphics cards.
A: try UseLayoutRounding and SnapsToDevicePixels = true
Edit:
I'm curious how it looks when you do something like
<ScaleTransform ScaleX="1.01" ScaleY="1.01" />
Also, you could try wrapping it in some other panel (a Canvas, for example).
A: Try changing TextOptions.TextRenderingMode and see if that makes a difference. The results vary per machine.
| |
doc_23533226
| ||
doc_23533227
|
Document
public virtual string Name { get; set; }
public virtual string Description { get; set; }
public virtual User User { get; set; }
Documentmap
public DocumentMap()
{
Map(x => x.Name);
Map(x => x.Description);
References(x => x.User);
}
User
public virtual string UserId { get; set; }
public virtual string FirstName { get; set; }
public virtual string MiddleInitial { get; set; }
public virtual string LastName { get; set; }
private readonly IList<Document> _documents = new List<Document>();
public virtual IEnumerable<Document> Documents { get { return _documents; } }
public virtual void Remove(Document document)
{
_documents.Remove(document);
}
public virtual void Add(Document document)
{
if (!document.IsNew() && _documents.Contains(document)) return;
_documents.Add(document);
}
Map(x => x.UserId);
Map(x => x.FirstName);
Map(x => x.MiddleInitial);
Map(x => x.LastName);
HasMany(x => x.Documents).Access.CamelCaseField(Prefix.Underscore);
Pretty straightforward (they inherit from a base class with stuff like created date, modified date, etc.)
when I try to get all doc by userid I get this
Invalid column name 'UserId'.
the column most definitely is in the table. It also lists several of the base class items as not being there.
I take the SQL and paste it into the query manager and I get IntelliSense saying they are invalid columns. I run it and it executes just fine. Furthermore, there are plenty of other objects using these base classes with no problems.
I have tried various things like explicitly mapping the key name, the column name using inverse etc to no avail. Don't really know what to do.
Thanks,
Raif
//EDIT as per request sorry it's so verbose. the database is created by nhibernate create schema
Document
public class Document : Entity
{
public virtual string Name { get; set; }
public virtual string Description { get; set; }
public virtual DocumentCategory DocumentCategory { get; set; }
[ValueOf(typeof(DocumentFileType))]
public virtual string FileType { get; set; }
public virtual string FileUrl { get; set; }
public virtual int? Pages { get; set; }
public virtual decimal? Size { get; set; }
public virtual User User { get; set; }
}
public class DocumentMap : EntityMap<Document>
{
public DocumentMap()
{
Map(x => x.Name);
Map(x => x.Description);
Map(x => x.FileUrl);
Map(x => x.Pages);
Map(x => x.Size);
Map(x => x.FileType);
References(x => x.DocumentCategory);
References(x => x.User);
}
}
Entity
public class Entity : IGridEnabledClass, IEquatable<Entity>
{
public virtual int EntityId { get; set; }
public virtual DateTime? CreateDate { get; set; }
public virtual DateTime? ChangeDate { get; set; }
public virtual int ChangedBy { get; set; }
public virtual bool Archived { get; set; }
public virtual bool IsNew()
{
return EntityId == 0;
}
User
public class User : DomainEntity, IUser
{
public virtual string UserId { get; set; }
[ValidateNonEmpty]
public virtual string FirstName { get; set; }
public virtual string MiddleInitial { get; set; }
[ValidateNonEmpty]
public virtual string LastName { get; set; }
public virtual string Title { get; set; }
public virtual DateTime? BirthDate { get; set; }
public virtual string StartPage { get; set; }
public virtual UserLoginInfo UserLoginInfo { get; set; }
public virtual UserStatus UserStatus { get; set; }
public virtual Photo HeadShot { get; set; }
private readonly IList<Document> _documents = new List<Document>();
public virtual IEnumerable<Document> Documents { get { return _documents; } }
public virtual void Remove(Document document)
{
_documents.Remove(document);
}
public virtual void Add(Document document)
{
if (!document.IsNew() && _documents.Contains(document)) return;
_documents.Add(document);
}
several more collections
public class UserMap : DomainEntityMap<User>
{
public UserMap()
{
Map(x => x.UserId);
Map(x => x.FirstName);
Map(x => x.MiddleInitial);
Map(x => x.LastName);
Map(x => x.BirthDate);
Map(x => x.StartPage);
References(x => x.UserStatus);
References(x => x.UserLoginInfo);
References(x => x.HeadShot);
HasMany(x => x.Documents).Access.CamelCaseField(Prefix.Underscore);
Database tables (the SELECT scripts below were generated via the "Script Table as SELECT" menu item in Management Studio):
SELECT [EntityId]
,[CreateDate]
,[ChangeDate]
,[ChangedBy]
,[Archived]
,[Name]
,[Description]
,[FileUrl]
,[Pages]
,[Size]
,[FileType]
,[DocumentCategoryId]
,[UserId]
FROM [DecisionCriticalSuite].[dbo].[Document]
GO
SELECT [EntityId]
,[CreateDate]
,[ChangeDate]
,[ChangedBy]
,[Archived]
,[TenantId]
,[OrgId]
,[UserId]
,[FirstName]
,[MiddleInitial]
,[LastName]
,[BirthDate]
,[StartPage]
,[UserStatusId]
,[UserLoginInfoId]
,[HeadShotId]
,[OrganizationId]
FROM [DecisionCriticalSuite].[dbo].[User]
GO
error from nhprof
ERROR:
Invalid column name 'UserId'.
Invalid column name 'UserId'.
Invalid column name 'EntityId'.
Invalid column name 'EntityId'.
Invalid column name 'CreateDate'.
Invalid column name 'ChangeDate'.
Invalid column name 'ChangedBy'.
Invalid column name 'Archived'.
Invalid column name 'FileType'.
Invalid column name 'UserId'.Could not execute query: SELECT documents0_.UserId as UserId1_, documents0_.EntityId as EntityId1_, documents0_.EntityId as EntityId49_0_, documents0_.CreateDate as CreateDate49_0_, documents0_.ChangeDate as ChangeDate49_0_, documents0_.ChangedBy as ChangedBy49_0_, documents0_.Archived as Archived49_0_, documents0_.Name as Name49_0_, documents0_.Description as Descript7_49_0_, documents0_.FileUrl as FileUrl49_0_, documents0_.Pages as Pages49_0_, documents0_.Size as Size49_0_, documents0_.FileType as FileType49_0_, documents0_.DocumentCategoryId as Documen12_49_0_, documents0_.UserId as UserId49_0_ FROM [Document] documents0_ WHERE documents0_.UserId=@p0
A: Make sure nhibernate is querying against the same database as what you are querying against in sql management studio.
A: I just had the same issue because I had mapped Entity1.Entity2 as Entity3.
So when joining, it would attempt to use a property from Entity3 as if it existed on Entity2.
| |
doc_23533228
|
ADD
Copy files to the image at build time. The image has all the files so you can deploy very easily. On the other hand, needing to build every time doesn't look like a good idea in development because building requires the developer to run a command to rebuild the container; additionally, building the container can be time-consuming.
VOLUME
I understand that using docker run -v you can mount a host folder inside your container; this way you can easily modify files and watch the app in your container react to the changes. It looks great in development, but I am not sure how to deploy my files this way.
A: The VOLUME instruction creates a data volume in your Docker container at runtime. The directory provided as an argument to VOLUME is a directory that bypasses the Union File System, and is primarily used for persistent and shared data.
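For example, a single Dockerfile line is enough to declare one (the /webapp path here matches the inspect output below):
VOLUME /webapp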
If you run docker inspect <your-container>, you will see under the Mounts section there is a Source which represents the directory location on the host, and a Destination which represents the mounted directory location in the container. For example,
"Mounts": [
{
"Name": "fac362...80535",
"Source": "/var/lib/docker/volumes/fac362...80535/_data",
"Destination": "/webapp",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
Here are 3 use cases for docker run -v:
*
*docker run -v /data: This is analogous to specifying the VOLUME instruction in your Dockerfile.
*docker run -v $host_path:$container_path: This allows you to mount $host_path from your host to $container_path in your container during runtime. In development, this is useful for sharing source code on your host with the container. In production, this can be used to mount things like the host's DNS information (found in /etc/resolv.conf) or secrets into the container. Conversely, you can also use this technique to write the container's logs into specific folders on the host. Both $host_path and $container_path must be absolute paths.
*docker run -v my_volume:$container_path: This creates a data volume in your container at $container_path and names it my_volume. It is essentially the same as creating and naming a volume using docker volume create my_volume. Naming a volume like this is useful for a container data volume and a shared-storage volume using a multi-host storage driver like Flocker.
Notice that the approach of mounting a host folder as a data volume is not available in Dockerfile. To quote the docker documentation,
Note: This is not available from a Dockerfile due to the portability and sharing purpose of it. As the host directory is, by its nature, host-dependent, a host directory specified in a Dockerfile probably wouldn't work on all hosts.
Now if you want to copy your files to containers in non-development environments, you can use the ADD or COPY instructions in your Dockerfile. These are what I usually use for non-development deployment.
A: ADD
The fundamental difference between these two is that ADD makes whatever you're adding, be it a folder or just a file, actually part of your image. Anyone who uses the image you've built afterwards will have access to whatever you ADD. This is true even if you afterwards remove it, because Docker works in layers and the ADD layer will still exist as part of the image. To be clear, you only ADD something at build time and cannot ever ADD at run-time.
A few examples of cases where you'd want to use ADD:
*
*You have some requirements in a requirements.txt file that you want to reference and install in your Dockerfile. You can then do: ADD ./requirements.txt /requirements.txt followed by RUN pip install -r /requirements.txt
*You want to use your app code as context in your Dockerfile, for example, if you want to set your app directory as the working dir in your image and to have the default command in a container run from your image actually run your app, you can do:
ADD ./ /usr/local/git/my_app
WORKDIR /usr/local/git/my_app
CMD python ./main.py
VOLUME
Volume, on the other hand, just lets a container run from your image have access to some path on whatever local machine the container is being run on. You cannot use files from your VOLUME directory in your Dockerfile. Anything in your volume directory will not be accessible at build-time but will be accessible at run-time.
A few examples of cases where you'd want to use VOLUME:
*
*The app being run in your container makes logs in /var/log/my_app. You want those logs to be accessible on the host machine and not to be deleted when the container is removed. You can do this by creating a mount point at /var/log/my_app by adding VOLUME /var/log/my_app to your Dockerfile and then running your container with docker run -v /host/log/dir/my_app:/var/log/my_app some_repo/some_image:some_tag
*You have some local settings files you want the app in the container to have access to. Perhaps those settings files are different on your local machine vs dev vs production. Especially so if those settings files are secret, in which case you definitely do not want them in your image. A good strategy in that case is to add VOLUME /etc/settings/my_app_settings to your Dockerfile, run your container with docker run -v /host/settings/dir:/etc/settings/my_app_settings some_repo/some_image:some_tag, and make sure the /host/settings/dir exists in all environments you expect your app to be run.
| |
doc_23533229
|
By creating a separate package for these modules and uploading them on PyPI, I could mark such a package as a dependency for the packages I actually want to distribute. These convenience modules are, however, small and of limited use and interest, such that I don't think they warrant distributing as a separate package on PyPI. On the other hand, I am hesitant to copy these convenience modules (i.e., use cp convenience_module.py projectX/.) into each project directory, as this creates multiple copies of the same file both in the VCS repository housing my Python code and in the different source distribution tarballs I would post to PyPI. Is there an elegant solution to this problem?
A: You don't say why you're hesitant to 'provide copies'. In general, I think a reasonable approach is to think about how you've set things up for yourself to use the convenience modules. Did you install them in site-packages (or equivalent), or did you just depend on them being in the directory you ran the code from? However you use the modules, is that situation ideal, or is there a way that would be nicer for you?
Start with that, and figure out how to automate it through setup.py, which lets you put things wherever you want on the system (though I strongly discourage abusing this capability).
Whether you distribute them as a tarball or with the package that needs them, you still have to maintain all of the files, so the only real question is whether you intend for those convenience modules to develop their own user communities with their own support requests, etc., or whether they're decidedly intended only for use in support of this other module.
If you intend those modules to be used only for the one module, include them in the package, perhaps in a 'utils' package inside the distribution. Otherwise you're just cluttering the index with things people might think are useful, but are really joined at the hip with something else that drives the changes and maintenance of them.
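Using the names from the question, that 'utils' layout might look like the sketch below (main_module.py stands in for whatever module actually uses the helpers):
projectX/
    setup.py
    projectX/
        __init__.py
        main_module.py
        utils/
            __init__.py
            convenience_module.py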
If you intend those modules to be generic, and intend to maintain them as such, and think they have use outside of supporting this module, distribute them separately.
A: As far as I know, distributing these small packages via PyPI is the only viable option. Yes, it clutters the index with near-useless packages, but it's something that should be solved by the PyPI maintainers, not package developers. Another alternative is to use the stdlib's or other utility packages' data and functions rather than reinventing the wheel.
Just make sure you describe that utils package as such, or extend it into something more useful for others.
| |
doc_23533230
|
Of course I put nextLine and nextInt; I just want to ask how to make NetBeans stop and ask me for the input.
A: You will see the Output window at the bottom; there you will see the prompt for input.
If the Output window is not visible, go to the Window menu --> Output --> Output,
or press Ctrl+4.
A: Because you need to print a prompt message first, like this:
// this is your input object
Scanner input = new Scanner(System.in);
// print a prompt
System.out.println("give some input");
// read the line of input
String line = input.nextLine();
System.out.println(line);
A: It is quite simple. Here is an example for you.
import java.util.Scanner;

public class Test {
    public static void main(String[] args) {
        Scanner INPUT = new Scanner(System.in);
        int i;
        System.out.print("Enter number: "); // This displays to the user to enter a number.
        i = INPUT.nextInt(); // Sets i equal to the user input.
    }
}
If you are using a string for input try INPUT.nextLine();.
You can find more info on it here: http://download.oracle.com/javase/1,5,0/docs/api/java/util/Scanner.html
| |
doc_23533231
|
The permissions on my default compute engine service account are very broad; I would like to create a new service account with much more restricted permissions, e.g. only access to BigQuery, and only read/write access to certain datasets. How can I set up or change the notebook instances to only have the permissions of this new service account? Is it possible? Also, please note whether or not I would be able to access the instance's shell, given that the permission is set to "Single user only".
A: You should first create a custom service account with your desired permissions. If you want to grant permissions to specific datasets, you can do that from BigQuery by adding this particular service account.
Once your service account has all the permissions set you can specify that service account when creating a notebook instance:
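For instance, a hedged sketch of the create command (flag names follow the gcloud notebooks surface; the instance name, account, image, and location are placeholders):
gcloud notebooks instances create my-notebook \
    --service-account=restricted-sa@my-project.iam.gserviceaccount.com \
    --vm-image-project=deeplearning-platform-release \
    --vm-image-family=common-cpu \
    --location=us-central1-a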
| |
doc_23533232
|
The standard suggestion is to add this:
namespace System.Runtime.CompilerServices
{
public sealed class ExtensionAttribute : Attribute { }
}
This is the approach suggested by more than one Microsoft employee and was even featured in MSDN magazine. It's widely hailed by many bloggers as having 'no ill effects'.
Oh, except it will cause a compiler error from a VB.NET project targeting .NET 3.5 or higher.
The authors of Microsoft.Core.Scripting.dll figured it out, and changed 'public' to 'internal'.
namespace System.Runtime.CompilerServices
{
internal sealed class ExtensionAttribute : Attribute { }
}
Which seemed to solve the VB compatibility issue.
So I trustingly used that approach for the latest version (3.2.1) of the widely-used ImageResizing.Net library.
But then, we start getting this compiler error (original report), more or less randomly, for certain users targeting .NET 3.5+.
Error 5 Missing compiler required member
'System.Runtime.CompilerServices.ExtensionAttribute..ctor'
Because the MSBuild/VisualStudio compiler apparently doesn't bother to look at scoping rules when resolving naming conflicts, and the order of assembly references plays a not-quite-documented role, I don't fully understand why and when this happens.
There are a few hacky workarounds, like changing the assembly namespace, recreating the project file, deleting/readding System.Core, and fiddling with the target version of the .NET framework. Unfortunately, none of those workarounds are 100% (except aliasing, but that's an unacceptable pain).
How can I fix this while
*
*Maintaining support for extension method use within the assembly,
*Maintaining support for .NET 2.0/3.0
*Not requiring multiple assemblies for each .NET framework version.
Or, is there a hotfix to make the compiler pay attention to scoping rules?
Related questions on SO that don't answer this question
*
*C# Extension methods in .NET 2.0
*Using Extension Methods with .NET Framework 2.0
*strange warning about ExtensionAttribute
*Ambigious reference for ExtensionAttribute when using Iron Python in Asp.Net
*Should I support .NET 2.0?
*Using extension methods in .NET 2.0?
A: We ran into the same issue with IronPython. http://devhawk.net/2008/10/21/the-fifth-assembly/
We ended up moving our custom version of ExtensionAttribute to its own assembly. That way, customers could choose between referencing our custom ExtensionAttribute assembly or System.Core - but never both!
The other tricky thing is that you have to always deploy the ExtensionAttribute assembly - even if you don't reference it in your project. Your project assemblies that expose extension methods will have an assemblyref to that custom ExtensionAttribute assembly, so CLR will get upset if it can't be found.
Given the hard requirement of .NET 2.0 support, I would think the best bet would be to simply not use extension methods at all. I'm not familiar with the ImageResizer project, but it sounds like this was a recent change in ImageResizer. How feasible would it be to change the extension methods to traditional static methods? We actually thought about that for IronPython/DLR, but it wasn't feasible (We were merged with LINQ at that point and LINQ had made heavy use of extension methods for essentially its entire existence).
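To illustrate that last suggestion: converting an extension method into a traditional static method removes the need for ExtensionAttribute entirely, and only the call sites change. A hedged sketch with illustrative names:
// Before: extension method -- needs ExtensionAttribute on .NET 2.0/3.0
public static class ImageExtensions
{
    public static string Describe(this System.Drawing.Bitmap source)
    {
        return source.Width + "x" + source.Height;
    }
    // usage: myBitmap.Describe();
}

// After: plain static helper -- compiles unchanged on .NET 2.0
public static class ImageUtils
{
    public static string Describe(System.Drawing.Bitmap source)
    {
        return source.Width + "x" + source.Height;
    }
    // usage: ImageUtils.Describe(myBitmap);
}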
| |
doc_23533233
|
structure(list(id1 = c(1, 2, 3, 4, 4, 4, 4, 4, 4, 4), id2 = c("a",
"b", "c", "d", "e", "f", "g", "h", "i", "j"), b1 = c(NA, NA,
NA, 1L, 1L, 1L, 1L, 1L, 1L, 1L), b2 = c(1, NA, NA, NA, NA, NA,
1, 1, 1, 1), b3 = c(NA, 1, NA, NA, NA, NA, NA, NA, 1, 1), b4 = c(NA,
NA, 1, NA, NA, NA, NA, NA, 1, 1)), .Names = c("id1", "id2", "b1",
"b2", "b3", "b4"), row.names = c(NA, 10L), class = "data.frame")
df
id1 id2 b1 b2 b3 b4
1 1 a NA 1 NA NA
2 2 b NA NA 1 NA
3 3 c NA NA NA 1
4 4 d 1 NA NA NA
5 4 e 1 NA NA NA
6 4 f 1 NA NA NA
7 4 g 1 1 NA NA
8 4 h 1 1 NA NA
9 4 i 1 1 1 1
10 4 j 1 1 1 1
I need to get it into long format, while ONLY keeping values of 1. Of course, I tried using gather from tidyr and also melt from data.table, to no avail, as their memory requirements are explosive. My original data had zeros and ones, but I filled the zeroes with NA and hoped the na.rm = TRUE option would help with the memory issue. But it does not.
With just the ones retained and lengthened, my data frame will fit easily in the memory I have.
Is there a better way to get at this vs. using the standard methods - reasonable compute as a tradeoff for better memory fit is acceptable.
My desired output is the equivalent of:
library(dplyr)
library(tidyr)
df %>% gather(b, value, -id1, -id2, na.rm = TRUE)
id1 id2 b value
1 4 d b1 1
2 4 e b1 1
3 4 f b1 1
4 4 g b1 1
5 4 h b1 1
6 4 i b1 1
7 4 j b1 1
8 1 a b2 1
9 4 g b2 1
10 4 h b2 1
11 4 i b2 1
12 4 j b2 1
13 2 b b3 1
14 4 i b3 1
15 4 j b3 1
16 3 c b4 1
17 4 i b4 1
18 4 j b4 1
# or
reshape2::melt(df, id=c("id1","id2"), na.rm=TRUE)
# or
library(data.table)
melt(setDT(df), id=c("id1","id2"), na.rm=TRUE)
Currently, the call to gather on my full data set gives me this error, which I believe is due to memory issue:
Error in .Call("tidyr_melt_dataframe", PACKAGE = "tidyr", data, id_ind, :
negative length vectors are not allowed
| |
doc_23533234
|
class TestCategory(MPTTModel):
name = models.CharField(max_length=100)
parent = TreeForeignKey('self', on_delete=models.CASCADE, null=True, blank=True, related_name='children')
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
update_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name_plural = "Test Categories"
class MPTTMeta:
order_insertion_by = ['name']
def __str__(self):
return self.name
class Test(models.Model):
QUALITATIVE = "1"
QUANTITATIVE = "0"
TEST_TYPE_CHOICES = [
(QUALITATIVE, 'QUALITATIVE'),
(QUANTITATIVE, 'QUANTITATIVE'),
]
test_category = TreeForeignKey(TestCategory, on_delete=models.CASCADE, related_name='test_category')
name = models.CharField(max_length=50)
unit = models.CharField(max_length=10)
specimen = models.ForeignKey(Specimen, on_delete=models.CASCADE, blank=True, null=True)
test_type = models.CharField(max_length=2, choices=TEST_TYPE_CHOICES, default=QUALITATIVE)
reference_text = models.TextField()
price = models.DecimalField(max_digits=10, decimal_places=2, blank=True, default=0, verbose_name='Price Per Unit')
is_active = models.BooleanField(default=True)
def __str__(self):
return self.name
class Meta:
ordering = ['test_category']
# views.py
def report_create_view(request, pk):
context ={}
form = CreateReportForm(request.POST or None)
if form.is_valid():
patient = Patient.objects.get(pk=pk)  # look up the patient for this report
form.instance.patient_id = patient.pk
form.save()
context['form']= form
context['patient'] = Patient.objects.all().filter(pk=pk)
context['nodes'] = TestCategory.objects.all()
return render(request, "lab/report_form.html", context)
#template report_form.html
{% load crispy_forms_tags %}
{% load mptt_tags %}
<form method="post" novalidate>
{% csrf_token %}
{{ form|crispy }}
{% for node,structure in nodes|tree_info %}
{% if structure.new_level %}<ul><li>{% else %}</li><li>{% endif %}
{{ node.name }}
{% for level in structure.closed_levels %}</li></ul>{% endfor %}
{% endfor %}
<button type="submit" class="btn btn-success">Add Report</button>
</form>
This is what my template looks like right now. As you can see, my form only shows the category names (I am getting the TestCategory nodes only, not the actual Test model). How can I actually render the tests grouped by category?
| |
doc_23533235
|
Basically what I want is to use osx + netbeans for this project but it seems to reference java3d methods that are not included in the outdated version of java3d in the mac 1.6 JDK. My first attempt at resolving this issue was to include the java3d 1.5.2 libraries as external jars for the project but it seems netbeans is still trying to reference the old libraries in the 1.6 JDK instead of the 1.5.2 j3d libraries.
Also, when I explore the 1.5.2 jars within the NetBeans file explorer, the methods (that are not included in the 1.6 JDK) do not show up under their respective classes. However, when I do the same thing on Solaris the methods do show up. So basically, I know that the jars I am trying to include do in fact contain the methods/classes I need; NetBeans just won't find them.
If anything is unclear please ask me to clarify it. I got confused just writing this up.
Thanks in advance for the help!
A: Not sure if this is related, but you could try adjusting the Java Platform and Source/Binary Format settings of your Netbeans project.
Right click on the Netbeans project and click properties, under the 'Sources' Panel adjust the JDK version in the 'Source/Binary Format' combo-box. Under the 'Libraries' panel adjust the platform version in the 'Java Platform' combo-box.
Fiddling with those settings solved a similar problem for me.
A: It sounds like you may need to adjust your CLASSPATH to put the newer java3d jars ahead of the older ones. Try putting the newer Java3D JARs in /Library/Java/Extensions. And, if that doesn't work, set the CLASSPATH variable in ~/.MacOSX/environment.plist to point to the newer JARs (will require logging out and then logging back in for the change to take effect). This might not be nice to other Java applications that depend on Java3D, if they require the older version, though. Basically, what you are encountering is "DLL hell" or "dependency hell", except with Java. Using the Maven2 build system, which requires explicit versioning of dependencies and which automatically downloads and installs required dependencies, would fix that problem. Also note that projects that use the Maven2 are automatically recognized by NetBeans.
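For reference, a hedged sketch of what ~/.MacOSX/environment.plist might contain (the paths are illustrative; the jar names are the standard Java3D 1.5.2 ones):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CLASSPATH</key>
    <string>/Users/me/java3d-1.5.2/j3dcore.jar:/Users/me/java3d-1.5.2/j3dutils.jar:/Users/me/java3d-1.5.2/vecmath.jar</string>
</dict>
</plist>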
| |
doc_23533236
|
docker run -d ubuntu:12.04
docker inspect {{containerhash}} | grep ID
// "ID": "d846ae242838de66f12414fbc8807acb3c77778bdb81babab7115261f4242284"
sudo lxc-attach -n d846ae242838de66f12414fbc8807acb3c77778bdb81babab7115261f4242284 -- /bin/bash
This no longer works because of the 0.9.0 switch to libcontainer.
How can we do this via libcontainer?
There is an option to switch to lxc with a startup option, but I'd like to know how this can be accomplished via libcontainer.
A: Check if you have the nsenter tool. It should be in the util-linux package, after version 2.23. Note: unfortunately, Debian and Ubuntu still ship with util-linux 2.20.
If you have nsenter, it's relatively easy. First, find the PID of the first process of the container (actually, any PID will do, but this is just easier and safer):
PID=$(docker inspect --format '{{.State.Pid}}' my_container_id)
Then, enter like this:
nsenter --target $PID --mount --uts --ipc --net --pid
Voilà! Note, however, that nsenter won't honor capabilities.
If you don't have nsenter (e.g. if you are using Debian or Ubuntu, or your distro has too old util-linux), you can download util-linux and compile it. I have a nsenter binary, maybe I can upload it to the Docker registry if that could help anyone.
Another option is to use nsinit, a helper tool for libcontainer. I don't think that there is a lot of documentation for nsinit since it's very new, but check https://asciinema.org/a/8090 for an example. You will need a Go build environment.
| |
doc_23533237
|
I have coded the first part of creating a map and displaying it and rotating with compass.
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
locationManager.desiredAccuracy = kCLLocationAccuracyBest
locationManager.delegate = self
locationManager.requestWhenInUseAuthorization()
locationManager.startUpdatingHeading()
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
func locationManager(manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
let rotation: Double = newHeading.magneticHeading * 3.14159 / 180
//let point: CGPoint = CGPointMake(0, -23)
print(rotation)
mapView.transform = CGAffineTransformMakeRotation(-CGFloat(rotation))
}
Here are 4 screenshots of how the screen looks when I rotate at 4 different angles.
The issue is the way it's displayed.
Issue 1: When I rotate, there is white empty space because my map won't scale to the screen size. How can I solve this?
Issue 2: The rotation is very cranky. Is there a way to make it smoother?
A: You are doing this the hard way. You can use the userTrackingMode property of the MKMapView.
If you set this property to MKUserTrackingModeFollowWithHeading then the mapview will do the work for you. You don't even need to implement the location manager delegate methods.
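A minimal sketch of that approach in the Swift 2-era syntax the question uses (assuming the same mapView and locationManager outlets); the manual transform and the didUpdateHeading delegate method can be removed entirely:
override func viewDidLoad() {
    super.viewDidLoad()
    locationManager.requestWhenInUseAuthorization()
    mapView.showsUserLocation = true
    // MKMapView rotates the map smoothly to follow the heading for you
    mapView.setUserTrackingMode(.FollowWithHeading, animated: true)
}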
| |
doc_23533238
|
[...]<body>
<p>Some Text</p>
<img src="path/nodejs.jpg?">
</body>[...]
And a basic node js function as request callback for server.
fs.readFile(filePath, function(err, content){
if (err) throw err;
// * console.log(content.toString());
var mime = require('mime').lookup(filePath);
res.setHeader('Content-Type', mime + "; charset=utf-8");
res.end(content);
});
I wanted to console.log the content (the * line), but when I did this my laptop began beeping and booping, i.e. making sounds. Why does this happen?
A: A binary byte of 7, seen as ASCII text via the console, produces a beep.
Example: console.log(String.fromCharCode(7))
Readup more at https://en.wikipedia.org/wiki/Bell_character
These days, encountering it is more often an annoying accident than something useful.
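If you still want to inspect binary file contents, one option is to hex-encode before logging so control bytes stay visible instead of being interpreted by the terminal (a small sketch):
var fs = require('fs');

fs.readFile('path/nodejs.jpg', function (err, content) {
  if (err) throw err;
  // Hex output keeps bytes like 0x07 (BEL) from beeping the terminal
  console.log(content.toString('hex'));
});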
| |
doc_23533239
|
My code:
String userEnteredString = UserEntered.getText();
String userHomeLocal = Tutschedule.userHome;
FileReader dataFile = null;
try {
dataFile = new FileReader(userHomeLocal+"/Users/"+userEnteredString+".data");
} catch (FileNotFoundException ex) {
Logger.getLogger(LoginForm.class.getName()).log(Level.SEVERE, null, ex);
}
String dbData = dataFile.toString();
System.out.println(dbData);
JSONObject dataInfo = (JSONObject)dbData.parse(dataFile);
And here are my imports:
import java.io.*;
import java.util.Iterator;
//import java.io.FileNotFoundException;
import org.json.*;
//import java.io.FileReader;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.json.simple.parser.*;
and here is the part that writes to the DB. I am sure that the problem doesn't lie here because it writes fine, since I checked the DB file it created and it's in there (the line that sends the user to the login form is not there when I want to create a user, for now):
public class Tutschedule {
// TODO Add the MySQL Database Support
public static String userHome;
/**
* @param args the command line arguments
*/
public static void main(String[] args) throws JSONException {
boolean loggedIn = false;
if (loggedIn != true) {
LoginForm.LoginForm();
}
userHome = System.getProperty("user.home")+"/TutSchedule";
System.out.print(userHome);
Scanner scan = new Scanner(System.in);
String username = scan.next();
String password = scan.next();
JSONObject user = new JSONObject();
user.put("username", username);
user.put("password", password);
boolean dirCreate;
String directories =userHome+"/Users";
dirCreate = (new File(directories)).mkdirs();
try {
FileWriter userDataFile = new FileWriter(userHome+"/Users/"+username+".data");
userDataFile.write(user.toString());
userDataFile.flush();
userDataFile.close();
} catch (IOException e) {
e.printStackTrace();
}
System.out.print(user);
}
}
A: I suspect you may need to change this
JSONObject dataInfo = (JSONObject)dbData.parse(dataFile);
to
JSONObject dataInfo = (JSONObject)JSONValue.parse(dataFile);
as dbData is a String and has no parse() method.
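Putting it together, a minimal sketch using json-simple's parser (assuming `path` points at the file the code above wrote):
import java.io.FileReader;
import java.io.IOException;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import org.json.simple.parser.ParseException;

// ...
JSONParser parser = new JSONParser();
try (FileReader reader = new FileReader(path)) {
    JSONObject dataInfo = (JSONObject) parser.parse(reader);
    System.out.println(dataInfo.get("username"));
} catch (IOException | ParseException e) {
    e.printStackTrace();
}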
| |
doc_23533240
|
ImportError: cannot import name 'QGraphWidget'
I used QtDesigner to make a form and promoted a QGraphicsWidget. I know I did it correctly (I've done it at least 10 times to try and resolve the issue), but the error persists.
I'm using Windows 7, Anaconda, and PyCharm, but I tried running the code in other environments and still got the error.
A: The documentation implies that the promoted name can be anything; however, this seems to be untrue (at least at the moment).
Under “Promoted class name”, enter the class name you wish to use (“PlotWidget”, “GraphicsLayoutWidget”, etc).
After re-doing my QGraphicsWidget for the 6th time, I decided to name it one of the example names in the tutorial, which seems to have solved the problem.
In other words name your widget and promoted widget "PlotWidget", "ImageView", "GraphicsLayoutView", or "GraphicsView". (Please keep in mind I have only tested "PlotWidget".)
| |
doc_23533241
|
Here is $Arr1
Array
(
[0] => Windows
)
and here is $Arr2
Array
(
[0] => 5.0
)
How would I combine them so that $Arr[0] = "Windows5.0"?
array_merge($Arr1, $Arr2) appends $Arr2 below $Arr1, which is not what I want.
A: array_combine may work for you as long as each array is equal length and the keys are valid. This will structure your data better and then you can use a foreach loop.
<?php
$a = array('Windows', 'Mac', 'Linux');
$b = array('5.0', '6.0', '3.14');
$c = array_combine($a, $b);
print_r($c);
?>
The above example will output:
Array
(
[Windows] => 5.0
[Mac] => 6.0
[Linux] => 3.14
)
So if you need to get the value for Windows it would be:
<?php
foreach($c as $key => $value) {
echo $key." ".$value."\n";
}
?>
Which would display:
Windows 5.0
Mac 6.0
Linux 3.14
A: try this
$Arr1 = Array ( "Windows");
$Arr2 = Array ( " 5.0");
$arr = array( $Arr1[0] . $Arr2[0] );
var_dump($arr);
output
array (size=1)
0 => string 'Windows 5.0' (length=11)
A: For your particular example, after you do array_merge, do implode on the resulting array, this will give you the desired output.
$Arr = [implode(array_merge($Arr1, $Arr2))]; // works for PHP 5.4+
$Arr = array(implode(array_merge($Arr1, $Arr2))); // for older versions
I have a suspicion that your requirements are a little more complex than that.
For more info on implode, see: http://php.net/manual/en/function.implode.php
If you would like to join the values from multiple entries, try using array_map:
$Arr1 = array('windows', 'floor', 'door');
$Arr2 = array('5.0', '6.0', '7.0');
$Arr = array_map(function($a, $b) { return $a . $b; }, $Arr1, $Arr2);
This will output:
Array
(
[0] => windows5.0
[1] => floor6.0
[2] => door7.0
)
For more info on array_map, see: http://php.net/manual/en/function.array-map.php
A: This will work -- especially interesting when dealing with multiple values:
$arr3 = array();
foreach ($arr1 as $key=>$value)
{
$arr3[] = $value.$arr2[$key];
}
var_dump($arr3);
output:
array(1) { [0]=> string(10) "Windows5.0" }
| |
doc_23533242
|
[OutputCache(CacheProfile = "MyProfile")]
public Result MyControllerAction()....
I have some text in the matching view (Views/MyController/MyControllerAction.aspx) that needs to change with each page load, even though the returned page is cached. I think this is also called donut caching.
How can I accomplish this? And where do I need to put the callback function, if indeed this is possible? Can I specify the callback at the Controller level or the Page level?
Thanks
--MB
A: ASP.Net MVC does not yet support donut caching; it's planned for version 3.
| |
doc_23533243
|
I know that unique indexes and non-unique indexes have differences on performance.
But my question is, will there be any difference on performance if the column has both unique constraint and unique index, and just unique index without a unique constraint?
Another question is, does the column statistics have any affect on unique index usage?
A: Oracle Database enforces unique constraints with (unique) indexes.
When checking for duplicate entries, querying the table, etc. the database will use the index. Not the constraint. So for the most part performance will come out the same:
create table t (
c1 int, c2 int
);
alter table t
add constraint u
unique ( c1 );
create unique index ui
on t ( c2 );
insert into t
with rws as (
select level x from dual
connect by level <= 10000
)
select x, x from rws;
commit;
exec dbms_stats.gather_table_stats ( user, 't' ) ;
alter session set statistics_level = all;
set serveroutput off
select * from t
where c1 = 1;
select *
from table(dbms_xplan.display_cursor(null, null, 'IOSTATS LAST'));
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 1 | 1 |00:00:00.01 | 3 |
|* 2 | INDEX UNIQUE SCAN | U | 1 | 1 | 1 |00:00:00.01 | 2 |
----------------------------------------------------------------------------------------------
select * from t
where c2 = 1;
select *
from table(dbms_xplan.display_cursor(null, null, 'IOSTATS LAST'));
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 3 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 1 | 1 | 1 |00:00:00.01 | 3 |
|* 2 | INDEX UNIQUE SCAN | UI | 1 | 1 | 1 |00:00:00.01 | 2 |
----------------------------------------------------------------------------------------------
There is one exception. A unique constraint can be the target of a foreign key. Whereas a unique index (alone) can't:
alter table t
add constraint fk
foreign key ( c1 )
references t ( c2 );
ORA-02270: no matching unique or primary key for this column-list
alter table t
add constraint fk
foreign key ( c2 )
references t ( c1 );
Provided you created unique and foreign key constraints, this enables the optimizer to eliminate tables in some queries. Which could give large performance benefits:
select t1.* from t t1
join t t2
on t1.c1 = t2.c2;
select *
from table(dbms_xplan.display_cursor(null, null, 'IOSTATS LAST'));
-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 5000 |00:00:00.01 | 202 |
| 1 | NESTED LOOPS | | 1 | 10000 | 5000 |00:00:00.01 | 202 |
| 2 | TABLE ACCESS FULL| T | 1 | 10000 | 5000 |00:00:00.01 | 60 |
|* 3 | INDEX UNIQUE SCAN| UI | 5000 | 1 | 5000 |00:00:00.01 | 142 |
-------------------------------------------------------------------------------------
select t1.* from t t1
join t t2
on t1.c2 = t2.c1;
select *
from table(dbms_xplan.display_cursor(null, null, 'IOSTATS LAST'));
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 5000 |00:00:00.01 | 60 |
|* 1 | TABLE ACCESS FULL| T | 1 | 10000 | 5000 |00:00:00.01 | 60 |
------------------------------------------------------------------------------------
Table stats will affect whether the optimizer uses the index. If you search for unique values less than 100:
select * from t
where c1 <= 100;
The optimizer is more likely to go for a full table scan if there are only 100 rows in the table. But if there are millions, the index becomes much more attractive.
| |
doc_23533244
|
<Shell.TitleView>
<Grid
Margin="0,0,0,0"
RowDefinitions="*"
ColumnDefinitions="*,*">
<local1:AutoFitLabel
x:Name="labelTitle"
Grid.Column="0"
Text="My page title"
TextColor="{StaticResource TextGray}"
FontSize="Title"
VerticalTextAlignment="Center"
VerticalOptions="Center"
FontFamily="VisbyRegular"
MaxLines="1">
</local1:AutoFitLabel>
<ImageButton
Clicked="ShowSearchPage"
BackgroundColor="White"
Padding="{OnPlatform Android='14', iOS='7'}"
HorizontalOptions="End"
Grid.Column="1"
Source="search">
</ImageButton>
</Grid>
</Shell.TitleView>
Everything works as expected when the page shows up, but if I switch from the current tab (Tab0) to another tab (Tab1) and then go back to Tab0, the title of the page is shown with the system default font.
FontFamily="VisbyRegular" is no longer taking effect.
This is happening only on Android; on iOS everything is working as expected.
I've also tried forcing the custom font onto "labelTitle" in the OnAppearing method, but with no effect on the label itself.
Is this a known issue or am I doing something wrong?
Any way i can workaround this problem?
Thanks
A: As you can see here: https://github.com/xamarin/Xamarin.Forms/issues/14248
it is an open issue in Xamarin.Forms.
You can try using a custom renderer:
[assembly: ExportRenderer(typeof(AppShell), typeof(MyShellRenderer))]
namespace App30.Droid
{
public class MyShellRenderer : ShellRenderer
{
public MyShellRenderer(Context context) : base(context)
{
}
protected override IShellBottomNavViewAppearanceTracker CreateBottomNavViewAppearanceTracker(ShellItem shellItem)
{
return new CustomBottomNavAppearance();
}
}
public class CustomBottomNavAppearance : IShellBottomNavViewAppearanceTracker
{
public void Dispose()
{
}
public void ResetAppearance(BottomNavigationView bottomView)
{
}
public void SetAppearance(BottomNavigationView bottomView, ShellAppearance appearance)
{
IMenu menu = bottomView.Menu;
for (int i = 0; i < bottomView.Menu.Size(); i++)
{
IMenuItem menuItem = menu.GetItem(i);
var title = menuItem.TitleFormatted;
SpannableStringBuilder sb = new SpannableStringBuilder(title);
int a = sb.Length();
//here I set fontsize 20
sb.SetSpan(new AbsoluteSizeSpan(20,true), 0, a, SpanTypes.ExclusiveExclusive);
menuItem.SetTitle(sb);
}
}
}
}
| |
doc_23533245
|
Can it be done by Facebook canvas? I have an access to a public web server where I can host my app and handle POST requests
A: Yes, canvas and page tab apps display external content in an iframe.
You’ll need your content to be available via HTTPS (because facebook.com is served via HTTPS, and so your iframed content needs to be as well, otherwise the browser will block it as mixed/insecure content.)
| |
doc_23533246
|
public class A{
Map map;
public Object getValue(String key){
return map.get(key);
}
public void remove(String key){
synchronized(map){
map.remove(key);
}
}
}
A is a singleton. getValue is heavily accessed throughout the app by multiple threads. I am adding a new method, remove, that removes a key from the map. If remove is implemented as above,
*
*When a thread is in the synchronized block of the remove method, I assume it will acquire a lock on the map object. Does that mean other threads trying to access the map via the getValue method will be blocked? (I'd like them to.)
*When no thread is in the synchronized block of remove method, will threads accessing the getValue method function as usual i.e. not block each other? (I'd like that too).
I want the getValue threads to block only if there is a thread performing the remove operation.
A:
When a thread is in the synchronized block of the remove method, I assume it will acquire a lock on the map object. Does that mean other threads trying to access the map via the getValue method will be blocked?
No. Which means you've got a problem, unless you happen to be using a thread-safe map implementation.
When no thread is in the synchronized block of remove method, will threads accessing the getValue method function as usual i.e. not block each other? (I'd like that too).
They won't block each other, no. Again, you'll need to make sure that's okay with whichever Map implementation you're using, although it's much more likely to be okay than reading at the same time as writing.
You should consider using a ConcurrentMap implementation (e.g. ConcurrentHashMap), at which point you don't need any synchronization at all.
If you can't use that, I'd recommend synchronizing in both getValue and remove - and measuring the performance. Acquiring an uncontended lock is reasonably cheap - do you really need to go lock-free? (Using ConcurrentHashMap is a fairly simple way of avoiding the issue, of course, but you should always consider whether extra complexity is needed to achieve the performance you require before you start micro-optimizing.)
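For reference, a minimal sketch of the ConcurrentMap approach mentioned above, assuming String keys as in the question:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class A {
    // Thread-safe map: gets stay cheap and removes are safe,
    // with no explicit synchronized blocks anywhere
    private final ConcurrentMap<String, Object> map = new ConcurrentHashMap<String, Object>();

    public Object getValue(String key) {
        return map.get(key);
    }

    public void remove(String key) {
        map.remove(key);
    }
}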
A: If I understood correctly your need, you could take a look at the ConcurrentMap and of course ConcurrentHashMap which I believe was introduced with Java 5.0 and supports a level of concurrency.
A: You don't show how the Map instance is instantiated, but assuming it is not a thread-safe collection instance, this code is not thread-safe.
A: 1. getValue() is not synchronized; when a thread acquires the lock of an object, it has control over all the synchronized blocks... not the non-synchronized ones.
So other threads can access getValue() while a thread is in the synchronized(map) block.
2. Use Hashtable, which is a synchronized Map.
A: You have a chicken and egg problem with
I want the getValue threads to block only if there is a thread performing the remove operation.
Without some sort of inter-thread interaction, you cannot figure out whether there is some other thread performing a remove.
The correct way to implement getValue(...) is to synchronize on the map.
I recommend dropping your own locking and using a ConcurrentHashMap, which has had a lot of work put into concurrent performance.
A: The one rule of synchronized is that only one thread can be in a synchronized(foo) block at a time, for the same foo. That's the only rule of synchronized.
Well, there's some complicated stuff about memory barriers and the like, and a single thread can be in several nested synchronized(foo) blocks for the same foo at the same time:
void thing() {
synchronized(foo) {
stuff(); // this works fine!
}
}
void stuff() {
synchronized(foo) {
doMoreStuff();
}
}
... but the rule stated above is basically the key to understanding synchronized.
| |
doc_23533247
|
Clients gain access to the webapp via Client-Cert authentication. The username
extracted from the certificate is of the form "CN=..., OU=.., O=.., L=.., ST=.."
Since we are only interested in the value of CN we wrote a X509UsernameRetrieverClass, placed the jar in $TOMCAT_HOME/lib and
configured it in the JNDIRealm:
<Context docBase="MYApp.war" path="/myapp" reloadable="true">
<Realm
X509UsernameRetrieverClassName="com.my.custom.X509UsernameRetriever"
authentication="simple"
className="org.apache.catalina.realm.JNDIRealm"
connectionName="ldapuser"
connectionPassword="****"
connectionURL="ldap://localhost:389"
roleBase="xxxx"
roleName="cn"
roleSearch="(member={0})"
roleSubtree="true"
userBase="xxxx"
userSearch="(cn={0})"
userSubtree="true"/>
</Context>
However, when tomcat boots up we get the following warning:
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context/Realm} Setting property 'X509UsernameRetrieverClassName' to 'com.my.custom.X509UsernameRetriever' did not find a matching property.
The configuration is ignored and users aren't authenticated, presumably because the default retriever is used, returning the long-winded username which isn't found
in the underlying LDAP system...
The same configuration works fine for tomcat 7. I can't really see what we did wrong. The X509UsernameRetrieverClassName is documented. https://tomcat.apache.org/tomcat-6.0-doc/config/realm.html#JNDI_Directory_Realm_-_org.apache.catalina.realm.JNDIRealm
does anyone have any ideas ?
thanks,
Michael
A: Solved the problem.
The X509UsernameRetriever class doesn't appear in Tomcat 6 until version 6.0.36, hence the attribute being ignored in version 6.0.32.
The workaround was to override the JNDIRealm.getPrincipal(X509Certificate usercert) in a custom class.
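For anyone hitting the same wall, a hedged sketch of that workaround (RealmBase declares a protected getPrincipal(X509Certificate); the CN parsing here is deliberately naive and illustrative):
import java.security.Principal;
import java.security.cert.X509Certificate;
import org.apache.catalina.realm.JNDIRealm;

public class CnJndiRealm extends JNDIRealm {
    @Override
    protected Principal getPrincipal(X509Certificate usercert) {
        String dn = usercert.getSubjectX500Principal().getName();
        // Pull the CN component out of the full DN so the LDAP
        // lookup is done against just the common name
        for (String part : dn.split(",")) {
            part = part.trim();
            if (part.startsWith("CN=")) {
                return getPrincipal(part.substring(3));
            }
        }
        return getPrincipal(dn);
    }
}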
| |
doc_23533248
|
I know TypeScript supports decorators, but I've been having a bit of trouble understanding them.
I created an easy test code to modify with lodash wired to make explaining the solution easier:
https://codepen.io/thinkbonobo/pen/XKyaKY?editors=0010
I'd like to memoize run so that it returns the answer without the forced wait. If it is successfully memoized it will return "MEMOIZED!!! :)"
run() {
return this.doSomeProcessing();
}
(N.B. while coding, I would suggest commenting out the wait function so it doesn't add its synchronous lag as the program tries to run.)
A: You can easily memoize run with the once function https://lodash.com/docs#once:
run = _.once(() => {
return this.doSomeProcessing();
});
Of course this makes it a member instead of a method but that's okay.
A: A simple and elegant solution for having a @memoize() decorator:
import * as _ from 'lodash';

function memoize() {
return function (target: any, functionName: string) {
target[functionName] = _.memoize(target[functionName]);
};
}
Live example: http://codepen.io/fabien0102/pen/KgPrOy?editors=0012
A: fabien0102's solution can easily be improved to support getters:
export function memoize() {
return function (target: any, functionName: string, descriptor: PropertyDescriptor) {
if (descriptor.get) descriptor.get = _.memoize(descriptor.get, function<T>(this: T):T { return this; });
else descriptor.value= _.memoize(descriptor.value);
};
}
| |
doc_23533249
|
ID Column1 Column2 Column3 Column4 Column5
001 A C D A B
002 A D A B A
003 B K Q C Q
004 A K E E B
I want to create a new column in a view which gives me the count of "A"s across the 5 source columns for each row. The result should look like this:
ID Column1 Column2 Column3 Column4 Column5 SumOfA
001 A C D A B 2
002 A D A B A 3
003 B K Q C Q 0
004 A K E E B 1
I've seen several examples here but they return instances of "A" across records - I want the count of "A"s across the columns, not aggregating across rows. Any thoughts?
A: You can use multiple CASE expressions to count the A values:
select id,
column1,
column2,
column3,
column4,
column5,
case when column1 = 'A' then 1 else 0 end +
case when column2 = 'A' then 1 else 0 end +
case when column3 = 'A' then 1 else 0 end +
case when column4 = 'A' then 1 else 0 end +
case when column5 = 'A' then 1 else 0 end TotalA
from yourtable
See SQL Fiddle with Demo
A: For a situation like this, I prefer using CROSS APPLY to do an intermediate logical UNPIVOT.
SELECT
*
FROM
dbo.YourTable T
CROSS APPLY (
SELECT Count(*)
FROM (VALUES (Column1), (Column2), (Column3), (Column4), (Column5)) C (Val)
WHERE Val = 'A'
) A (Cnt)
This also has the advantage of being able to change which value you're counting in just one place instead of in 5 places, and it is extremely easy to add more columns if necessary.
See this working in a Sql Fiddle
However, that you are struggling to build this query shows that the table design is probably not best. If the column names themselves contain data (such as time periods, regions, or other kinds of similar values) then it is almost certain that the design is suboptimal. I strongly recommend that you store the data unpivoted instead--use one row and column per value, with a new column denoting which original column it came from.
If you can't change the schema for whatever reason, you might consider creating a view:
CREATE VIEW dbo.YourTableUnpivoted
AS
SELECT
T.ID,
C.*
FROM
dbo.YourTable T
CROSS APPLY (VALUES
('Column1', Column1),
('Column2', Column2),
('Column3', Column3),
('Column4', Column4),
('Column5', Column5)
) C (Col, Val);
(You can also use the UNPIVOT operator for this.) Then you can use this as if it were the redesigned table:
SELECT
ID,
Count(*)
FROM
dbo.YourTableUnpivoted
WHERE
Val = 'A'
GROUP BY
ID;
A: SELECT
(CASE WHEN Column1 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column2 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column3 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column4 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column5 = 'A' THEN 1 ELSE 0 END) AS SumOfA
FROM myTable
A: You can use CASE statement to achieve this:
SELECT
column1, column2, column3, column4, column5,
(CASE WHEN Column1 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column2 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column3 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column4 = 'A' THEN 1 ELSE 0 END
+ CASE WHEN Column5 = 'A' THEN 1 ELSE 0 END) AS TotalSum
FROM yourTable
| |
doc_23533250
|
I was able to get elasticsearch loaded into angular as a module and was following along with this sample repo: https://github.com/spalger/elasticsearch-angular-example/blob/master/README.md
When I tried to run the cluster.state call, I received the following error: Request header field Authorization is not allowed by Access-Control-Allow-Headers in preflight response.
According to the sample repo, I need to configure the elasticsearch.yml file in order to allow CORS. I couldn't seem to find this file so I created my own but how do I now get my js files to "require" or "read" from it?
A: This is the configuration file for elasticsearch. It does not belong to the client.
Read about elasticsearch installation in their site and setup your development elasticsearch server to work against.
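For reference, settings along these lines in the server's config/elasticsearch.yml enable CORS with an Authorization header (a hedged sketch; tighten allow-origin for anything beyond development):
http.cors.enabled: true
http.cors.allow-origin: "http://localhost:8080"   # your app's origin (placeholder)
http.cors.allow-headers: "Authorization, X-Requested-With, Content-Type, Content-Length"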
Good luck.
| |
doc_23533251
|
Thanx in advance.
A: You can mute the sound when your app starts and unmute when it finishes using AudioManager
@Override
public void onResume(){
super.onResume();
// this mute the Sound
AudioManager mgr = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
mgr.setStreamMute(AudioManager.STREAM_SYSTEM, true);
}
@Override
public void onPause(){
super.onPause();
AudioManager mgr = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
mgr.setStreamMute(AudioManager.STREAM_SYSTEM, false);
}
A: You can do this in two ways: 1) in XML, by setting the attribute android:soundEffectsEnabled to true or false; 2) in an Activity, programmatically via setSoundEffectsEnabled(true/false).
A: Yes, there is a way. There is an attribute and a method if you want to enable/disable.
android:soundEffectsEnabled
Boolean that controls whether a view should have sound effects enabled for events such as clicking and touching.
Here you go.
No need for audio manager or muting for sound stream of system.
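For example, the XML route is a one-attribute change per view (a minimal sketch):
<!-- Disables the click sound for this one button only -->
<Button
    android:id="@+id/quiet_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="No click sound"
    android:soundEffectsEnabled="false" />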
| |
doc_23533252
|
https://imgur.com/w0JmSyb
But... when I download the Bootswatch Lumen theme, update the CSS file referenced in the App_Start/BundleConfig.cs file, and then rebuild and refresh, the menu at the top renders as the application title with an empty button which expands the menu items.
https://imgur.com/7463kkF
https://imgur.com/UQ8ZkCY
This is not what happens in the tutorials. The menu items remain as a menu bar at the top and it looks like a basic theme change only.
Apologies if this is an easy one (I hope it is!) I'm a veteran WebFormer of 12 years finally making the MVC jump.
A: The MVC 5 template has Bootstrap 3 included. Current Bootswatch files are built for Bootstrap 4, not Bootstrap 3.
| |
doc_23533253
|
Since May 13th, 2015 I've been seeing a problem when using the list method.
If I programmatically create a file in the appfolder, I receive the response 200 OK and the file is created. If I then use the list method to list the files in the appfolder, the file just created is not listed. It happened with several files yesterday. This morning, the files were listed normally, but the same thing happens with any file I create today (correctly created but not listed).
Three screens follow: the 1st one is creating a test file in the appfolder using the extension's code. the image shows the server response (200 OK, file created). The second screen shows the list request (list all files whose title contains 'test', it should include the file just created). The third screen shows the response from the server (an empty items list).
There is a way to get them listed: If I create a file, it returns (among other data) the file Id. If I make a simple GET request for that Id, then it is listed from then on.
All other methods are working as expected (as usual), but the list method is giving me this problem since yesterday. Since there was no change in the extension's code, I assume there must have been a change in the API code.
A: Apparently, it was a caching issue in Google Drive, which has been resolved as of May 15th 2015: Google Drive developers community post
A: Yes, I noticed the same issue starting today. I'm using the Google Drive Java API to create files in the hidden application data folder (appfolder). Files are created correctly, but "Hidden data size" is 0. I noticed that my files appear after a few hours! I reported this issue to Google several hours ago via the Google Developers Console, but the issue still occurs. I think more users should do this to get their attention on this critical issue.
| |
doc_23533254
|
Now with Xcode 11.3.1, I am trying to upgrade it, but it gives me the below error.
Is there any process to migrate directly without using Xcode 10.1?
| |
doc_23533255
|
<div class="content">
<h2>Question</h2>
<p class="author">bla bla <strong>אחת</strong></p>
<dl class="">
<dt><label for="vote_1">option1</label></dt>
<dd style="width: auto;"><input type="radio" name="vote_id[]" id="vote_1" value="1" /></dd><dd class="resultbar"><div class="pollbar" style="width:77%">10</div></dd>
<dd>77%</dd>
</dl>
<dl class="">
<dt><label for="vote_2">Option 2</label></dt>
<dd style="width: auto;"><input type="radio" name="vote_id[]" id="vote_2" value="2" /></dd><dd class="resultbar"><div class="pollbar" style="width:23%">3</div></dd>
<dd>23%</dd>
</dl>
<dl class="">
<dt><label for="vote_3">Option 3</label></dt>
<dd style="width: auto;"><input type="radio" name="vote_id[]" id="vote_3" value="3" /></dd><dd class="resultbar"><div class="pollbar" style="width:0%">0</div></dd>
<dd>No votes</dd>
</dl>
<dl>
<dt> </dt>
<dd class="resultbar">Total : 13</dd>
</dl>
<dl style="border-top: none;">
<dt> </dt>
<dd class="resultbar"><input type="submit" name="update" value="submit" class="button1" /></dd>
</dl>
</div>
<span class="corners-bottom"><span></span></span></div>
<input type="hidden" name="creation_time" value="1326810654" />
The problem is this: in phpBB, the <dl> elements have the same width as the "content" div, and so they are displayed line-by-line. However, in my code the <dl> elements have width 0 (what is the exact meaning of width 0? On-screen they have width larger than 0) and all the <dl>-s are displayed in the same line.
I looked around the CSS files, but as far as I can see there is no difference. What is the relevant property here? Setting <dl>'s "width" style option in the CSS file to 100% or "inherit" has no effect.
A: The code looks like a strange presentation of tabular data (something that should be presented using <table> elements), and probably the intent is to use CSS for the purpose but somehow the relevant CSS code has gone missing.
Without CSS, each dt and dd element is rendered each on a line of its own, because this is the default rendering in HTML.
The percentage widths are presumably meant to act as graphic representation of percentages, so that e.g. 73% corresponds to a 73% wide block. This cannot be seen, however, without some CSS that makes the blocks visible with e.g. border or background color. The idea fails e.g. for 0%, for obvious reasons.
Without more context, including CSS being applied, it’s difficult to say much more.
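As a hedged illustration of the missing piece, rules roughly like these (selectors and colors are guesses at what phpBB's stylesheet provides) would stack the definition lists and make the percentage bars visible:
dl { clear: both; overflow: hidden; }   /* one poll option per line */
dt { float: left; width: 30%; }
dd { float: left; margin: 0 0.5em; }
dd.resultbar .pollbar {
    background-color: #6a8db3;          /* makes the % width visible */
    color: #fff;
    text-align: right;
}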
| |
doc_23533256
|
I'm trying to turn off the hover for the current page in a navigation menu.
div.nav {
width: 100%;
padding: 6px;
height: 40px;
}
.nav li {
color: #FFFFFF;
display: inline;
list-style-type: none;
text-align: center;
text-transform: uppercase;
padding: 20px;
margin: 0px;
height: 40px;
}
li.current {
background-color: #424242
}
li.current:hover {
background-color: inherit;
}
.nav li:hover {
background-color: #737373;
}
<div class="nav">
<ul>
<li class="current">Home</li>
<li><a href="null">About</a>
</li>
<li><a href="null">Contact</a>
</li>
<li><a href="null">Gallery</a>
</li>
</ul>
</div>
Here is the jsfiddle:
https://jsfiddle.net/swordams/jk6z5aqj/
I want the li for home to stay dark and not change on hover. I've tried setting the hover background color to "inherit", but that doesn't work.
Thanks!
A: You can use the CSS :not() pseudo-class:
.nav li:hover:not(.current) {
background-color: #737373;
}
jsFiddle example
A: You can use the solution by j08691, but ultimately the problem with your CSS is that .nav li:hover is more specific than li.current:hover. Tacking .nav onto the selector will do the trick:
.nav li.current:hover {
background-color: inherit;
}
A: Just make the active/current li background color !important:
li.current {
background-color: #424242 !important;
}
| |
doc_23533257
|
╔═══╦════════════╦═══════════════╗
║ ║ col_1 ║ col_2 ║
╠═══╬════════════╬═══════════════╣
║ 1 ║ 106 ║ I am Alex. ║
║ 2 ║ 106 ║ I'm a student ║
║ 3 ║ 106 ║ I like apple ║
║ 4 ║ 1786 ║ Dog is a pet ║
║ 5 ║ 1786 ║ Jack is my pet║
╚═══╩════════════╩═══════════════╝
and I would like to first groupby "col_1" and then join the strings in "col_2", with an if-else condition checking whether each string ends with a full stop (".").
If it ends with a full stop, join the next string of the same group with a space (" ".join).
Otherwise, join them with a full stop.
End result will look something like this:
╔═══╦════════════╦══════════════════════════════════════════╗
║ ║ col_1 ║ col_2 ║
╠═══╬════════════╬══════════════════════════════════════════╣
║ 1 ║ 106 ║ I am Alex. I'm a student. I like apple ║
║ 2 ║ 1786 ║ Dog is a pet. Jack is my pet ║
╚═══╩════════════╩══════════════════════════════════════════╝
My code is stated as below:
new_df = df.groupby(['col_1'])['col_2'].apply(lambda x: ' '.join(x) if x[-1:] == '.' else '. '.join(x)).reset_index()
However I got this error instead:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
Your help is much appreciated!
A: Assuming none of your strings have trailing spaces, why not just apply '. '.join(...) and then collapse the doubled full stops?
df = pd.DataFrame({
'col1': [106,106,106,1786,1786],
'col2': ['I am Alex.','I\'m a student','I like apple','Dog is a pet','Jack is my pet']
})
result = df.groupby('col1', as_index=False).agg({'col2': lambda x: '. '.join(x)})
result['col2'] = result['col2'].str.replace('.. ', '. ', regex=False)
This gets you, as expected:
col1 col2
0 106 I am Alex. I'm a student. I like apple
1 1786 Dog is a pet. Jack is my pet
A: df.groupby('col_1')['col_2'].apply(lambda x: x.str.cat()).reset_index()
| |
doc_23533258
|
Thanks
A: Declare one static variable and increment it inside your method. After that you can see the count.
Below is the sample code:-
static int i=0; //declare outside your method
-(void)yourMethod{
++i;
NSLog(@"%d",i);
}
| |
doc_23533259
|
@Repository("userRepository")
public class UserRepositoryImpl implements UserRepository {
@PersistenceContext
private EntityManager em;
That is traditional - no problems here. Just assume I refer to @PersistenceContext the same way as @Autowired, which is practically the same except for JPA. Now consider another example where my head explodes.
Instead of injecting entityManger inside UserRepositoryImpl, we can inject it inside UserRepository.
public interface UserRepository {
@PersistenceContext
private EntityManager em;
User save(User user);
Please explain whether there are benefits of doing this and are there any traps I have to be aware.
The motivation is: if we code to interfaces then we should do it 100%. Yet when we do it (in the second approach), we enforce dependencies on other implementations that don't necessarily require the same dependencies. What if I want to use jdbcTemplate in my other implementation? Of course you can retaliate with ORM native querying, but that's not the point.
What I have shown is not interface dependency injection. Just upfront declaration. :-)
Thanks!
I just realised I may have been slightly wrong - in an interface it should be setter injection, not the shorthand notation; anyway, the question remains the same.
| |
doc_23533260
|
I've got an Ajax call sending some data using POST to the server:
$.ajax({
cache: false,
type: 'POST',
url: Config.API_ENDPOINT_REGISTRATION,
dataType : 'json',
data: info,
success: this.successHandler.bind(this)
});
Everything is behaving as expected in all modern browsers, except IE8 and IE9.
To make the Ajax call possible from jQuery I had to use the XDomainRequest script provided here: https://github.com/MoonScript/jQuery-ajaxTransport-XDomainRequest
Before adding this script the call wasn't happening.
The issue now is that the request.body I get in Express is always empty if the data is coming from IE8/IE9.
I suppose something is going on with the bodyParser unable to parse the data received from IE8/IE9: the request.body is always empty.
I've been trying to resolve this issue for an entire day now, with no success.
Any idea or something that could point me in the right direction?
A: So after digging a bit more into this issue (a while ago) I noticed that the content-type header was empty on both IE8 and IE9.
Because empty is not the default value, I checked whether something was interfering with my XDomainRequest settings, and I found that jQuery-ajaxTransport-XDomainRequest was altering xdr.contentType, setting it to "" (empty).
The solution I created for this issue is an Express.js middleware to set/override the content-type header for the requests to a specific route. Here is a usage example:
var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var registrationController = require('./controllers/registrationController');
// the middleware described below, published as express-content-type-override
var contentTypeOverride = require('express-content-type-override');
app.use( '/registration', contentTypeOverride({
contentType: 'application/x-www-form-urlencoded'
}));
app.use( bodyParser.json() );
app.use( bodyParser.urlencoded({
extended: true
}));
app.post( '/registration', registrationController.index );
app.listen( 3000 );
You can find the middleware here: express-content-type-override
A: In IE9 the XDomainRequest doesn't set a content-type, which bodyparser requires in order to read the request body as json.
I got around this by setting the content-type explicitly before passing the request through to body parser, like so:
app.use(function(req, res, next) {
// IE9 doesn't set headers for cross-domain ajax requests
if(typeof(req.headers['content-type']) === 'undefined'){
req.headers['content-type'] = "application/json; charset=UTF-8";
}
next();
})
.use(bodyParser.json());
| |
doc_23533261
|
I understand some of the syntax, but I'm stuck at the line of code beginning with "On Error Resume Next"; I don't know which Java construct corresponds to it.
This part is throwing me off
( Not sure what "Next" needs to be in Java )
any help would be appreciated
<% <!-- each string had DIM in front ex. DIM fEmptyRecordset So i believe I changed what was needed --->
<%
Boolean fEmptyRecordset = ""; ' as Boolean
Boolean fFirstPass = ""; ' as Boolean
Boolean fNeedRecordset = ""; ' as Boolean
Command cmdTemp = ""; ' as Command
Double dblUnits = ""; ' as Double
Double dblRateFrom = ""; ' as Double
Double dblRateTo = ""; ' as Double
Double newRate = ""; ' as Double
Double newUnits = ""; ' as Double
String NewCurrName = ""; ' as String
String NewCurrName2 = ""; ' as String
' Convert currency
fEmptyRecordset = true;
fFirstPass = true;
fNeedRecordset = true;
<!--- here is where I'm stuck, I don't quite understand the "On Error Resume Next" function that ASP uses. I'm just looking for gudance in which function this is related to in Java --->
On Error Resume Next
if (fNeedRecordset) {
Set con_currency_sdo = Server.CreateObject("ADODB.Connection")
con_currency_sdo.ConnectionTimeout = con_currency_sdo_ConnectionTimeout
con_currency_sdo.CommandTimeout = con_currency_sdo_CommandTimeout
con_currency_sdo.Open con_currency_sdo_ConnectionString, con_currency_sdo_RuntimeUserName, con_currency_sdo_RuntimePassword
Set cmdTemp = Server.CreateObject("ADODB.Command")
Set GetRate = Server.CreateObject("ADODB.Recordset")
' Find out what the common base currency is and get the corresponding rates for both
' the From and To currencies. Order desc to get USD, GBP, EUR. (USD is preferred)
cmdTemp.CommandText = "SELECT FROMCURR.BASE_CURR_CODE, " & _
"FROMCURR.CURR_RATE_BASE_FC FROMRATE, TOCURR.CURR_RATE_BASE_FC TORATE, " & _
"CURRNAMEFROM.BLMBG_CURR_NAME FROMCURRNAME, CURRNAMETO.BLMBG_CURR_NAME TOCURRNAME " & _
"FROM AON_CURR_DAILY_EXCH_RATE_SDO FROMCURR, AON_CURR_DAILY_EXCH_RATE_SDO TOCURR, " & _
"AON_CURRENCY_SDO CURRNAMEFROM, AON_CURRENCY_SDO CURRNAMETO " & _
"WHERE FROMCURR.CURR_CODE='" & Request.Form("selBaseCurr") & _
"' AND TOCURR.CURR_CODE='" & Request.Form("selTargetCurr") & _
"' AND FROMCURR.BASE_CURR_CODE=TOCURR.BASE_CURR_CODE" & _
" AND FROMCURR.CURR_CODE=CURRNAMEFROM.CURR_CODE" & _
" AND TOCURR.CURR_CODE=CURRNAMETO.CURR_CODE" & _
" AND FROMCURR.CURR_DATE='" & dateString & _
"' AND TOCURR.CURR_DATE='" & dateString & _
"' ORDER BY FROMCURR.BASE_CURR_CODE DESC;"
cmdTemp.CommandType = 1
Set cmdTemp.ActiveConnection = con_currency_sdo
GetRate.Open cmdTemp, , 0, 1
' Place all error codes in comments
if (Err.number <> 0) {
fEmptyRecordSet = true;
out.println("<!-- ADO Errors Begin -->" & "\r\n")
for (Object objError : con_currency_sdo.Errors) {
out.println("<!-- ADO Error.Number = " & objError.Number & "-->" & "\r\n")
out.println("<!-- ADO Error.Description = " & objError.Description & "-->" & "\r\n")
out.println("<!-- ADO Error.Source = " & objError.Source & "-->" & "\r\n")
out.println("<!-- ADO Error.SQLState = " & objError.SQLState & "-->" & "\r\n")
out.println("<!-- ADO Error.NativeError = " & objError.NativeError & "-->" & "\r\n")
Next <!--- Not sure what "Next" needs to be in Java --->
out.println("<!-- ADO Errors End -->" & "\r\n")
out.println("<!-- VBScript Errors Begin -->" & "\r\n")
out.println("<!-- Err.number = " & Err.number & "-->" & "\r\n")
out.println("<!-- Err.description = " & Err.description & "-->" & "\r\n")
out.println("<!-- Err.source = " & Err.source & "-->" & "\r\n")
out.println("<!-- VBScript Errors End -->" & "\r\n")
if (checkDate = true) {
%>
A: The answer to this question really comes down to:
*
*(1) Do you want to replicate this EXACTLY.. or...
*(2) Do you want to Do-The-Right-Thing (tm)
If the answer is #1, congrats. It's as simple as wrapping a try{} catch(e) around all of the code between
On Error Resume Next
and
if (Err.number <> 0) {
Err.number and Err.Description were the error code/description. You can just use the Exception components instead.
If the answer is #2, then you'll want to only try {} catch(e) the code that can exception out, and act on each exception correctly and according to what the failure was. Each method has different exceptions that it can throw. Looks like the only method is the actual ADO Open() call, so that's the only one I'd wrap in a try{} catch().
Either way, you aren't going to have all of these ADO Properties to log. I'd log whatever exception/exception text/stacktrace gets spit out as part of the exception object.
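A hedged sketch of option #1, assuming the ADO work ends up as a JDBC call (all names illustrative):
// Rough Java stand-in for "On Error Resume Next" + the Err.number check
try {
    // ... the connection/query code that sat between On Error Resume Next
    //     and the "if (Err.number <> 0)" test ...
    Connection con = DriverManager.getConnection(url, user, password);
    ResultSet rs = con.createStatement().executeQuery(sql);
} catch (SQLException e) {
    // e.getErrorCode() / e.getSQLState() roughly replace Err.number / SQLState
    out.println("<!-- Error: " + e.getMessage() + " -->");
}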
A: It's the VBScript way to handle exceptions; it tells the program to continue execution at the next line when an error occurs.
Answered here What does the "On Error Resume Next" statement do?
| |
doc_23533262
|
I have been finding difficulty in creating one particular type of table structure as shown in the Image below.
The problem is how to have the same rowspan for '1', '2' and '6'; in this case it is equal to 2.
Any kind of help would be appreciated.
edit-1 I have succeeded with basic rowspan and colspan, but I am finding difficulty with this complex table.
A: There is an example in the samples that is quite close to what you are doing: https://github.com/PHPOffice/PHPWord/blob/develop/samples/Sample_09_Tables.php
and modifying that a bit to achieve your example:
$cellRowSpan = array('vMerge' => 'restart');
$cellRowContinue = array('vMerge' => 'continue');
$cellColSpan = array('gridSpan' => 2);
$table->addRow();
$table->addCell(2000, $cellRowSpan)->addText("1");
$table->addCell(2000, $cellRowSpan)->addText("2");
$table->addCell(4000, $cellColSpan)->addText("3");
$table->addCell(2000, $cellRowSpan)->addText("6");
$table->addRow();
$table->addCell(null, $cellRowContinue);
$table->addCell(null, $cellRowContinue);
$table->addCell(2000)->addText("4");
$table->addCell(2000)->addText("5");
$table->addCell(null, $cellRowContinue);
$table->addRow();
$table->addCell(2000);
$table->addCell(2000);
$table->addCell(2000);
$table->addCell(2000);
$table->addCell(2000);
tested with the 0.13.0 version (for some reason LibreOffice didn't like the two adjacent cells with vMerge continue definitions and didn't display them as expected, but Word displayed them nicely, as expected)
| |
doc_23533263
|
<template>
<modal-window v-model="show" v-on:keyup="keyHandler($event)" @ok="submit()" @cancel="cancel()" @closed="close()" ... >
...
</modal-window>
...
</template>
<script>
...
methods: {
keyHandler (event) {
console.log(event);
}
},...
</script>
I want to handle key presses when that modal is opened, and submit the modal when Enter is pressed or close it when Esc is pressed.
I added a custom function keyHandler which is unfortunately never fired. Can you tell me how to fix the code so key presses are handled in that function? Or if there is a better way to close and submit the vue-strap modal, I will be grateful for advice. Thank you.
A: The easiest way:
<input v-on:keyup.13="whatkey()" type="text"> <br>
This listens for the Enter key (key code 13) and then fires a method called whatkey.
A: You can attach your event handler to window, that way you can receive all key events and act accordingly depending on your modal's state:
Vue.component('modal', {
template: '<div>test modal</div>',
});
new Vue({
el: "#app",
created() {
window.addEventListener('keydown', (e) => {
if (e.key == 'Escape') {
this.showModal = !this.showModal;
}
});
},
data: {
showModal: true
}
})
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.16/vue.min.js"></script>
<div id="app">
<modal v-show="showModal"></modal>
</div>
A: Alternatively, you may want to consider using the v-hotkey directive for key input in Vue (docs, github). This will keep your code relatively clean and simple if you must consider several different key inputs.
1. Install it:
npm i --save v-hotkey
2. Have Vue 'use' it:
import VueHotkey from "v-hotkey";
Vue.use(VueHotkey);
3. Apply it to your Vue components, like so:
<template>
<modal-window ... v-hotkey="keymap">
...
</modal-window>
</template>
<script>
...
data() {
return {
showModal: false
};
},
computed: {
keymap() {
return {
"esc": this.toggleModal
};
}
},
methods: {
toggleModal() {
this.showModal = !this.showModal;
}
}
</script>
| |
doc_23533264
|
http://www.stat.osu.edu/computer-support/programming/background-jobs
I am performing the loop:
for ((i = 1; i <= 5; i++)); do
echo $i>i.txt;
matlab -nodesktop -nodisplay <script.m &> dummy.out &
done
in the script there is a part :
fid=fopen( 'a:\folder\i.txt');'];
iter=str2double(fgets(fid))
myfunction(iter,a,b,c)
the function line at myfunction.m is
myfunction(num,a,b,c)
this function saves a file with a name that is also changing according to the value of 'num'
meaning, the output will be: myfile1.mat for the 1st command, myfile2 for the 2nd command etc.
when I'm running the commands without the loop
echo 1>i.txt;
matlab -nodesktop -nodisplay <script.m &> dummy.out &
echo 2>i.txt;
matlab -nodesktop -nodisplay <script.m &> dummy.out &
etc...
there are no problems and the output is good
when I'm running the loop the only file i'm getting is myfile5.mat
I've changed the code so that the input will be myfunction(i1,a,b,c), myfunction(i2,a,b,c)... but the results remain the same.
I think that because the saving part is at the end of the function (which runs for a long time), somehow for all the functions 'num' is 5 (the loop finishes much faster than the calculations).
Any ideas? Thanks
A: The thing is that you run all the jobs against the same file. Since they run in background mode, the file first contains "1", then "2", "3", "4", "5", and only after that does the first script start evaluating (and it already sees "5" in the file, not "1").
You are now trying to pass a parameter to a function through file, right? I just wonder, why not passing the parameter to the function itself? Running a number of functions in Matlab in parallel (in background mode) described here, for example: http://www.mathworks.ch/matlabcentral/newsreader/view_thread/166876
A: The problem is the ampersand (&) sign after the MATLAB call - what is happening is that the loop starts running, puts the value 1 into i.txt, then forks off a MATLAB process, then the loop runs again, puts the value 2 into i.txt, then forks off another MATLAB process, and so on. Now MATLAB takes a while to start, and this loop is really quick since it's not waiting around for the MATLAB call to finish, so by the time the first MATLAB instance finally finishes, the loop is long finished and the value in i.txt is 5 for all the calls.
Short version: Remove the & sign :)
That will make MATLAB run and finish before continuing with the loop.
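In other words, the loop stays the same, just without the trailing ampersand, so each MATLAB run blocks until it finishes:
for ((i = 1; i <= 5; i++)); do
    echo $i>i.txt
    matlab -nodesktop -nodisplay <script.m &> dummy.out
done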
A: I found a solution to the problem,
it is quite simple, all I had to do is export the variables to the environment and then read them in the MATLAB script
for ((i = 1; i <= 5; i++)); do
export i
matlab -nodesktop -nodisplay <script.m &> dummy.out &
done
and in the script.m
iter=str2double(getenv('i'))
myfunction(iter,a,b,c)
works good!
| |
doc_23533265
|
DBSnapshotIdentifier : rds:xyz-new-2015-08-17-03-43
DBSnapshotIdentifier : xyz-new-2015-08-17-04-43
DBSnapshotIdentifier : rds:abc-2015-08-17-03-43
My code:
foreach($line in $lines)
{
$del = $line -split ':' | Select-Object -Last 1
}
I am trying to read the entire string (left of :), it's reading the entire string in 2nd content. But it fails for 1 and 3. For 1 and 3, the read value is xyz-new-2015-08-17-03-43 and abc-2015-08-17-03-43 respectively (note that rds:) is also omitted.
So, I want to get the entire string after first :, even if multiple :'s are found in that string. I tried -Unique with that, but no luck. Also tried -Last 2, but it is coming as 2 single words. I need the exact string rds:blah_blah_blah
Can someone please help me with this?
A: why not?
"DBSnapshotIdentifier : rds:xyz-new-2015-08-17-03-43" -split "DBSnapshotIdentifier : "
A: You could try changing the line
$del = $line -split ':' | Select-Object -Last 1
to
$del = ($line -split ':' | Select -Skip 1) -Join ':'
This will effectively skip the first string in the array after the split. We then use the -join operator to rejoin the remaining parts.
Even though it hasn't been done above, I suggest you trim the final string.
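Putting the skip, the re-join and the suggested trim together, that line could look like this:
$del = (($line -split ':' | Select-Object -Skip 1) -join ':').Trim()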
You could also achieve this by changing the line to
$del = $line -split ':',2 | Select-Object -Last 1
| |
doc_23533266
|
A: A UITextView does not magically size itself to its text. A UILabel does! If you can use a label instead, use it. Otherwise, it is up to you to keep resizing the UITextView so that its height (or, if you're using self-sizing cells based on internal constraints, its height constraint) matches the height of its contentSize.
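If you do stay with a UITextView, a minimal sketch of that resizing step might look like this (Swift; the text view and the height constraint are assumed outlets, and the names are placeholders):
import UIKit

// Ask the text view how tall it wants to be for its current text, then pin
// that height via a constraint so self-sizing cells can pick it up.
func updateTextViewHeight(_ textView: UITextView, heightConstraint: NSLayoutConstraint) {
    let fitting = CGSize(width: textView.bounds.width,
                         height: CGFloat.greatestFiniteMagnitude)
    heightConstraint.constant = textView.sizeThatFits(fitting).height
}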
| |
doc_23533267
|
Traceback (most recent call last):
File "/Users/alexanderniewiarowski/anaconda3/envs/fenics2018/lib/python3.6/site-packages/qtpy/QtWebEngineWidgets.py", line 22, in
from PyQt5.QtWebEngineWidgets import QWebEnginePage
ImportError: dlopen(/Users/alexanderniewiarowski/anaconda3/envs/fenics2018/lib/python3.6/site-packages/PyQt5/QtWebEngineWidgets.so, 2): Library not loaded: @rpath/libQt5WebEngineWidgets.5.dylib
Referenced from: /Users/alexanderniewiarowski/anaconda3/envs/fenics2018/lib/python3.6/site-packages/PyQt5/QtWebEngineWidgets.so
Reason: image not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/alexanderniewiarowski/anaconda3/envs/fenics2018/bin/spyder", line 11, in
sys.exit(main())
File "/Users/alexanderniewiarowski/anaconda3/envs/fenics2018/lib/python3.6/site-packages/spyder/app/start.py", line 179, in main
from spyder.app import mainwindow
File "/Users/alexanderniewiarowski/anaconda3/envs/fenics2018/lib/python3.6/site-packages/spyder/app/mainwindow.py", line 92, in
from qtpy import QtWebEngineWidgets # analysis:ignore
File "/Users/alexanderniewiarowski/anaconda3/envs/fenics2018/lib/python3.6/site-packages/qtpy/QtWebEngineWidgets.py", line 26, in
from PyQt5.QtWebKitWidgets import QWebPage as QWebEnginePage
ModuleNotFoundError: No module named 'PyQt5.QtWebKitWidgets'
Similar questions have been asked here, but the error is different. This is my config. I have the most recent version of PyQt.
active environment : fenics2018
active env location : /Users/alexanderniewiarowski/anaconda3/envs/fenics2018
shell level : 1
user config file : /Users/alexanderniewiarowski/.condarc
populated config files : /Users/alexanderniewiarowski/.condarc
conda version : 4.5.9
conda-build version : 3.4.1
python version : 3.6.4.final.0
base environment : /Users/alexanderniewiarowski/anaconda3 (writable)
channel URLs : https://conda.anaconda.org/conda-forge/osx-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/osx-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/pro/osx-64
https://repo.anaconda.com/pkgs/pro/noarch
package cache : /Users/alexanderniewiarowski/anaconda3/pkgs
/Users/alexanderniewiarowski/.conda/pkgs
envs directories : /Users/alexanderniewiarowski/anaconda3/envs
/Users/alexanderniewiarowski/.conda/envs
platform : osx-64
user-agent : conda/4.5.9 requests/2.18.4 CPython/3.6.4 Darwin/17.7.0 OSX/10.13.6
UID:GID : 501:20
netrc file : None
offline mode : False
A: Not sure what was the source of the problem but spyder --reset fixed it.
| |
doc_23533268
|
I have added a AutoCompleteExtender to my webpage, and have implemented the webservice method for it to call. So far, so good.
Using Fiddler, I have checked that, when debugging, the webservice method is being called and is returning the results I'd expect to see.. but nothing gets rendered to the screen, there is no drop down?
Can anyone here suggest what I might have done wrong, or offer a suggestion for something to try as I am currently stumped:
Declaration of the AutoCompleteExtender in the webpage:
<cc1:AutoCompleteExtender runat="server" ID="lookupAgencyAppSettingName"
TargetControlID="txtAgencyAppSettingName" ServiceMethod="GetListOfSettings"
ServicePath="~/Authenticated/AJAXMethods.asmx" MinimumPrefixLength="1"
CompletionInterval="500" EnableCaching="true" />
For completeness, here is the Webservice Method:
[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod]
public string[] GetListOfSettings(string prefixText, int count)
{
string[] suggestedSettings = new string[0];
List<string> settingNames = new List<string>();
List<AgencyApplicationClientSetting> settings = AgencyApplicationClientSetting.All().ToList<AgencyApplicationClientSetting>();
foreach(AgencyApplicationClientSetting setting in settings)
{
if((setting.SettingName.ToLower().StartsWith(prefixText.ToLower())) && (!settingNames.Contains(setting.SettingName)))
{
settingNames.Add(setting.SettingName);
}
}
if(settingNames.Count > 0)
{
suggestedSettings = settingNames.ToArray();
}
return suggestedSettings;
}
A: Okay, in the end this turned out (somewhat annoyingly) to be a z-index issue.
I'm not sure what z-index a dialog that is displayed using the AjaxControlToolkit's ModalPopupExtender is given by default (the highest z-index I could see on the page was 10001).. but somewhere behind the scenes it was being given an attribute that meant that my popup suggestions (from the AutoCompleteExtender) always rendered behind the dialog (couldn't see this however until I returned enough results to get the suggestions to appear from beneath the dialog).
In the end I had to override the z-index of the panel used as a dialog by the ModalPopupExtender AND the CompletionListCssClass of the AutoCompleteExtender thus:
.popUpDialog
{
z-index: 99 !important;
}
.autoComplete_listMain
{
z-index: 2147483647 !important;
background: #ffffff;
border: solid 2px #808080;
color: #000000;
}
Anyway, annoyingly simple in the end, but thought I'd share just in case anyone else runs into a similar problem!
| |
doc_23533269
|
My security group that is associated with my instance has:
ICMP Allow All
TCP 0-65535
TCP 22 (SSH)
TCP 80 (HTTP)
TCP 443 (HTTPS)
UDP 0-65535
I am running a Bitnami-Wordpress 3.2.1-0 Ubuntu AMI
My Question is: How do I host a simple file on my instance?
UPDATE: so I was able to login using SFTP by simply filling in my instance Public DNS as my host, and the PuTTY Gen key as the private key, the username I had to use was Bitnami. So now I have access to the server, how or where do I put a file so that it will come out www.mywebsite.com/myfile.file???
I am assuming that I need to SSH into the server using PuTTY, and add it into the www directory?
What I have tried:
I tried logging in using WinSCP with host name being my instance's Public DNS, and my private key file the converted PuTTY Gen file that was originally the key pair for the instance.
*
*Using SFTP, pressing login it asks me for a user name, entering "user" or "ec2-user" I get an error saying:
"disconnected, no supported authentication methods available (server sent: public key), Server >refused our key. Authentication failed.
Using root for the username, it asks for a passphrase that I created for my keypair using PuTTY Gen, It accepts it, but then I get this error:
"Received too large (1349281121 B) SFTP packet. Max supported packet size is 1024000 B. The error >is typically caused by message printed from startup script (like .profile). The message may start >with ""Plea"". Cannot initialize SFTP protocol. Is the host running a SFTP server?
If in WinSCP I put the username as "user" and the password as "bitnami" (before I press login) (default wordpress password for bitnami AMI) it gives me this error:
Disconnected: No supported authentication methods available (server sent: publickey). Authentication log (see session log for details):Using username: "user". Server refused ourkey. Authentication failed.
*
*I get the same errors using SCP instead of SFTP in WinSCP except when I use SCP and I press login, and I use username "root" it asks me for my passphrase, after entering that I get this error:
Connection has been unexpectedly closed. Server sent command exit status 0. Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended).
A: If you are already able to connect using SFTP, now you just need to copy the file. Where you need to copy it depends on what you are trying to do.
BitNami Wordpress AMI has the following directory structure (I only include the relevant directories for this question):
/opt/bitnami
|
|-- apache2/htdocs
|-- apps/wordpress/htdocs
You mentioned that you want www.mywebsite.com/myfile.file. If you didn't modify the default Apache configuration you will need to copy the file into /opt/bitnami/apache2/htdocs (this is the DocumentRoot for the BitNami WordPress AMI).
If you want that file to be accessed from www.mywebsite.com/wordpress/myfile.file, then you need to copy it into /opt/bitnami/apps/wordpress/htdocs.
If what you are trying to do is to manually install a theme or plugin you can follow the WordPress documentation taking into account that the wordpress installation directory is /opt/bitnami/apps/wordpress/htdocs.
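Once you know the target directory, uploading through SFTP/WinSCP is just a normal copy into it. From a command line the equivalent would be something like the following (the key file and hostname are placeholders for your own values):
scp -i mykey.pem myfile.file bitnami@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/opt/bitnami/apache2/htdocs/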
Also, you can find below some links to the BitNami Wiki explaining how to connect to the AMIs. I just include them as a reference for other users that find the same connection issues.
Further reading:
*
*How to connect to your amazon instance
*How to upload files from Windows
A: Also, if you want to remove wordpress from the URL, you can use the following instructions I posted on my blog (travisnelson.net):
$ sudo chmod 777 /opt/bitnami/apache2/conf/httpd.conf
$ vi /opt/bitnami/apache2/conf/httpd.conf
changed DocumentRoot to be: DocumentRoot “/opt/bitnami/apps/wordpress/htdocs”
$ sudo chmod 544 /opt/bitnami/apache2/conf/httpd.conf
$ sudo apachectl -k restart
Then in WordPress, change the Site address (URL) in General Settings to not have /wordpress.
Hope this helps
A: I had a similar problem recently. Having setup Bitnami Wordpress on AmazonAWS I was unable to modify, add, or remove themes from within the Wordpress admin interface even though all of my permissions were setup appropriately according to Wordpress recommended settings. However, I did not want to have to resort to turning FTP access on.
I was able to resolve the issue by:
*
*Setting the file access method for Bitnami Wordpress to 'direct'.
*Changing all users to Apache Bitnami.
*Adding Bitnami to Apache group and Apache to Bitnami group.
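For the first step, the usual way to force the 'direct' access method is a constant in wp-config.php; as a sketch (assuming the default BitNami path):
// in /opt/bitnami/apps/wordpress/htdocs/wp-config.php
define('FS_METHOD', 'direct');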
| |
doc_23533270
| ||
doc_23533271
|
*
*Every single unit test should be self-contained and not depend on others.
*Don't repeat yourself.
To be more concrete, I've got an importer which I want to test. The Importer has a "Import" function, taking raw data (e.g. out of a CSV) and returning an object of a certain kind which also will be stored into a database through ORM (LinqToSQL in this case).
Now I want to test several things, e.g. that the returned object returned is not null, that it's mandatory fields are not null or empty and that it's attributes got the correct values. I wrote 3 unit tests for this. Should each test import and get the job or does this belong into a general setup-logic? On the other hand, believing this blog post, the latter would be a bad idea as far as my understanding goes. Also, wouldn't this violate the self-containment?
My class looks like this:
[TestFixture]
public class ImportJob
{
private TransactionScope scope;
private CsvImporter csvImporter;
private readonly string[] row = { "" };
public ImportJob()
{
CsvReader reader = new CsvReader(new StreamReader(
@"C:\SomePath\unit_test.csv", Encoding.Default),
false, ';');
reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;
int fieldCount = reader.FieldCount;
row = new string[fieldCount];
reader.ReadNextRecord();
reader.CopyCurrentRecordTo(row);
}
[SetUp]
public void SetUp()
{
scope = new TransactionScope();
csvImporter = new CsvImporter();
}
[TearDown]
public void TearDown()
{
scope.Dispose();
}
[Test]
public void ImportJob_IsNotNull()
{
Job j = csvImporter.ImportJob(row);
Assert.IsNotNull(j);
}
[Test]
public void ImportJob_MandatoryFields_AreNotNull()
{
Job j = csvImporter.ImportJob(row);
Assert.IsNotNull(j.Customer);
Assert.IsNotNull(j.DateCreated);
Assert.IsNotNull(j.OrderNo);
}
[Test]
public void ImportJob_MandatoryFields_AreValid()
{
Job j = csvImporter.ImportJob(row);
Customer c = csvImporter.GetCustomer("01-01234567");
Assert.AreEqual(j.Customer, c);
Assert.That(j.DateCreated.Date == DateTime.Now.Date);
Assert.That(j.OrderNo == row[(int)Csv.RechNmrPruef]);
}
// etc. ...
}
As can be seen, I'm doing the line Job j = csvImporter.ImportJob(row);
in every unit test, as they should be self-contained. But this does violate the DRY principle and may possibly cause performance issues some day.
What's the best practice in this case?
A: Your test classes are no different from usual classes, and should be treated as such: all good practices (DRY, code reuse, etc.) should apply there as well.
A: That depends on how much of your scenario is common to your tests. In the blog post you referred to, the main complaint was that the SetUp method did different setup for the three tests, and that can't be considered best practice. In your case you've got the same setup for each test/scenario, so you should use a shared SetUp instead of duplicating the code in each test. If you later find that there are more tests that do not share this setup or require a different setup shared between a set of tests, then refactor those tests into a new test case class. You could also have shared setup methods that are not marked with [SetUp] but get called at the beginning of each test that needs them:
[Test]
public void SomeTest()
{
setupSomeSharedState();
...
}
A way of finding the right mix could be to start off without a SetUp method and when you find that you're duplicating code for test setup then refactor to a shared method.
A: You could put the
Job j = csvImporter.ImportJob(row);
in your setup. That way you're not repeating code.
You actually should run that line of code for each and every test. Otherwise tests will start failing because of things that happened in other tests. This will become hard to maintain.
The performance problem isn't caused by DRY violations. You actually should set up everything for each and every test. These aren't unit tests, they're integration tests: you rely on external files to run the test. You could make ImportJob read from a stream instead of it directly opening a file. Then you could test with a MemoryStream.
A: Whether you move
Job j = csvImporter.ImportJob(row);
into the SetUp function or not, it will still be executed before every test is executed. If you have the exact same line at the top of each test, well then it is just logical that you move that line into the SetUp portion.
The blog entry that you posted complained about the setup of the test values being done in a function disconnected (possibly not on the same screen as) from the test itself -- but your case is different, in that the test data is being driven by an external text file, so that complaint doesn't match up with your specific use case either.
A: In one of my projects we agreed with team that we will not implement any initialization logic in unit tests constructors. We have Setup, TestFixtureSetup, SetupFixture (since version 2.4 of NUnit) attributes. They are enough for almost all cases when we need initialization. We force developers to use one of these attributes and to explicitly define whether we will run this initialization code before each test, before all tests in a fixture or before all tests in a namespace.
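As an illustration of that split (NUnit 2.x-era attribute names; the class and method names are made up for the example):
[TestFixture]
public class ImportJobTests
{
    private CsvImporter csvImporter;

    [TestFixtureSetUp]   // runs once, before all tests in this fixture
    public void FixtureSetUp() { /* expensive, shared initialization */ }

    [SetUp]              // runs before every single test
    public void SetUp() { csvImporter = new CsvImporter(); }
}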
However, I will disagree that unit tests should always conform to all the good practices expected of usual development. It is desirable, but it is not a rule. My point is that in real life the customer doesn't pay for unit tests. The customer pays for the overall quality and functionality of the product. He is not interested in knowing whether you provide him a bug-free product by covering 100% of the code with unit tests/automated GUI tests or by employing 3 manual testers per developer who will click on every piece of the screen after each build.
Unit tests don't add business value to the product, they allow you to save on development and testing efforts and force developers to write better code. So it is always up to you - will you spend additional time on UT refactoring to make unit tests perfect? Or will you spend the same amount of time to add new features for the customers of your product? Do not also forget that unit-tests should be as simple as possible. How to find a golden section?
I suppose this depends on the project, and the PM or team lead needs to plan and estimate the quality of unit tests, their completeness and code coverage, just as they estimate all other business features of your product. My opinion is that it is better to have copy-paste unit tests that cover 80% of production code than to have very well designed and separated unit tests that cover only 20%.
| |
doc_23533272
|
OS running:(http://rcn-ee.net/deb/rootfs/precise/ubuntu-12.04-r4-minimal-armhf-2012-07-16.tar.xz)
A: The best way to do this is to use a function that gives you more control over the resulting process than system() does. However, this would be platform specific.
*
*For Windows, use CreateProcess() which returns a HANDLE which you can use later in TerminateProcess() to kill the process.
*For Unix, use fork() and exec() which gives you the pid of the child process, which you can use later in kill() to kill the process.
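For the Unix case, a minimal sketch of that pattern could look like this (the child command "sleep 60" is just a placeholder for the program you actually launch):
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                               /* child process */
        execlp("sleep", "sleep", "60", (char *)NULL);
        _exit(127);                               /* only reached if exec fails */
    } else if (pid > 0) {                         /* parent process */
        sleep(5);                                 /* ... do other work ... */
        kill(pid, SIGTERM);                       /* terminate the child */
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}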
| |
doc_23533273
|
I have many Mysql requests to do in Node, and it's done asynchronously.
In the following example, I would like to wait for the checkExists function to finish one way or another (and populate my input variable) before the function doStuffWithInput starts. I don't see any other way than pasting doStuffWithInput multiple times in the various possible callbacks (after each 'input=keys;') ... I'm sure there is a better way though. Any ideas?
var input;
db.checkExists_weekParents(id,function(count){ //check table existence/number of rows
if(count!==='err'){ //if doesnt exist, create table
db.create_weekParents(id,function(info){
if(info!=='err'){ //as table is empty, create input from a full dataset
db.makeFull_weekParents(id,function(keys){
input = keys;
});
}
});
}else{ //if exists, check number of entries and create input keys as a subset of the full dataset
db.makeDiff_weekParents(id,function(keys){
if(keys.length!==0){
input = keys;
}else{ //if the table already has full dataset, we need to export and start again.
db.export_weekParents(id,function(info){
db.create_weekParents(id,function(info){
if(info!=='err'){
db.makeFull_weekParents(id,function(keys){
input = keys;
});
}
});
});
}
});
}
});
Once all this is done, we have lots of stuff to do (spawn child processes, more db operations, etc...)
doStuffWithInput(input,function(output){
//Tons of stuff here
console.log(output);
})
I really hope this is clear enough, I'll clarify if needed.
EDIT
Trying to rewrite using promises seems the best way to go, and I imagine it can be a great example for others like me struggling with pyramid of doom.
So far I have :
var Q = require('q');
function getInput(){
var dfd = Q.defer();
db.check_weekParents(id,function(count){
console.log('count '+count);
if(count==='err'){
db.create_weekParents(id,function(info){
if(info!=='err'){
console.log('created table');
db.makeDiff_weekParents(id,function(keys){
input = keys;
dfd.resolve(input);
});
}
});
}else{
db.makeDiff_weekParents(id,function(keys){
input=keys;
dfd.resolve(input);
});
}
});
return dfd.promise;
}
getInput().then(function (input) {
console.log(input);
});
It is magic!!
A: You can use promises rather than callbacks. There are many possibilities in node, and the mysql library you are using may even support them. For example with Q:
function getInput(){
var dfd = Q.defer();
if(count!==='err'){
db.create_weekParents(id,function(info){
/* after everything completes */
dfd.resolve(input);
/* snip */
return dfd.promise;
}
Then you can do
getInput().then(function (input) {
doStuffWithInput(input ...
});
A: You should look into using the async library.
For your case you may want to look at using the waterfall pattern. The functions will be executed in series, with the result of each being passed as input to the next. From here, you can check the results of previous functions, etc.
You are also able to combine the different control flow structures in any way you want. (ie, parallel operations at one stage of a waterfall flow)
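As a rough sketch of the waterfall idea applied to the question's flow (error handling simplified, and the db callbacks assumed to keep their single-result signature):
var async = require('async');

async.waterfall([
    function (cb) {
        db.checkExists_weekParents(id, function (count) { cb(null, count); });
    },
    function (count, cb) {
        if (count === 'err') {
            db.create_weekParents(id, function () {
                db.makeFull_weekParents(id, function (keys) { cb(null, keys); });
            });
        } else {
            db.makeDiff_weekParents(id, function (keys) { cb(null, keys); });
        }
    }
], function (err, input) {
    doStuffWithInput(input, function (output) {
        console.log(output);
    });
});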
| |
doc_23533274
|
import os
import dbf
x1 = '/home/beata/Documents/Bias_coorection/Power_pr/CNRM_pr_power1965'
x2 = '/home/beata/Documents/Bias_coorection/Power_pr/CNRM_pr_power1966'
x3 = '/home/beata/Documents/Bias_coorection/Power_pr/CNRM_pr_power1967'
x = [x1, x2, x3]
def conv(x):
file_name = os.path.basename(x)
dbf_file_name = file_name + ".dbf"
return dbf_file_name
def dbf_from_csv(x):
dbf.from_csv(x, conv(x))
from multiprocessing import Pool
p = Pool(2)
p.map(dbf_from_csv, x)
p.close()
p.join()
When I run this script I do not get the prompt back in the console after the run is finished. When I type top in the terminal it seems python is stuck. Could someone suggest a solution for how I can tell whether the conversions are ready? Thank you for your help in advance!
A: If you use the IPython console you get something which looks like this:
In [2]:
in the console while if you use a Python console you will get
">>>"
after the script has been run. Otherwise you can of course just add a line like
print "FINISHED"
as the last line of your code
| |
doc_23533275
|
Why does the actorOf function NOT require a function input that has a Actor<_> as a parameter?
It appears that the actorOf2 function DOES require a Actor<_> parameter.
The context of how these functions are called are the following:
let consoleWriterActor = spawn myActorSystem "consoleWriterActor" (actorOf Actors.consoleWriterActor)
let consoleReaderActor = spawn myActorSystem "consoleReaderActor" (actorOf2 (Actors.consoleReaderActor consoleWriterActor))
let consoleReaderActor (consoleWriter: IActorRef) (mailbox: Actor<_>) message =
...
let consoleWriterActor message =
...
The signature of actorOf is the following:
('Message -> unit) -> Actor<'Message> -> Cont<'Message,'Returned>
The signature of actorOf2 is the following:
(Actor<'Message> -> 'Message -> unit) -> Actor<'Message> -> Cont<'Message,'Returned>
Conclusion:
I am new to Akka.net.
Thus, I don't understand why the "Actor<_>" parameter (which I believe represents a mailbox) would not be useful for the actorOf function.
A: actorOf2 function takes an Actor<_> parameter, which represents an actor execution context (from F# MailboxProcessor it's often called mailbox). It allows for things such as changing actor lifecycle, creating child actors or communicating with message sender.
However, sometimes you may want to create an actor that is designed to work as a simple sink for your data, i.e. processing the messages and pushing the result into some external service. This is where actorOf may be useful.
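A minimal sketch of such a sink actor (made-up example, using the same spawn/actorOf pattern as in the question):
// The handler only needs the message itself, so actorOf is enough:
let printerActor (message: string) =
    printfn "sink received: %s" message

let printer = spawn myActorSystem "printerActor" (actorOf printerActor)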
| |
doc_23533276
|
This is the input:
<View style={styles.inputContainer}>
<TextInput style={styles.input} placeholder="Email"></TextInput>
</View>
This is the style:
input: {
height: 50,
borderBottomColor: 'gray',
borderBottomWidth: 1,
marginVertical: 10,
}
This is how it looks:
https://i.stack.imgur.com/FWmYe.jpg
A: Please remove the height: 50 from the styles: input or keep the space as you want.
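For illustration, the style without the fixed height would simply be:
input: {
    borderBottomColor: 'gray',
    borderBottomWidth: 1,
    marginVertical: 10,
}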
| |
doc_23533277
|
Here is the first line of the game that we are doing.
SET /p quest1= 1. What was the Six-Million-Dollar Man's first name?
A: set /p a=what is 2 + 2:
if "%a%" NEQ "4" (echo sorry, the answer is 4)
A: @echo off
set /p quest1=" 1. What was the Six-Million-Dollar man's first name? "
echo %quest1%|find /i "Steve">nul
if "%errorlevel%"=="0" (echo Right answer!) else (echo Incorrect answer, was Steve Austin.)
pause>nul
| |
doc_23533278
|
Scenario
One device is advertising, all others (8 devices in total) are discovering. On successful connection, I am sending simple Files of 20B, 200B and 33KB sizes for 30 secs straight to each connected device.
I am using android Samsung S6 SM-G920F devices with android version: 6.0.1 and playservices version 12.8.74
I have following issues/questions
Q1: First of all, at most 3 to 4 devices can be connected simultaneously; more than this results in a disconnect event for the other devices. Even if only 3 devices are connected and I am continuously sending messages for 30 seconds, one of them disconnects. In simpler words, I cannot sustain connectivity with any device for more than 45 secs; usually the disconnection occurs between 25 - 45 secs.
Q2: I cannot send message/file continuously for 30 seconds like we can do with the Wifi like this
While(30sec){
bluetoothSocket.outputStream.write(bytes)
}
Because if I try to do this then I get the "too much work" exception. I have to wait for the callback in onPayloadTransferUpdate().
Q3: If I try to send a file of 1MB or more to other peers, the peer receives the file successfully in the onPayloadReceived callback, but the server/sender receives the success status only after too much delay. In my case it's between 1 min and 5 mins after the client callback. And I cannot send a new file until I get the success callback on the server. If I try to send it before getting the callback, nothing happens. Literally nothing. So in essence I can only send a file of 1MB once, then I have to restart both devices to send another file.
A: This should be broken up into 3 separate questions. It helps future developers search easier. So if you get the time to do that, let me know and I'll split up my answer as well. But anyway, let's get into it!
A1: Nearby Connections has 3 separate strategies. The more limited the strategy, the more types of mediums we can use. So with that in mind, and with no router involved, P2P_CLUSTER will only use Bluetooth. It's the most general strategy, so it has the fewest mediums available.
All Android devices use mobile Bluetooth chips, which are unfortunately weak (but small and power sensitive), and that causes them to have a theoretical 7 device limit but a practical 3~4 device limit. To make things worse, that limit is eaten up by smart watches and paired headphones as well. That's why you're running into problems.
P2P_STAR and P2P_POINT_TO_POINT are both much more limited, because you can't connect in any direction. You need to choose who the host is beforehand and have everyone scan for and connect to that host. But you get the added benefit of WiFi hotspots, which have higher bandwidth and a larger number of simultaneous devices supported. I've seen 7 devices happily connected to a Lollipop device.
If you want to go beyond that, into the 10s and 100s, and a router isn't available, you'll have to build a mesh network. I can link you to examples of how to do that if you're interested. We don't offer support for that within Connections, but others have built meshes on top of us so we can point you in the right direction.
A2: Can you include a stack trace of the error you're seeing? Payload.Type.STREAM was built for continuously sending data. The other payload types should also work, barring some rare but potential issues like BYTE payloads filling up the phone's RAM.
A3: Both devices need to wait for onPayloadTransferUpdate(SUCCESS). onPayloadReceived is only a header, and means that there's an incoming file or stream but that the data hasn't been received yet. For byte payloads, we actually send the full byte payload inside the header so that's the only time data is immediately available.
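As a sketch of what that looks like on the sending side (sendNextFile is a placeholder for your own logic):
import com.google.android.gms.nearby.connection.Payload;
import com.google.android.gms.nearby.connection.PayloadCallback;
import com.google.android.gms.nearby.connection.PayloadTransferUpdate;

PayloadCallback callback = new PayloadCallback() {
    @Override
    public void onPayloadReceived(String endpointId, Payload payload) {
        // header only: for FILE/STREAM payloads the bytes may still be in flight
    }

    @Override
    public void onPayloadTransferUpdate(String endpointId, PayloadTransferUpdate update) {
        if (update.getStatus() == PayloadTransferUpdate.Status.SUCCESS) {
            sendNextFile(endpointId);   // only now is it safe to queue the next payload
        }
    }
};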
| |
doc_23533279
|
AND ((@includeExpired = 0 AND lic.[DateExpiresUtc] > GETDATE())
OR
(@includeExpired = 1 AND lic.[DateExpiresUtc] <> GETDATE())
)
Which just looks ugly, I've tried just including a simpler version of the statement like:
(@includeExpired = 0 AND lic.[DateExpiresUtc] > GETDATE())
But when @includeExpired is 1 it all fails to select anything.
Is there a better way of doing this?
A: According to your variable name @includeExpired, which implies that this variable decides whether expired records are additionally included in the result set, the check in the second part, lic.[DateExpiresUtc] <> GETDATE(), is not necessary, and neither is the @includeExpired = 0 check in the first part.
Try this:
AND (lic.[DateExpiresUtc] > GETDATE() OR @includeExpired = 1)
A: Just one idea.
(CASE
WHEN @includeExpired = 1 THEN 1
WHEN @includeExpired = 0 AND lic.[DateExpiresUtc] > GETUTCDATE() THEN 1
ELSE 0 END) = 1
| |
doc_23533280
|
*
*terraform v0.10.7
*google cloud platform
*various .tf files for creating backend, variables etc
issue:
I am able to create multiple VM instances and also multiple additional disks (boot_disk is working fine on each instance), but I want to be able to attach those additional disks to each VM accordingly, without having to have individual adds for each VM (if that makes sense!).
the code I have so far is (which works ok for building multiple compute instances and also multiple additional disks): note (I have commented out the attached_disk which errors atm)
# vm1.tf
variable "node_count" {
default = "3"
}
resource "google_compute_disk" "test-node-1-index-disk-" {
count = "${var.node_count}"
name = "test-node-1-index-disk-${count.index}-data"
type = "pd-standard"
zone = "${var.zone}"
size = "5"
}
resource "google_compute_instance" "test-node-" {
count = "${var.node_count}"
name = "test-node-${count.index}"
machine_type = "${var.machine_type}"
zone = "${var.zone}"
boot_disk {
initialize_params {
image = "${var.image}"
}
}
# attached_disk {
# source = "${google_compute_disk.test-node-1-index-disk-0}"
# device_name = "${google_compute_disk.test-node-1-index-disk-0}"
# }
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
If i do individual .tf the attached_disk works no problem.
My desired end state is to be able to build multiple VMs and multiple additional disks using count, and to attach/assign each added disk to each VM instance with a 1:1 relationship, preferably within a single .tf and block...
I guess, I could look to apply a post gcloud compute command to attach (knowing the expected naming convention) but i'd like it to be more dynamic and done at point of creation.
Am I approaching this wrong?
Any help/pointers greatly appreciated!
Thx
Bry
A: # vm1.tf
variable "node_count" {
default = "3"
}
resource "google_compute_disk" "test-node-1-index-disk-" {
count = "${var.node_count}"
name = "test-node-1-index-disk-${count.index}-data"
type = "pd-standard"
zone = "${var.zone}"
size = "5"
}
resource "google_compute_instance" "test-node-" {
count = "${var.node_count}"
name = "test-node-${count.index}"
machine_type = "${var.machine_type}"
zone = "${var.zone}"
boot_disk {
initialize_params {
image = "${var.image}"
}
}
attached_disk {
source = "${element(google_compute_disk.test-node-1-index-disk-.*.self_link, count.index)}"
device_name = "${element(google_compute_disk.test-node-1-index-disk-.*.name, count.index)}"
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
A: If you want a static IP:
variable "node_count" {
default = "3"
}
resource "google_compute_address" "static-ip-address" {
count = "${var.node_count}"
name = "${var.tag}-static-ip-${count.index + 1}"
}
resource "google_compute_disk" "test-node-1-index-disk-" {
count = "${var.node_count}"
name = "test-node-1-index-disk-${count.index}-data"
type = "pd-standard"
zone = "${var.zone}"
size = "5"
}
resource "google_compute_instance" "test-node-" {
count = "${var.node_count}"
name = "test-node-${count.index}"
machine_type = "${var.machine_type}"
zone = "${var.zone}"
boot_disk {
initialize_params {
image = "${var.image}"
}
}
attached_disk {
source = "${element(google_compute_disk.test-node-1-index-disk-.*.self_link, count.index)}"
device_name = "${element(google_compute_disk.test-node-1-index-disk-.*.name, count.index)}"
}
network_interface {
network = "default"
access_config {
nat_ip = "${element(google_compute_address.static-ip-address.*.address, count.index)}"
}
}
}
| |
doc_23533281
| ||
doc_23533282
|
Every time I update a plugin such as GitHub and restart Jenkins, Jenkins starts crashing. When I try to access Jenkins through the URL in my browser, an error page shows up with an exception "class not found jenkins/model jenkins", and then in order to make Jenkins work again, I have to delete the plugin entirely from the plugins directory. I hope someone has an answer for this because I really need to make my GitHub projects available to Jenkins for builds. Thanks
A: It is a very common problem with Jenkins. My workaround was to either update Jenkins (wherever it is installed) or restart the hudson service.
Sometimes even just restarting the service works.
Often you might need to remove those plugins from within /var/lib/hudson/plugins/ and restart the service.
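For example (paths and service name depend on your install; jenkins instead of hudson on newer setups):
sudo rm /var/lib/hudson/plugins/github.jpi    # or the offending .hpi/.jpi file
sudo service hudson restart                   # or: sudo service jenkins restart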
| |
doc_23533283
|
I'm trying to understand one thing in Oracle database Roles and privileges:
So, I am trying to create two roles: Programmer and Manager.
The idea for Programmer role users is to create and insert into tables.
The idea for Manager role users is to have access to the Programmer role privileges, PLUS update records.
And I thought that if I granted the Programmer role to the Manager role, the latter could:
*
*Create a table (from programmer role);
*Insert into a table (from programmer role);
*Update a record in the table (privilege set to the Manager role);
But through SQL Developer, I grant the Programmer role to the Manager role, and when I connect to the database using a Manager user, I can't find tables created on SYSTEM.A (for example).
Do i need to grant explicity on Manager role also can create and insert? If so, what's the point of the inheritance?
SOLUTION
Programmer role: Insert && create table privileges;
Manager role: Update && Select any table privileges;
Since my goal was to put Manager users inherit Programmer role privileges, this can be achieved like this:
(After setting the roles and privileges):
*
*DBA > ROLES > EDIT MANAGER ROLE > GRANTED ROLES > SELECT PROGRAMMER ROLE.
*Disconnect any manager session, and re-connect.
Open a Manager SQL sheet and try to create a table and select, insert and update it. You'll see that Manager has adopted privileges from the "programmer" role.
A: SQL is a non-procedural language. That said, you don't need to think about inheritance here. Instead, grant create, insert and update privileges to the manager role explicitly.
Here's what you can do:
1- Create programmer and manager role:
SQL> CREATE ROLE role_name IDENTIFIED by pass_word;
2- Then GRANT privileges (your requirement here) to each role:
SQL> GRANT privilege TO role_name;
3- Grant users(programmers & managers) privileges by granting each user(depends whether he is a manager or a programmer) to a particular role.
SQL> GRANT role_name TO user_name;
You may find the following link useful for more details:
http://docs.oracle.com/cd/B10501_01/server.920/a96521/privs.htm#21065
| |
doc_23533284
|
.data
prompt1: .asciiz "Enter the array size \n"
prompt2: .asciiz "Enter integers to fill the array \n"
prompt3: .asciiz "Sorted Array: "
.text
main:
li $t0, 0
la $a0, prompt1
li $v0, 4
syscall
li $v0, 5
syscall
move $t0, $v0 #array size
add $t9, $t0, $t0
li $v0, 9
la $a0, ($t0)
syscall
move $s0, $v0 #array address w/ size in memory
add $s1, $zero, $s0
loop:
la $a0, prompt2
li $v0, 4
syscall
li $v0, 5
syscall
beqz $t0, sort
sw $v0, 0($s1)
addiu $s1, $s1, 4
subiu $t0, $t0, 1
bnez $t0, loop
sort:
lw $t4, 0($s1)
lw $t5, 4($s1)
blt $t5, $t4, swap
addiu $s1, $s1, 4
subiu $t9, $t9, 1
bnez $t9, sort
j exit
swap:
sw $t4, 4($s1)
sw $t5, 0($s1)
exit:
| |
doc_23533285
|
uninitialized constant IdeasController::Delayed
I have already started the delayed_job by using rake jobs:work. I have the following delayed_job code in SampleController.rb
Delayed::Job.enqueue(DelayedWorker.new({:model=>object.class.to_s,:object_id=>object.id,:meth=>:create_suggestion}))
delayed_worker.rb contains the following code:
class DelayedWorker < Struct.new(:options)
def perform
if obj=Object.const_get(options[:model]).find(options[:object_id])
if (options[:para] ? obj.send(options[:meth],options[:para].first) : obj.send(options[:meth]))
puts "Successfull"
else
puts "Failed"
end
end
end
end
Any one please help me for resolving this.
Thanks...
A: Change
Delayed::Job.enqueue(DelayedWorker.new({:model=>object.class.to_s,:object_id=>object.id,:meth=>:create_suggestion}))
to
::Delayed::Job.enqueue(DelayedWorker.new({:model=>object.class.to_s,:object_id=>object.id,:meth=>:create_suggestion}))
| |
doc_23533286
|
$("#petrolGauge .fuelBar .slider").draggable({
containment: "parent",
axis: "x",
drag:function(){
updValues();
},
start:function(){
$(this).css("background-color","#666");
},
stop:function(){
//checkForm();
$(this).css("background-color","#AAA");
}
});
This is for the following markup:
<div id="petrolGauge">
<input id="endPet" name="endPet" type="hidden" value="0">
How much fuel was left in the tank when you were finished? (Use the slider) <b>(~<span class="petLeft">0</span>%)</b>
<span class="mandatory">*</span><br />
<div class="fuelBar">
<div title="Drag" class="slider"></div>
</div>
This works a treat, when I click on the slider. But I'd like it so that when I click the fuel bar (the slider's parent) the slider not only starts dragging but also jumps to the cursor. I've achieved it by doing this:
$("#petrolGauge .fuelBar").on("mousedown",function(e){
slider = $("#petrolGauge .fuelBar .slider");
left = e.pageX-($(this).offset().left)-(slider.width()/2);
updValues();
slider.css("left",left).trigger(e);
});
Two problems with this:
Firstly, when clicking on the parent I get a couple of seconds' delay before the slider starts to drag. I've tried and tested this in Chrome and IE and both do it. Secondly, if the cursor is less than half of the slider's width away from the edge of the parent, the slider will move outside of the parent. It wouldn't be hard to fix this with a couple of checks, but I was wondering if there was another way? I'm surprised that draggable() doesn't have any parameters for this, to be honest. I didn't want to use slider() if I could help it, but if it's the only way, then it's the only way.
Here's a fiddle to work with.
A: The reason you get the delay is because you use .trigger() inside the .on() event which creates a big loop. As a result the loop slows down the moving process.
$("#petrolGauge .fuelBar").click(function (e) { // use click instead of mousedown
slider = $("#petrolGauge .fuelBar .slider");
left = e.pageX - ($(this).offset().left) - (slider.width() / 2);
if(left > 570) { left = 570; } else if(left < 0) { left = 0; }
// it looks like a draggable bug due to the manual position change, so use a small check
slider.css("left", left); // change the position first
updValues(); // then calculate and update the div
// no need to trigger the event a second time because it will loop until jQuery exceeds it's trigger limit.
});
Here's an updated FIDDLE
Updated answer
To make .slider move according to the mouse movement when it is not directly dragged, bind a mousemove event on mousedown and unbind it on mouseup. Then in the mousemove handler you change the position of .slider.
var move = function (e) {
left = e.pageX - ($('#petrolGauge .fuelBar').offset().left) - (slider.width() / 2);
if (left > 570) {
left = 570;
} else if (left < 0) {
left = 0;
}
slider.css("left", left);
updValues();
};
var slider = $("#petrolGauge .fuelBar .slider");
$("#petrolGauge .fuelBar").mousedown(function (e) {
e.preventDefault();
left = e.pageX - ($(this).offset().left) - (slider.width() / 2);
if (left > 570) {
left = 570;
} else if (left < 0) {
left = 0;
}
slider.css("left", left)
$(this).bind('mousemove', move);
updValues();
}).mouseup(function () {
$(this).unbind('mousemove');
});
| |
doc_23533287
|
I am making a call to another person using my Java application (already done and works fine).
Then I am playing a recording, for example "Please press 1 to continue in English" (already done and works fine).
Now I want to detect when that person presses one. From my research on Google, I found that this can be done using DTMF. If the person presses 1, I want to perform actions according to my condition.
My question is how to detect that number using DTMF in Java (J2SE). I am using a ZTE USB dongle to make the call. Dialing, hangup, and other controls are done using AT commands + Java IO.
package tracenumber;
import java.io.IOException;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
public class DTMFDetect {
/**
* @param args
*/
float[] lowFreq = new float[]{697.0F, 770.0F, 852.0F, 941.0F};
float[] highFreq = new float[]{1209.0F, 1336.0F, 1477.0F, 1633.0F};
float[] dtmfTones = new float[]{697.0F, 770.0F, 852.0F, 941.0F, 1209.0F, 1336.0F, 1477.0F, 1633.0F};
int dtmfBoard[][] = { { 1, 2, 3, 12 }, { 4, 5, 6, 13 }, { 7, 8, 9, 14 }, { 10, 0, 11, 15} };
//byte[] buffer = new byte[2000];
static final char FRAME_SIZE = 160;
AudioFormat format = getAudioFormat();
//int[] buf;
public boolean wait = false;
static boolean continueParsingDtmf = false;
public DTMFDetect()
{
}
public AudioFormat getAudioFormat()
{
// float sampleRate = 8000.0F;
float sampleRate = 44100.0F;
//int sampleSizeInBits = 16;
int sampleSizeInBits = 8;
int channels = 1;
boolean signed = true;
boolean bigEndian = true;
return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
}
public static void main(String args[]) throws LineUnavailableException, IOException
{
// start seaching for audio signals
new DtmfCapture().start();
// decode for a minute
try { Thread.sleep(60000);
} catch (Exception e){
}
continueParsingDtmf = false;
}
public class DtmfCapture extends Thread {
public void run() {
continueParsingDtmf = true;
try {
int tone = 0;
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
TargetDataLine out= (TargetDataLine) AudioSystem.getLine(info);
int[] buf ;
out.open(format);
out.drain();
out.start();
int count = 0;
while (continueParsingDtmf) {
byte[] buffer = new byte[2000];
//grab audio data
count = out.read(buffer,0,buffer.length);
if(count > 0){
DecodeDtmf dtmf = new DecodeDtmf(buffer);
if (!wait){
dtmf.start(); //look for dtmf
Thread.sleep(100);
} else
{
Thread.sleep(4000); // wait before searching again
System.out.println(System.currentTimeMillis());
wait = false;
}
}
}
out.close();
} catch(Exception e){
e.printStackTrace();
}
}
}
public class DecodeDtmf extends Thread {
//
byte[] buffer;
DecodeDtmf(byte[] buffer) {
this.buffer = buffer;
}
public void run() {
int[] buf;
buf = new int[buffer.length/2];
for(int j = 0; j<buffer.length/2-1; j++)
{
buf[j] = (int) ((buffer[j*2+1] & 0xFF) + (buffer[j*2] << 8));
}
int tone = findDTMF(buf);
if (tone >=0)
{ wait = true;
//System.out.print(time);
if ( tone <10)
{System.out.println(" THE TONE IS : " + tone);
}
if (tone ==12)
{ System.out.println(" THE TONE IS : A" );
}
if (tone ==13)
{ System.out.println(" THE TONE IS : B" );
}
if (tone ==14)
{ System.out.println(" THE TONE IS : C" );
}
if (tone ==15)
{ System.out.println(" THE TONE IS : D" );
}
if (tone ==10)
{ System.out.println(" THE TONE IS : *" );
}
if (tone ==11)
{ System.out.println(" THE TONE IS : #" );
}
}
}
/*
Check if sample has dtmf tone
*/
public int findDTMF(int[] samples)
{
double[] goertzelValues = new double[8];
double lowFreqValue = 0;
int lowFreq = 0;
double sumLow = 0;
double highFreqValue = 0;
int highFreq = 0;
double sumHigh = 0;
for(int i = 0; i<8; i++)
{
goertzelValues[i] = goertzel(samples,dtmfTones[i]);
}
for(int i = 0; i<4; i++) // Find the largest low frequency
{
sumLow += goertzelValues[i]; // Sum for the signal test
if(goertzelValues[i] > lowFreqValue)
{
lowFreqValue = goertzelValues[i];
lowFreq = i;
}
}
for(int i = 4; i<8; i++) // Find the largest high frequency
{
sumHigh += goertzelValues[i]; // Sum for the signal test
if(goertzelValues[i] > highFreqValue)
{
highFreqValue = goertzelValues[i];
highFreq = i-4;
}
}
if(lowFreqValue < sumLow/2 || highFreqValue < sumHigh/2) // Test signal strength
{
return -1;
}
return dtmfBoard[lowFreq][highFreq]; // Return the DTMF tone
}
}
public double goertzel(int[] samples, float freq)
{
double vkn = 0;
double vkn1 = 0;
double vkn2 = 0;
for(int j = 0; j<samples.length -1; j++)
{
vkn2 = vkn1;
vkn1 = vkn;
vkn = 2 * Math.cos(2 * Math.PI * (freq * samples.length / format.getSampleRate() ) / samples.length) * vkn1 - vkn2 + samples[j];
}
double WNk = Math.exp(-2 * Math.PI * (freq * samples.length / format.getSampleRate() ) / samples.length);
return Math.abs(vkn - WNk * vkn1);
}
}
| |
doc_23533288
|
When I try to compile anything, even the simplest C program, I get the following build error:
1>------ Build started: Project: Project1, Configuration: Debug Win32 ------
1>Object of type 'System.ComponentModel.Composition.ExportFactory`1[System.IO.StreamReader]' cannot be converted to type 'System.ComponentModel.Composition.ExportFactory`1[System.IO.TextWriter]'.
1>Error: Object of type 'System.ComponentModel.Composition.ExportFactory`1[System.IO.StreamReader]' cannot be converted to type 'System.ComponentModel.Composition.ExportFactory`1[System.IO.TextWriter]'.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
Any ideas what's the problem here?
EDIT: Sample code:
int main(int argc, char** argv) {
return 0;
}
| |
doc_23533289
|
Some points are duplicates and this prevents the diagram from being drawn correctly. How can I remove duplicate points? And draw a diagram with X and Z axes?
Please download the coordinate file via the link below:
https://s21.picofile.com/d/8445324542/15c1902a-0828-4692-b0ce-a65651306111/Coordinates.rar
A: There are actually no duplicate points in your dataset. The problem is that your data is essentially sorted by the wrong axis. You can re-sort and plot them like this:
read_tuple ('Rows.tup', Rows)
read_tuple ('Columns.tup', Cols)
* dev_inspect_ctrl (['plot_xy', Cols, Rows])
Indices := sort_index(Cols)
Rows2 := Rows[Indices]
Cols2 := Cols[Indices]
dev_get_window (WindowHandle)
plot_tuple (WindowHandle, Cols2, Rows2, [], [], [], [], [])
| |
doc_23533290
|
here is my code .. all help is really appreciated :)
import java.util.Scanner;
public class Main{
public static void main(String[] args){
String carName;
String carType;
String engineType;
int limit;
Scanner in = new Scanner(System.in);
System.out.print("Enter the number of Cars you want to add - ");
limit = in.nextInt();
for(int i = 0; i <limit; i++){
Cars cars[i] = new Cars();
System.out.print("Enter the number of Car Name - ");
carName = in.nextLine();
System.out.print("Enter the number of Car Type - ");
carType = in.nextLine();
System.out.print("Enter the Engine Type - ");
engineType = in.nextLine();
cars[i].setCarName(carName);
cars[i].setCarType(carType);
cars[i].setEngineeSize(engineType);
String a = cars[i].getCarName();
String b = cars[i].getCarType();
String c = cars[i].getEngineeSize();
System.out.println(a,b,c);
}
}
}
The cars class looks like this ..
public class Cars{
public String carName;
public String carType;
public String engineeSize;
public void Cars(){
System.out.println("The Cars constructor was created ! :-) ");
}
public void setCarName(String cn){
this.carName = cn;
}
public void setCarType(String ct){
this.carType = ct;
}
public void setEngineeSize(String es){
this.engineeSize = es;
}
public String getCarName(){
return this.carName;
}
public String getCarType(){
return this.carType;
}
public String getEngineeSize(){
return this.engineeSize;
}
}
A: You're on the right track, there are a few errors and unnecessary bits though.
The Cars Class
Your Cars class was mostly fine, (though in my opinion Car would have made more sense) however your constructor didn't make sense, you had public void Cars(), void means "this method returns nothing", but you want to return a Cars object, meaning your constructor needs to become:
public Cars()
{
System.out.println("The Cars constructor was created ! :-) ");
}
Your Main Class
You were very close here as well, your primary issue was creating the cars array limit times:
for(int i = 0; i < limit; i++)
{
Cars cars[i] = new Cars();
//Other code
}
The array needs to be made outside the for loop.
Here is the revised Main class in full, the comments should explain fairly well what I did and why.
import java.util.Scanner;
public class Main{
public static void main(String[] args){
//The strings here were unnecessary
int limit;
Scanner in = new Scanner(System.in);
System.out.print("Enter the number of Cars you want to add - ");
limit = in.nextInt();
in.nextLine(); //nextInt leaves a newLine, this will clear it, it's a little strange, but it makes sense seeing as integers can't have newlines at the end
//Make an array of Cars, the length of this array is limit
Cars[] cars = new Cars[limit];
//Iterate over array cars
for(int i = 0; i < limit; i++)
{
//Read all the properties into strings
System.out.println("Enter the number of Car Name - ");
String carName = in.nextLine();
System.out.println("Enter the number of Car Type - ");
String carType = in.nextLine();
System.out.println("Enter the Engine Type - ");
String engineType = in.nextLine();
//Set the object at current position to be a new Cars
cars[i] = new Cars();
//Adjust the properties of the Cars at this position
cars[i].setCarName(carName);
cars[i].setCarType(carType);
cars[i].setEngineeSize(engineType);
//We still have the variables from the scanner, so we don't need to read them from the Cars object
System.out.println(carName+carType+engineType);
}
in.close(); //We don't need the scanner anymore
}
}
Finished typing this and realized that the question is two years old :)
| |
doc_23533291
|
$item_id = $each_item['item_id'];
$sql = mysqli_query($mysqli, "SELECT * FROM prod WHERE
id='$item_id' LIMIT 1");
while ($row = mysqli_fetch_array($sql)) {
$price = $row["prod_price"];
$stocks = $row["stocks"];
}
$sqlu = mysqli_query($mysqli, "UPDATE prod SET stocks = $stocks
WHERE id = 'item_id'");
}
I'm planning to create an inventory system but have mostly failed. When I add a lot of items to the cart I can only update the last record. I want to update the stock for all of the items I added to the cart.
For instance, I add 5 products, let's say 20 X apple, 10 X banana, 10 X grapes, 40 X sugar, 60 X oranges. I want to update the stocks for these 5 items, not just the last item I added, which is oranges.
Here's the download link of the system that Im using as a reference. It has an add to cart function but no checkout and it doesnt have an inventory system. please help me
http://www.developphp.com/ExampleSites/Ecommerce_Series_Source/MyOnlineStore.zip
I'm trying to store the data in an array but no luck; the key is always stuck at [0].
A: What is the delimiter for $_SESSION["cart_array"]?
I think that you simply need to convert it from a string into an object or array so foreach can iterate all of the items in the collection.
foreach (explode("YOUR-DELIMITER", $_SESSION["cart_array"]) as $each_item)
| |
doc_23533292
|
var options = {
lang: 'deu',
};
var image = require("path").join(__dirname, 'lib/images/ocr-test-text.png');
var Tesseract = require('tesseract.js')
Tesseract.recognize(image, options)
.progress(function (info) {
console.log(info);
})
.then(function (data) {
console.log('done', data);
process.exit();
})
triggers the following error:
> node index.js
{ status: 'loading tesseract core' }
{ status: 'loaded tesseract core' }
{ status: 'initializing tesseract', progress: 0 }
pre-main prep time: 68 ms
{ status: 'initializing tesseract', progress: 1 }
{ status: 'downloading deu.traineddata.gz',
loaded: 116,
progress: 0.00011697604814572795 }
events.js:182
throw er; // Unhandled 'error' event
^
Error: incorrect header check
at Gunzip.zlibOnError (zlib.js:146:15)
Github issue: https://github.com/naptha/tesseract.js/issues/129
Any idea what's happening?
Update:
After following the instruction from the first answer, and download the "deu" traineddata, following error comes up:
export TESSDATA_PREFIX=/opt/TESSDATA && node get-text-from-image.js /opt/app/out/image.png
params [ '/opt/app/out/image.png' ]
progress { status: 'loading tesseract core' }
progress { status: 'loaded tesseract core' }
progress { status: 'initializing tesseract', progress: 0 }
pre-main prep time: 62 ms
progress { status: 'initializing tesseract', progress: 1 }
progress { status: 'loading deu.traineddata', progress: 0 }
progress { status: 'loading deu.traineddata', progress: 1 }
progress { status: 'initializing api', progress: 0 }
Failed loading language 'deu'
Tesseract couldn't load any languages!
progress { status: 'initializing api', progress: 0.3 }
progress { status: 'initializing api', progress: 0.6 }
progress { status: 'initializing api', progress: 1 }
progress { status: 'recognizing text', progress: 0 }
AdaptedTemplates != NULL:Error:Assert failed:in file ../classify/adaptmatch.cpp, line 190
/opt/app/node_modules/tesseract.js-core/index.js:4
function f(a){throw a;}var h=void 0,i=!0,j=null,k=!1;function aa(){return function(){}}function ba(a){return function(){return a}}var n,Module;Module||(Module=eval("(function() { try { return TesseractCore || {} } catch(e) { return {} } })()"));var ca={},da;for(da in Module)Module.hasOwnProperty(da)&&(ca[da]=Module[da]);var ea=i,fa=!ea&&i;
^
abort() at Error
at Na (/opt/app/node_modules/tesseract.js-core/index.js:32:26)
at Object.ka [as abort] (/opt/app/node_modules/tesseract.js-core/index.js:507:108)
at _abort (/opt/app/node_modules/tesseract.js-core/index.js:373:173)
at $L (/opt/app/node_modules/tesseract.js-core/index.js:383:55709)
at jpa (/opt/app/node_modules/tesseract.js-core/index.js:388:22274)
at lT (/opt/app/node_modules/tesseract.js-core/index.js:387:80568)
at mT (/opt/app/node_modules/tesseract.js-core/index.js:387:80700)
at Array.BS (/opt/app/node_modules/tesseract.js-core/index.js:387:69011)
at bP (/opt/app/node_modules/tesseract.js-core/index.js:383:110121)
at jT (/opt/app/node_modules/tesseract.js-core/index.js:387:80280)
If this abort() is unexpected, build with -s ASSERTIONS=1 which can give more information.
A: It's failing to unpack deu.traineddata.gz - no idea why. You might want to download the file yourself and try to gunzip it by hand. This isn't the way suggested by the module creator though; here's something else you can try.
Download the language files on the machine running node.js and place them somewhere.
https://github.com/tesseract-ocr/tessdata
In the environment, make sure the variable TESSDATA_PREFIX points to that location. For example, you may put them in /opt/tessdata. If you do so, you can set TESSDATA_PREFIX like this:
export TESSDATA_PREFIX=/opt/tessdata
Try again; this time it shouldn't try to download and unpack them itself.
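Putting those steps together, a session on the machine running Node.js might look roughly like this (the paths are only examples):
mkdir -p /opt/tessdata
# place deu.traineddata (downloaded from the tessdata repository linked above) into /opt/tessdata
export TESSDATA_PREFIX=/opt/tessdata
node index.js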
| |
doc_23533293
|
When I run webpack I get this error
ERROR in ./src/js/components/header.jsx
Module build failed: SyntaxError: /home/hakeem/Documents/webapp/app/src/js/components/header.jsx: Unexpected token (46:39)
44 | return (
45 | <div className="col-lg-12">
> 46 | <li className='pull-left'><Link to "app">Home</Link></a></li>
| ^
47 | <li className="pull-left"><a onClick={this.handleClick}>Notifications</a></li>
48 | <li className="pull-left"><a onClick={this.handleClick}>Find friends</a></li>
49 | <input id="site-search" type="text" onChange={this.handleChange} value={this.setState.typed} placeholder="search..." aria-label="..." className="col-lg-2 pull-right"/>
at Parser.pp.raise (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/location.js:22:13)
at Parser.pp.unexpected (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/util.js:91:8)
at Parser.pp.jsxParseIdentifier (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:214:10)
at Parser.pp.jsxParseNamespacedName (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:225:19)
at Parser.pp.jsxParseAttribute (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:307:20)
at Parser.pp.jsxParseOpeningElementAt (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:319:31)
at Parser.pp.jsxParseElementAt (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:341:29)
at Parser.pp.jsxParseElementAt (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:354:30)
at Parser.pp.jsxParseElementAt (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:354:30)
at Parser.pp.jsxParseElement (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:390:15)
at Parser.parseExprAtom (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:402:21)
at Parser.pp.parseExprSubscripts (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:265:19)
at Parser.pp.parseMaybeUnary (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:245:19)
at Parser.pp.parseExprOps (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:176:19)
at Parser.pp.parseMaybeConditional (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:158:19)
at Parser.pp.parseMaybeAssign (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:121:19)
at Parser.pp.parseParenAndDistinguishExpression (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:583:26)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/flow.js:1000:26)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/flow.js:1000:26)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/flow.js:1000:26)
at Parser.parseParenAndDistinguishExpression (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/flow.js:1000:26)
at Parser.pp.parseExprAtom (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:469:19)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:404:22)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:404:22)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:404:22)
at Parser.<anonymous> (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:404:22)
at Parser.parseExprAtom (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/plugins/jsx/index.js:404:22)
at Parser.pp.parseExprSubscripts (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:265:19)
at Parser.pp.parseMaybeUnary (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:245:19)
at Parser.pp.parseExprOps (/home/hakeem/Documents/webapp/app/node_modules/babylon/lib/parser/expression.js:176:19)
@ ./src/js/main.jsx 17:13-47
webpack.config.js
var LiveReloadPlugin = require('webpack-livereload-plugin');
var path = require('path');
var webpack = require('webpack');
module.exports={
//Entry point/Location of the main jsx file to be transformed to JS
entry: "./src/js/main.jsx",
//Location where the compiled JS code will be saved
output: {
//Location where the output file is to be saved
path: './build',
//Name of the output file
filename: "bundle.js"
},
//Modules
module: {
loaders: [
{
test: /\.jsx$/,
exclude: /(node_modules)/,
loader: 'babel',
query: {presets:[ 'es2015', 'react', 'stage-0' ]}
},
]
},
plugins: [
new LiveReloadPlugin()
],
//Resolves using extensions and absolute path
resolve:
extensions = ['', '.js']
};
main.jsx
import React from 'react';
import ReactDOM from 'react-dom';
//Leftbar
var Leftbar = require('./components/leftbar');
//Header
var Header = require('./components/header');
//Sidebar
var Leftbar = require('./components/leftbar');
//Posts
var Posts = require('./components/posts');
//Rightbar
var Rightbar = require('./components/rightbar');
var App = React.createClass({
render() {
return(
<div>
<Leftbar />
<aside id="maincontent" className="col-lg-10">
<Header />
<main>
<div id="main-container" className="col-lg-10 row grid">
<h3 id="page-title">News Feed</h3>
<Posts className="pull-left col-lg-10" postimgsrc="http://tedxtargumures.com/wp-content/themes/TEDxTheme-develop/images/defaults/video-placeholder.jpg" imgsrc="https://upload.wikimedia.org/wikipedia/en/7/70/Shawn_Tok_Profile.jpg" name="Hakeem" textcontent="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus placerat tellus ac magna imperdiet, ut gravida orci commodo. Curabitur sagittis dui velit, vel vehicula erat aliquet sed. Vestibulum pharetra tortor in velit iaculis placerat. Vestibulum sed rutrum tortor. Vivamus finibus libero nisl, nec fermentum justo vestibulum a. Morbi sed ante ullamcorper, luctus nulla nec, convallis justo. Etiam tortor metus, porttitor eu fringilla nec, tempor ut mauris. Donec lacinia arcu at tellus sollicitudin, vel dictum eros vulputate. Nam eu arcu ut purus volutpat condimentum quis vel arcu."/>
</div>
<Rightbar />
</main>
</aside>
</div>
);
}
});
ReactDOM.render(<App />, document.getElementById("page"));
header.jsx
import React from 'react';
import Router from 'react-router';
import { DefaultRoute, Link, Route, RouteHandler } from 'react-router';
let navigationActionCreators = require('../actions/navigation-action-creators.jsx');
let header = React.createClass({
getInitialState: function () {
return{typed: ''};
},
//Event that gets fired when user enters text
handleChange: function(event) {
this.setState({typed: event.target.value});
//Search page to be rendered
var SearchPage = React.createClass({
render: function() {
return <div>You typed:{this.state.typed}</div>
}
});
//Render search page in real time as user enters input
ReactDOM.render(<p>You searched for<h3>{this.state.typed}</h3></p>, document.getElementById("main-container"));
},
//Event that gets fired when navigation item is clicked
handleClick: function() {
navigationActionCreators.navigateTo({
component:componentname
});
var Page = React.createClass({
render: function() {
return (
<div>
Page
</div>
);
}
});
ReactDOM.render(<Page/>, document.getElementById("main-container"));
},
render:function() {
return (
<div className="col-lg-12">
<li className='pull-left'><Link to "app">Home</Link></a></li>
<li className="pull-left"><a onClick={this.handleClick}>Notifications</a></li>
<li className="pull-left"><a onClick={this.handleClick}>Find friends</a></li>
<input id="site-search" type="text" onChange={this.handleChange} value={this.setState.typed} placeholder="search..." aria-label="..." className="col-lg-2 pull-right"/>
<RouteHandler/>
</div>
);
}
});
let routes = (
<Route name="app" path="/" handler={App}>
<Route name="login" path="/login" handler={LoginHandler}/>
</Route>
)
Router.run(routes, function(Handler){
React.render(<Handler/>, document.body);
});
module.exports = header;
My package.json looks like this...
{
"name": "app",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "node ./bin/www"
},
"dependencies": {
"body-parser": "~1.13.2",
"cookie-parser": "~1.3.5",
"debug": "~2.2.0",
"express": "~4.13.1",
"jade": "~1.11.0",
"morgan": "~1.6.1",
"react": "^0.14.3",
"react-dom": "^0.14.3",
"serve-favicon": "~2.3.0"
},
"devDependencies": {
"alt": "^0.18.1",
"babel-core": "^6.3.21",
"babel-loader": "^6.2.0",
"babel-plugin-react-transform": "^1.1.1",
"babel-preset-es2015": "^6.3.13",
"babel-preset-react": "^6.3.13",
"babel-preset-stage-0": "^6.3.13",
"browser-sync-webpack-plugin": "^1.0.1",
"flux": "^2.1.1",
"jsx-loader": "^0.13.2",
"react": "^0.14.3",
"react-transform-hmr": "^1.0.1",
"webpack": "^1.12.9",
"webpack-livereload-plugin": "^0.4.0"
}
}
I've been battling this bug for the last 24 hours and have had no luck. If anyone could help, it would be greatly appreciated.
| |
doc_23533294
|
here is the code
users = ratings.userId.unique()
items = ratings.movieId.unique()
user_id_input = Input(shape=[1], name='user')
item_id_input = Input(shape=[1], name='item')
embedding_size = 64
user_embedding = Embedding(output_dim=embedding_size,
input_dim=users.shape[0]+1,
input_length=1,
name='user_embedding')(user_id_input)
item_embedding = Embedding(output_dim=embedding_size,
input_dim=items.shape[0]+1,
input_length=1,
name='item_embedding')(item_id_input)
user_vecs = Reshape([embedding_size])(user_embedding)
item_vecs = Reshape([embedding_size])(item_embedding)
y = Dot(1, normalize=False)([user_vecs, item_vecs])
model = Model(inputs=[user_id_input, item_id_input], outputs=y)
model.compile(loss='mae',
optimizer="adam"
)
weights = model.get_weights()
#this now gives the max user_id/item_id plus 1
print("weights shapes",[w.shape for w in weights])
I used the MovieLens dataset, and in this particular dataset the number of unique users is 610 and the number of unique items is 9724, but the shape of the weights is [(611, 64), (9725, 64)], i.e. 611 and 9725. Why?
| |
doc_23533295
|
text-align: center;
position: absolute;
top: 200;
right: 100;
animation-name: textAnim;
animation-duration: 5s;
animation-delay: 2s;
}
@keyframes textAnim {
0% {opacity:1;}
90% {opacity:1;}
100% {opacity:0;}
}
You can see it in the example vid. I attempt to fade the text with the code above, but it just comes right back to opacity 1 and stays in the scene. I do NOT want this. I need the text to transition out "permanently" one way or another.
How is this done properly in CSS? There are built-in enter/exit animations in streamlabs but all their code is tucked away and unviewable.
A: Use animation-fill-mode: forwards; to retain its state at the end of the animation.
https://developer.mozilla.org/en-US/docs/Web/CSS/animation-fill-mode
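Applied to the snippet from the question, the animation properties would become something like this (the rest of the rule stays as it was):
animation-name: textAnim;
animation-duration: 5s;
animation-delay: 2s;
animation-fill-mode: forwards; /* keep the final keyframe state (opacity: 0) after the animation ends */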
| |
doc_23533296
|
Is this notice still valid in 2022?
The reason for asking is that my Apache is currently configured with prefork, and I use persistent connections.
However, at a certain time of day, on alternate days, the connections are no longer reused, creating hundreds of new persistent connections in less than a minute and reaching the database user limit.
apache2ctl -M | grep prefork
mpm_prefork_module (shared)
| |
doc_23533297
|
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
exclude group: 'com.android.support', module: 'support-annotations'
})
compile 'com.android.support:appcompat-v7:26.+'
compile 'com.android.support.constraint:constraint-layout:1.0.0-alpha7'
compile 'com.android.support:design:26.+'
compile 'org.apache.flink:flink-streaming-java_2.11:1.4.0'
}
but whenever I compile the code it gave me the following error
FAILURE: Build failed with an exception.
* What went wrong: Execution failed for task ':app:transformResourcesWithMergeJavaResForDebug'.
> com.android.build.api.transform.TransformException: com.android.builder.packaging.DuplicateFileException: Duplicate files copied in APK reference.conf File1: /Users/amar/.gradle/caches/modules-2/files-2.1/com.typesafe.akka/akka-actor_2.11/2.4.20/251b1d970698b81dad5aa8b84eec3eea835259d2/akka-actor_2.11-2.4.20.jar File2: /Users/amar/.gradle/caches/modules-2/files-2.1/org.apache.flink/flink-runtime_2.11/1.4.0/c55676f3ca4c7edd82374659471e98b2384868a8/flink-runtime_2.11-1.4.0.jar File3: /Users/amar/.gradle/caches/modules-2/files-2.1/com.typesafe.akka/akka-stream_2.11/2.4.20/7545a4f86cbb372c337dbdb2846110df86a8cc70/akka-stream_2.11-2.4.20.jar File4: /Users/amar/.gradle/caches/modules-2/files-2.1/com.typesafe/ssl-config-core_2.11/0.2.1/3d2e6a36a7427d6f9d3921c91d6ac1f57dc47b57/ssl-config-core_2.11-0.2.1.jar
A: This problem was solved by following these steps.
step # 1 Use the dependency from https://mvnrepository.com instead of https://search.maven.org.
step # 2 I also found out that these libraries are large and cross the 64K method reference limit of a single DEX file, so I had to configure Multidex, details of which are given here; a sketch of the relevant configuration follows below.
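Concretely (and as also reflected in the final Module build.gradle further below), the relevant multidex pieces are roughly the following sketch; with minSdkVersion 21 or higher no Application-class change should be needed:
android {
    defaultConfig {
        // allow the build to split the app across multiple DEX files
        multiDexEnabled true
    }
}
dependencies {
    // support-library shim; only strictly required below API 21
    compile 'com.android.support:multidex:1.0.0'
}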
step # 3 Most of the issue was being created by the Jack toolchain, so we can disable it and replace it with retrolambda, for which you can follow this tutorial. Make sure to disable the Jack toolchain by commenting out or deleting the lines shown below
// jackOptions { enabled true }
// dexOptions { incremental true }
step # 4 Exclude these packages from packagingOptions as shown below
packagingOptions {
exclude 'META-INF/DEPENDENCIES.txt'
exclude 'META-INF/DEPENDENCIES'
exclude 'META-INF/dependencies.txt'
exclude 'META-INF/LICENSE.txt'
exclude 'META-INF/LICENSE'
exclude 'META-INF/license.txt'
exclude 'META-INF/LGPL2.1'
exclude 'META-INF/NOTICE.txt'
exclude 'META-INF/NOTICE'
exclude 'META-INF/notice.txt'
}
step # 5 You may find it weird, but this hack works. The following dependency
compile group: 'org.apache.flink', name: 'flink-java', version: '1.4.0'
was not working and was giving an error, but it worked once compile was replaced by provided. I don't know why; please let me know if you do.
provided group: 'org.apache.flink', name: 'flink-java', version: '1.4.0'
Luckily the flink-streaming-java_2.11 library already had provided mentioned on https://mvnrepository.com
provided group: 'org.apache.flink', name: 'flink-streaming-java_2.11', version: '1.4.0'
Final Module build.gradle has these contents
apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'
repositories {
mavenCentral()
}
android {
packagingOptions {
exclude 'META-INF/DEPENDENCIES.txt'
exclude 'META-INF/DEPENDENCIES'
exclude 'META-INF/dependencies.txt'
exclude 'META-INF/LICENSE.txt'
exclude 'META-INF/LICENSE'
exclude 'META-INF/license.txt'
exclude 'META-INF/LGPL2.1'
exclude 'META-INF/NOTICE.txt'
exclude 'META-INF/NOTICE'
exclude 'META-INF/notice.txt'
}
configurations.all {
// Enforces Gradle to only compile the version number you state for all dependencies, no matter which version number the dependencies have stated.
resolutionStrategy.force 'com.google.code.findbugs:jsr305:1.3.9'
}
compileSdkVersion 26
buildToolsVersion "26.0.1"
defaultConfig {
// jackOptions { enabled true }
// dexOptions { incremental true }
multiDexEnabled true
applicationId "com.example.amar.testing"
minSdkVersion 26
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
customDebugType {
debuggable true
}
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
}
dependencies {
// not sure if adding compatibility is required here
sourceCompatibility = 1.7
targetCompatibility = 1.7
compile fileTree(dir: 'libs', include: ['*.jar'])
// androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
// exclude group: 'com.android.support', module: 'support-annotations'
// })
compile 'com.android.support:appcompat-v7:26.+'
compile 'com.android.support.constraint:constraint-layout:1.0.0-alpha7'
compile 'com.android.support:design:26.+'
testCompile 'junit:junit:4.12'
// for multidex
compile 'com.android.support:multidex:1.0.0'
// for CEP
provided group: 'org.apache.flink', name: 'flink-java', version: '1.4.0'
provided group: 'org.apache.flink', name: 'flink-streaming-java_2.11', version: '1.4.0'
}
Final Application build.gradle has these contents
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.3'
classpath 'me.tatarka:gradle-retrolambda:3.2.3'
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
sourceCompatibility = 1.7
targetCompatibility = 1.7
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
| |
doc_23533298
|
etc/nginx/sites-available
etc/nginx/sites-enabled
etc/nginx/conf.d
Do I really need these if I just want to work directly in the etc/nginx/nginx.conf file and remove the include lines that include these items in nginx.conf? Are these directories used for anything else that would mess things up if I delete them?
A: No, they are not needed if you define your server blocks properly in nginx.conf, but using them is highly suggested. As you noticed, they are only used because of the include /etc/nginx/sites-enabled/*; in nginx.conf.
Out of curiosity, is there a reason why you do not want to use them? They are very useful: adding new sites, disabling sites, etc. becomes much easier than maintaining one large config file. This is a kind of best practice for the nginx folder layout.
A: Important information:
You should edit files only in sites-available directory.
Never edit files inside the sites-enabled directory, otherwise you can have problems if your editor runs out of memory or, for any reason, it receives a SIGHUP or SIGTERM.
For example: if you are using nano to edit the file sites-enabled/default and it runs out of memory or, for any reason, it receives a SIGHUP or SIGTERM, then nano will create an emergency file called default.save, inside the sites-enabled directory. So, there will be an extra file inside the sites-enabled directory. That will prevent Apache or NGINX to start. If your site was working, it will not be anymore. You will have a hard time until you find out, in the logs, something related to the default.save file and, then, remove it.
In the example above, if you were editing the file inside the sites-available directory, nothing bad would have happened. The file sites-available/default.save would have been created, but it wouldn't do any harm inside the sites-available directory.
A: I saw the comment below in The Complete NGINX Cookbook on the official NGINX site:
The /etc/nginx/conf.d/ directory contains the default HTTP server configuration file. Files in this directory ending in .conf are included in the top-level http block from within the /etc/nginx/nginx.conf file. It's best practice to utilize include statements and organize your configuration in this way to keep your configuration files concise. In some package repositories, this folder is named sites-enabled, and configuration files are linked from a folder named site-available; this convention is deprecated.
A: It is not a must, but it is a best practice if you host more than one site on your box.
It will be easier to manage by keeping the http context and common directives (such as ssl_dhparam, ssl_ciphers, or even gzip settings) in nginx.conf so that they apply across all sites.
Keep the site-specific server context (such as ssl_certificate, location directives, etc.) in /etc/nginx/sites-available/ and name the configuration file your-domain.conf. The file in /etc/nginx/sites-enabled/ can then simply be a symlink to the file in /etc/nginx/sites-available/.
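As a rough sketch of that layout (domain, paths and port are placeholders):
# /etc/nginx/sites-available/your-domain.conf
server {
    listen 80;
    server_name your-domain.example;
    root /var/www/your-domain;

    location / {
        try_files $uri $uri/ =404;
    }
}
Enable it with ln -s /etc/nginx/sites-available/your-domain.conf /etc/nginx/sites-enabled/ and reload nginx.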
| |
doc_23533299
|
Last 30 Days =
VAR _TodaysDate = Today()
VAR _StartDate = dateadd (Cutomer[FormSignedDTS], -30, Day)
return
calculate (distinctcount (Customer[CustID]), keepfilters (Customer[FormSignedDTS] >= _StartDate))
But it get the following error...
"a date column containing duplicate dates was specified in the call to function dateadd"
Any help I receive on this will be greatly appreciated!
Thanks
A: You created the variable TodaysDate, but never used it anywhere within your measure.
This will give you a distinct count of every CustID within your table...
Distinct Count of Customers:=DISTINCTCOUNT(Customer[CustID])
But you only want customers within the last 30 days. So, we just need to figure out what today minus 30 is, and then filter to items newer than that.
Last 30 Days:=VAR startDate = Today() - 30
RETURN
CALCULATE(DISTINCTCOUNT(Customer[CustID]),Customer[FormSignedDTS]>=startDate)
A: If you want a distinct count of customers over the last 30 days, you don't manipulate the signed date. Rather, you calculate a distinct count of the table where the signed date is greater than Today() minus 30 days.
So, change the variable StartDate to refer to Today minus 30 days and use that in your filter.
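For example, a sketch of that change, keeping the KEEPFILTERS pattern and the column names from the original measure:
Last 30 Days =
VAR _StartDate = TODAY() - 30
RETURN
CALCULATE (
    DISTINCTCOUNT ( Customer[CustID] ),
    KEEPFILTERS ( Customer[FormSignedDTS] >= _StartDate )
)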
|