doc_23534600
|
Currently it takes all emails marked with a specific category in that inbox and merges the PDFs from all of those emails into one file.
But I want it to take the emails one by one: after downloading the PDFs from one email it should merge and send them, delete them from the folder, and only then take the second email.
How can I make such a loop for this code?
import datetime
import os
import win32com.client as win32
from PyPDF2 import PdfFileMerger
from pathlib import Path

path = 'C:\\Users\\Desktop\\Work'
today = datetime.date.today()
outlook = win32.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)
subFolder = inbox.Folders("Test")
messages = subFolder.Items

def save_attachments(subject):
    for message in messages:
        if message.Categories == "Red Category":
            for attachment in message.Attachments:
                print(attachment.FileName)
                attachment.SaveAsFile(os.path.join(path, str(attachment)))

if __name__ == "__main__":
    save_attachments('PB report - next steps')
# Merge PDFs
merger = PdfFileMerger()
path_to_files = r'C:\Users\Desktop\Work'
for root, dirs, file_names in os.walk(path_to_files):
    for file_name in file_names:
        merger.append(os.path.join(root, file_name))
merger.write(r"C:\Users\Desktop\Work\merged.pdf")
merger.close()
# Send PDF with Outlook
# construct Outlook application instance
olApp = win32.Dispatch('Outlook.Application')
olNS = olApp.GetNameSpace('MAPI')

# construct the email item object
mailItem = olApp.CreateItem(0)
mailItem.Subject = 'Test'
mailItem.BodyFormat = 1
mailItem.Body = "Pdf merged"
mailItem.To = 'email'
path = os.path.join('C:\\Users\\Desktop\\Work', 'merged.pdf')
mailItem.Attachments.Add(path)
mailItem.Display()
mailItem.Save()
mailItem.Send()
#Delete PDF's from folder
[f.unlink() for f in Path("C:\\Users\\Desktop\\Work").glob("*") if f.is_file()]
A: Iterating over all items in the folder is not really a good idea:

for message in messages:
    if message.Categories == "Red Category":

Instead, you need to use the Find/FindNext or Restrict methods of the Items class from the Outlook object model. That way you get only the items that correspond to your search criteria and iterate over those. Read more about these methods in the following articles:

* How To: Use Find and FindNext methods to retrieve Outlook mail items from a folder (C#, VB.NET)
* How To: Use Restrict method to retrieve Outlook mail items from a folder
Second, there is no need to create a new Outlook Application instance:

# construct Outlook application instance
olApp = win32.Dispatch('Outlook.Application')
olNS = olApp.GetNameSpace('MAPI')

Re-use the existing application instance instead. Moreover, Outlook is a singleton; you can't have two instances running at the same time.
Third, there is no need to display and save the item created before sending:
mailItem.Attachments.Add(path)
mailItem.Send()
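The restructuring the question asks for, processing each email completely before moving to the next, boils down to moving the merge/send/cleanup steps inside the per-message loop. Below is a minimal, Outlook-free sketch of that loop structure; the save/merge/send/cleanup callables are hypothetical placeholders standing in for the win32com and PyPDF2 code above:

```python
def process_messages(messages, save, merge, send, cleanup):
    """Handle each message independently: save its PDFs, merge them,
    send the merged file, then clean up before the next message."""
    sent = []
    for msg in messages:
        files = save(msg)          # download this message's attachments only
        merged = merge(files)      # merge just this message's PDFs
        send(merged)               # mail the merged file
        cleanup(files + [merged])  # delete temp files for this message
        sent.append(merged)
    return sent

# Stub run showing one merged file is produced per message:
msgs = ["mail1", "mail2"]
out = process_messages(
    msgs,
    save=lambda m: [f"{m}_a.pdf", f"{m}_b.pdf"],
    merge=lambda files: files[0].split("_")[0] + "_merged.pdf",
    send=lambda f: None,
    cleanup=lambda files: None,
)
print(out)  # ['mail1_merged.pdf', 'mail2_merged.pdf']
```

In the real script, save would call attachment.SaveAsFile, merge would drive PdfFileMerger, send would build the Outlook MailItem, and cleanup would unlink the files, each scoped to the current message only.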
| |
doc_23534601
|
Is there an alternative to rgba() that lets you specify the desired opacity of a keyword-based colour?
A: I agree with @LGSon, but you can also use the CSS3 opacity property. Note that it will cause the element's content to inherit the opacity as well.
| |
doc_23534602
|
- (void)panWasRecognized:(UIPanGestureRecognizer *)panner {
    UIView *draggedView = panner.view;
    CGPoint offset = [panner translationInView:draggedView.superview];
    CGPoint center = draggedView.center;
    draggedView.center = CGPointMake(center.x + offset.x, center.y + offset.y);
    draggedView.layer.borderColor = [UIColor blueColor].CGColor;
    draggedView.layer.borderWidth = 4.0f;
    // Reset translation to zero so on the next `panWasRecognized:` message, the
    // translation will just be the additional movement of the touch since now.
    [panner setTranslation:CGPointZero inView:draggedView.superview];
}
- (IBAction)addRepButton:(UIBarButtonItem *)newRep {
    buttonCount++;
    if (buttonCount > 1) {
        UILabel *textField = [[UILabel alloc] initWithFrame:CGRectMake(100, 100, 100, 100)];
        textField.userInteractionEnabled = YES;
        textField.layer.cornerRadius = 20;
        [textField setBackgroundColor:[UIColor whiteColor]];
        textField.font = [UIFont systemFontOfSize:20];
        textField.layer.borderColor = [UIColor blackColor].CGColor;
        textField.textColor = [UIColor blackColor];
        textField.layer.borderWidth = 4.0f;
        textField.text = @"1";
        textField.textAlignment = NSTextAlignmentCenter;
        [self.view addSubview:textField];
        UIPanGestureRecognizer *panner = [[UIPanGestureRecognizer alloc]
            initWithTarget:self action:@selector(panWasRecognized:)];
        [textField addGestureRecognizer:panner];
    }
}
@end
A: What you need is a property on the class which keeps a count of the number of labels. So in the interface (the header file, or an interface at the top of the main file) add an integer property, like this:

@property (nonatomic, assign) NSInteger labelCounter;

Initialize the counter to zero in your viewDidLoad method, e.g.:

self.labelCounter = 0;

Now in your addRepButton method, set the label text to the value of that number (using %ld with a cast, since NSInteger is a long on 64-bit platforms):

textField.text = [NSString stringWithFormat:@"%ld", (long)self.labelCounter];

And within the same method, increment the counter for the next label:

self.labelCounter++;

That should do it.
| |
doc_23534603
|
As I don't want to break my existing installation: can this warning be ignored? Do I have to follow the advice, and what is the risk of doing so?
docker container exec -it kiwi_web /Kiwi/manage.py migrate
Operations to perform:
Apply all migrations: admin, attachments, auth, contenttypes, core, django_comments, kiwi_auth, linkreference, management, sessions, sites, testcases, testplans, testruns
Running migrations:
No migrations to apply.
Your models have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
A: This is a duplicate of "Some warning about manage.py migrate after upgrading to 6.4".
It was fixed in 6.5 (the problem came from a dependent library); see the changelog.
| |
doc_23534604
|
Is it possible to preload a webView before transitioning to the next view controller, or is the only way to indicate loading through activity indicators?
Can this be accomplished in Swift 5, perhaps by calling layoutSubviews()?
First VC:

override func viewDidLoad() {
    super.viewDidLoad()
    SecondVC.loadWebView() // PSEUDO of what I am trying to do
}

SecondVC:

func loadWebView() {
    // WKWebView init
    webView.navigationDelegate = self
    webView.addObserver(self, forKeyPath: "URL", options: .new, context: nil)
    view.addSubview(webView)
    view.backgroundColor = .systemBackground
    guard let url = URL(string: "https://connect.stripe.com/oauth/authorize?response_type=code&client_id=\(live_client_id)&scope=read_write") else {
        return
    }
    webView.load(URLRequest(url: url))
}
A: Let's say this is the view controller where you have your web view. You can put the code where you load the webpage inside its initializer.
class WebViewController: UIViewController {
    lazy private var webView: WKWebView = {
        let webView = WKWebView(frame: view.bounds)
        return webView
    }()

    private var url: URL!

    init(url: URL) {
        self.url = url
        super.init(nibName: nil, bundle: nil)
        self.loadWebsite()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    private func loadWebsite() {
        view.addSubview(webView)
        webView.load(URLRequest(url: url))
    }
}
And in the previous view controller, you instantiate the WebViewController prior to navigating to the web view. When you instantiate it, the initializer of the WebViewController is called and in turn loadWebsite() method which loads the web page.
class ViewController: UIViewController {
    let vc = WebViewController(url: URL(string: "https://stripe.com/")!)

    override func viewDidLoad() {
        super.viewDidLoad()
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
            self.showWebView()
        }
    }

    private func showWebView() {
        navigationController?.show(vc, sender: nil)
    }
}
So by the time you actually show the WebViewController, hopefully the webpage will be already loaded.
| |
doc_23534605
|
Here's an example query of what I'd like to do, but it's obviously not working:
SELECT DISTINCT p.* FROM wp_posts p
LEFT JOIN wp_term_relationships txrm ON p.ID = txrm.object_id
LEFT JOIN wp_term_taxonomy txm ON txrm.term_taxonomy_id = txm.term_taxonomy_id
LEFT JOIN wp_terms trm ON txm.term_id = trm.term_id
WHERE txm.taxonomy= 'mediums' AND ( trm.name LIKE '%Acrylic%' AND trm.name LIKE '%Oil%' )
AND p.post_status = 'publish'
AND p.post_type = 'gallery'
GROUP BY p.ID
ORDER BY p.post_date DESC
How do I make this work?
Thanks
A: Just in case anyone's looking for something similar, this is what I went with. It works, but I'm not sure how efficient it is:
SELECT DISTINCT p.* FROM wp_posts p
INNER JOIN (
SELECT txrm.object_id
FROM wp_term_relationships txrm
LEFT JOIN wp_term_taxonomy txm ON txrm.term_taxonomy_id = txm.term_taxonomy_id
LEFT JOIN wp_terms trm ON txm.term_id = trm.term_id
WHERE txm.taxonomy = 'mediums'
AND ( trm.name LIKE '%Acrylic%' OR trm.name LIKE '%Oil%' )
GROUP BY txrm.object_id
HAVING count(trm.name) = 2
) AS trm ON p.ID = trm.object_id
WHERE p.post_status = 'publish'
AND p.post_type = 'gallery'
GROUP BY p.ID
ORDER BY p.post_date DESC
So the subquery returns a list of post_ids/object_ids that have a count of 2:

object_id | term.name
----------|----------
1         | oil
1         | acrylic
2         | oil

So in the above, only post 1 has a count of 2, since it matches both oil and acrylic.
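For illustration only (not part of the original answer), here is a small Python sketch of the same all-terms-match idea that the HAVING count(trm.name) = 2 clause implements, using the sample rows above:

```python
from collections import defaultdict

# Rows of (object_id, term_name), mirroring the sample table above.
rows = [(1, "oil"), (1, "acrylic"), (2, "oil")]
wanted = {"oil", "acrylic"}

# Group terms per post, like GROUP BY txrm.object_id.
terms_by_post = defaultdict(set)
for object_id, term in rows:
    terms_by_post[object_id].add(term)

# Keep posts whose terms cover every wanted term,
# i.e. the HAVING count(trm.name) = 2 condition.
matching = sorted(oid for oid, terms in terms_by_post.items()
                  if wanted <= terms)
print(matching)  # [1]
```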
| |
doc_23534606
|
The problem is I don't know how to make the mole stay visible for a few seconds. I used wait(1000) in the holder of onDraw(), but it made the game freeze for that time. I tried to change SurfaceView back to View, but then the screen stopped refreshing.
Any advice?
Main activity Class
public class First_Stage extends Activity implements OnTouchListener {
    private AllViews allViews;
    private MoleView moleView;
    private PointsView pointsView;
    private TimerView timerView;
    private StageView stageView;
    private Mole mole;
    private MoveMole moleMove;
    private int points = 0;
    private StagePoints stagePoints;
    private PointsSingelton poin;
    private float x, y;
    private Club club;
    private ClubView clubView;
    private PointsSingelton pointsCount;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        club = new Club();
        clubView = new ClubView(this, club);
        mole = new Mole();
        stageView = new StageView(this);
        moleView = new MoleView(this, mole);
        pointsView = new PointsView(this);
        timerView = new TimerView(this, "3:33");
        allViews = new AllViews(this);
        allViews.setViews(stageView, moleView, pointsView, timerView, clubView);
        setContentView(allViews);
        allViews.setOnTouchListener((View.OnTouchListener) this);
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        x = event.getX();
        y = event.getY();
        moleView.setX(x);
        moleView.setY(y);
        allViews.setX(x);
        allViews.setY(y);
        if ((x < 100 && x > 0) && (y > 0 && y < 100)) {
            points = pointsCount.getInstance().nextPoint();
            pointsView.setPoint(points);
            moleView.setBool(true);
        }
        return true;
    }
}
All Views Class
public class AllViews extends SurfaceView implements SurfaceHolder.Callback, Runnable {
    private Club club;
    private ClubView clubView;
    private MoleView moleView;
    private PointsView pointsView;
    private TimerView timerView;
    private StageView mainView;
    private float x, y;
    private Paint test;
    private First_Stage first;
    Thread drawThread = new Thread(this);
    SurfaceHolder holder;
    private Bitmap clubPic;

    public AllViews(Context context) {
        super(context);
        test = new Paint();
        first = new First_Stage();
        holder = getHolder();
        holder.addCallback(this);
    }

    public void setViews(StageView mainView, MoleView moleView,
                         PointsView pointsView, TimerView timerView, ClubView clubView) {
        this.mainView = mainView;
        this.moleView = moleView;
        this.pointsView = pointsView;
        this.timerView = timerView;
        this.clubView = clubView; // was missing in the original, the parameter went unused
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        mainView.onDraw(canvas);
        moleView.onDraw(canvas);
        pointsView.onDraw(canvas);
        timerView.onDraw(canvas);
        clubPic = BitmapFactory.decodeResource(getResources(), R.drawable.clubdown);
        canvas.drawBitmap(clubPic, this.x - 39, this.y - 20, null);
    }

    @Override
    public void run() {
        Canvas c;
        while (true) {
            c = null;
            try {
                c = holder.lockCanvas(null);
                synchronized (holder) {
                    onDraw(c);
                }
            } finally {
                if (c != null) {
                    holder.unlockCanvasAndPost(c);
                }
            }
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        drawThread.start();
    }
}
moleView Class
public class MoleView extends View {
    private Mole mole;
    private Bitmap molePic;
    private float x, y;
    private boolean bool = false;

    public MoleView(Context context, Mole mole) {
        super(context);
        this.mole = mole;
    }

    public boolean isBamped() {
        float xmin = mole.getX();
        float xmax = mole.getX() + 60;
        float ymin = mole.getY();
        float ymax = mole.getY() + 46;
        if ((this.x < xmax && this.x > xmin) && (this.y < ymax && this.y > ymin)) {
            return true;
        }
        return false;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (!bool) {
            molePic = BitmapFactory.decodeResource(getResources(), R.drawable.nest_full_mole);
            canvas.drawBitmap(molePic, mole.getX(), mole.getY(), null);
            mole.moveMole();
        } else {
            molePic = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher);
            canvas.drawBitmap(molePic, mole.getX(), mole.getY(), null);
            molePic.recycle();
        }
    }
}
Mole Class
public class Mole {
    private float x;
    private float y;
    private Positions myPositions;

    public Mole() {
        super();
        myPositions = new Positions();
        this.x = myPositions.getRandomX();
        this.y = myPositions.getRandomY();
    }

    public float getX() {
        return x;
    }

    public void setX(float x) {
        this.x = x;
    }

    public float getY() {
        return y;
    }

    public void setY(float y) {
        this.y = y;
    }
}
A: I can't really find the place in the code where you want to keep it on screen, because there is a lot of code, but try this: create a new Handler in the class where you are drawing the moles. Then you can call its postDelayed() method, which executes a Runnable after a delay you define. Quite simple, in fact.
Links:
http://developer.android.com/reference/android/os/Handler.html
http://www.vogella.com/articles/AndroidPerformance/article.html
| |
doc_23534607
|
When I copy-paste some HTML content from Wikipedia, it actually inserts lots of &nbsp; entities which are not present in the source.
Example, I select the following string from wikipedia:
trained professionals and paraprofessionals coming
From this page: http://en.wikipedia.org/wiki/Health_care
And It has the following source code:
trained <a href="/wiki/Professional" title="Professional">professionals</a> and <a href="/wiki/Paraprofessional" title="Paraprofessional">paraprofessionals</a> coming
Note: as we can see, there are no non-breaking spaces (&nbsp;) in the source.
Then when I paste it to the tinymce it produces the following html:
<h3 style="background-image: none; margin: 0px 0px 0.3em; overflow: hidden; padding-top: 0.5em; padding-bottom: 0.17em; border-bottom-style: none; font-size: 17px; font-family: sans-serif; line-height: 19.200000762939453px;"><span style="font-size: 13px; font-weight: normal;">trained </span><a style="text-decoration: none; color: #0b0080; background-image: none; font-size: 13px; font-weight: normal;" title="Professional" href="http://en.wikipedia.org/wiki/Professional">professionals</a><span style="font-size: 13px; font-weight: normal;"> and </span><a style="text-decoration: none; color: #0b0080; background-image: none; font-size: 13px; font-weight: normal;" title="Paraprofessional" href="http://en.wikipedia.org/wiki/Paraprofessional">paraprofessionals</a><span style="font-size: 13px; font-weight: normal;"> coming</span></h3>
Or, as a plain text it would look like this:
trained professionals and paraprofessionals coming together
Which actually breaks my layout because it all goes in one line (as one word).
Any ideas why it does it and how to prevent it?
A: Whenever you copy content from websites, the styling of the text is copied along with it. So paste the copied content into Notepad first, then copy it again from there and paste it into TinyMCE.
(Notepad gives you the plain content without any inline styles.)
A: First copy the content from wherever (Wikipedia, Google, etc.) and paste it all into a Notepad file. The links and styling are removed there; then copy the Notepad content and paste it into the TinyMCE editor. This is the better way to handle it.
A: When copying content from a web page, use View Source in a browser and copy the relevant part from the source and then insert it in “raw mode” (source mode, HTML mode, whatever it is called—I presume TinyMce has got such a mode; if not, get a better tool). To make this easier, in Firefox, you can paint an area and then right-click and select the option of viewing the source of the selection. (Well this might need an add-on like DOM Inspector, I’m not sure.)
It’s possible that TinyMce converts spaces to something else even in “raw” mode. I have seen such things happen in a CMS (spuriously changing normal spaces to no-break spaces), with no explanation found, and I hope I won’t need to use such a CMS ever again.
| |
doc_23534608
|
My AJAX endpoint works fine, but the custom form type does not work:
class Select2AjaxDataCategoryType extends AbstractType
{
    /**
     * @var EntityManagerInterface
     */
    private $entityManager;

    /**
     * @var RouterInterface
     */
    private $router;

    public function __construct(EntityManagerInterface $entityManager, RouterInterface $router)
    {
        $this->entityManager = $entityManager;
        $this->router = $router;
    }

    public function getParent()
    {
        return ChoiceType::class;
    }

    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        $builder->resetModelTransformers();
        $builder->resetViewTransformers();
        $builder->addModelTransformer(new CallbackTransformer(
            function (?DataCategory $dc) {
                dump('model transform is called ' . ($dc ? $dc->getId()->toString() : 'null'));
                return $dc ? $dc->getId()->toString() : '';
            },
            function ($id): ?DataCategory {
                dump('model reversetransform is called ' . $id);
                $dc = $this->entityManager->getRepository(DataCategory::class)->find($id);
                if ($dc === null)
                    throw new TransformationFailedException("Konnte keine Datenkategorie mit ID $id finden");
                return $dc;
            }
        ));
        $builder->addViewTransformer(new CallbackTransformer( // identity transformer
            function ($dc) {
                dump('view transform is called ' . $dc);
                return $dc;
            },
            function ($id) {
                dump('view reversetransform is called ' . $id);
                return $id;
            }
        ));
        $builder->addEventListener(FormEvents::PRE_SUBMIT, function (FormEvent $event) { // makes validation pass
            $data = $event->getData();
            dump($data); // select2'd id, correct
            dump($event->getForm()->getName()); // name of my form field
            $event->getForm()->getParent()->add( // so this is like "overwriting"? Documented nowhere :-/
                $event->getForm()->getName(),
                ChoiceType::class,
                ['choices' => [$data => $data]]
            );
            $event->getForm()->getParent()->get($event->getForm()->getName())->setData($data);
        });
    }

    public function configureOptions(OptionsResolver $resolver)
    {
        $resolver->setRequired('currentDataCategory');
        $resolver->setAllowedTypes('currentDataCategory', [DataCategory::class]);
        $resolver->setDefaults([
            'attr' => [
                'data-ajax' => '1',
                'data-ajax-endpoint' => $this->router->generate('data-category-manage-select2')
            ]
        ]);
    }
}
When using this form type, it seems to work, but in the end no entity object is returned, just null. According to the Symfony debug toolbar, however, the value is received.
The dumps also indicate that the view and model transformers were called.
For the sake of completeness (I hope we'll find a perfect solution and help others), here is my JS code (it works):
$('select[data-ajax=1]').select2({
    theme: "bootstrap4",
    placeholder: "Bitte wählen",
    ajax: {
        url: function() { return $(this).data('ajax-endpoint'); },
        dataType: 'json',
        data: function (params) {
            var query = {
                search: params.term,
                page: params.page || 0
            };
            // Query parameters will be ?search=[term]&page=[page]
            return query;
        }
    }
});
A: I have solved the problem, here is my complete solution:
$('select[data-ajax=1]').select2({
    theme: "bootstrap4",
    placeholder: "Bitte wählen",
    ajax: {
        url: function() { return $(this).data('ajax-endpoint'); },
        dataType: 'json',
        data: function (params) {
            var query = {
                search: params.term,
                page: params.page || 0
            };
            // Query parameters will be ?search=[term]&page=[page]
            return query;
        }
    }
});
The new form type is fixed to one class, DataCategory, and works for both single and multiple selects.
I have built in a distinction between the select2 frontend and the standard EntityType (mainly for testing, because the new select2-based approach does not allow PHPUnit tests that use Symfony's Client (WebTestCase)): if there are fewer than 50 DataCategory entities in the DB, the field falls back to EntityType.
class Select2AjaxDataCategoryType extends AbstractType
{
    /**
     * @var EntityManagerInterface
     */
    private $entityManager;

    /**
     * @var RouterInterface
     */
    private $router;

    private $transformCallback;

    public function __construct(EntityManagerInterface $entityManager, RouterInterface $router)
    {
        $this->entityManager = $entityManager;
        $this->router = $router;
        $this->transformCallback = function ($stringOrDc) {
            if (is_string($stringOrDc)) return $stringOrDc;
            else return $stringOrDc->getId()->toString();
        };
    }

    public function getParent()
    {
        if ($this->entityManager->getRepository(DataCategory::class)->count([]) > 50)
            return ChoiceType::class;
        else
            return EntityType::class;
    }

    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        if ($this->entityManager->getRepository(DataCategory::class)->count([]) > 50) {
            $builder->addModelTransformer(new CallbackTransformer(
                function ($dc) {
                    /** @var $dc DataCategory|DataCategory[]|string|string[] */
                    /** @return string|string[] */
                    dump('model transform', $dc);
                    if ($dc === null) return '';
                    if (is_array($dc)) {
                        return array_map($this->transformCallback, $dc);
                    } else if ($dc instanceof Collection) {
                        return $dc->map($this->transformCallback);
                    } else {
                        return ($this->transformCallback)($dc);
                    }
                },
                function ($id) {
                    dump('model reversetransform', $id);
                    if (is_string($id)) {
                        $dc = $this->entityManager->getRepository(DataCategory::class)->find($id);
                        if ($dc === null)
                            throw new TransformationFailedException("Konnte keine Datenkategorie mit ID $id finden");
                        dump($dc);
                        return $dc;
                    } else {
                        $ret = [];
                        foreach ($id as $i) {
                            $dc = $this->entityManager->getRepository(DataCategory::class)->find($i);
                            if ($dc === null)
                                throw new TransformationFailedException("Konnte keine Datenkategorie mit ID $id finden");
                            $ret[] = $dc;
                        }
                        return $ret;
                    }
                }
            ));
            $builder->resetViewTransformers();
            $builder->addEventListener(FormEvents::PRE_SUBMIT, function (FormEvent $event) {
                $dataId = $event->getData();
                dump('presubmit', $dataId, $event->getForm()->getConfig()->getOptions()['choices']);
                if (empty($dataId))
                    return;
                $name = $event->getForm()->getName();
                if (is_array($dataId)) { // multiple-true case
                    if (!empty(array_diff($dataId, $event->getForm()->getConfig()->getOptions()['choices']))) {
                        $options = $event->getForm()->getParent()->get($name)->getConfig()->getOptions();
                        $options['choices'] = array_combine($dataId, $dataId);
                        $event->getForm()->getParent()->add($name, Select2AjaxDataCategoryType::class, $options);
                        $event->getForm()->getParent()->get($name)->submit($dataId);
                        $event->stopPropagation();
                    }
                } else { // multiple-false case
                    if ($dataId instanceof DataCategory) {
                        $dataId = $dataId->getId()->toString();
                        throw new \Exception('Hätte ich nicht erwartet, sollte string sein');
                    }
                    if (!in_array($dataId, $event->getForm()->getConfig()->getOptions()['choices'])) {
                        $options = $event->getForm()->getParent()->get($name)->getConfig()->getOptions();
                        $options['choices'] = [$dataId => $dataId];
                        $event->getForm()->getParent()->add($name, Select2AjaxDataCategoryType::class, $options);
                        $event->getForm()->getParent()->get($name)->submit($dataId);
                        $event->stopPropagation();
                    }
                }
            });
            // $builder->addEventListener(FormEvents::PRE_SET_DATA, function (FormEvent $event) {
            //     dump("pre set data", $event->getData());
            // });
        }
    }

    public function configureOptions(OptionsResolver $resolver)
    {
        if ($this->entityManager->getRepository(DataCategory::class)->count([]) > 50) {
            $resolver->setDefaults([
                'attr' => [
                    'data-ajax' => '1',
                    'data-ajax-endpoint' => $this->router->generate('data-category-manage-select2')
                ],
                'choices' => function (Options $options) {
                    $data = $options['data'];
                    dump('data', $data);
                    if ($data !== null) {
                        if (is_array($data) || $data instanceof Collection) {
                            $ret = [];
                            foreach ($data as $d) {
                                $ret[$d->description . ' (' . $d->name . ')'] = $d->getId()->toString();
                            }
                            dump($ret);
                            return $ret;
                        } else if ($data instanceof DataCategory) {
                            return [$data->description . ' (' . $data->name . ')' => $data->getId()->toString()];
                        } else {
                            throw new \InvalidArgumentException("Argument unerwartet.");
                        }
                    } else {
                        return [];
                    }
                }
            ]);
        } else {
            $resolver->setDefaults([
                'class' => DataCategory::class,
                'choice_label' => function ($cat, $key, $index) { return DataCategory::choiceLabel($cat); },
                'choices' => function (Options $options) {
                    return $this->entityManager->getRepository(DataCategory::class)->getValidChildCategoryChoices($options['currentDataCategory']);
                }
            ]);
        }
    }
}
It is very important to set the 'data' option when using this new type; otherwise the choices option is not correctly set:

$builder->add('summands', Select2AjaxDataCategoryType::class, [
    'currentDataCategory' => $mdc,
    'data' => $mdc->summands->toArray(),
    'multiple' => true,
    'required' => false,
    'label' => 'Summierte Kategorien',
]);
| |
doc_23534609
|
# Set CurrentDirectory
$callingDir = Split-Path -Parent $MyInvocation.MyCommand.Path
[Environment]::CurrentDirectory = $callingDir

# Includes
$MainScriptName = "XXX.SharePoint.Powershell.YYY.ps1"
$MainScriptPath = Join-Path -Path $callingDir -ChildPath $MainScriptName
if (Test-Path $MainScriptPath)
{
    # use file from local folder
    . $MainScriptPath
}
else
{
    # use central file (via PATH variable)
    . $MainScriptName
}
Setup
$WebAppUrl = "NONE"
$SolutionPackageName = "Dataport.Survey.Webpart.wsp"
InstallSolution $SolutionPackageName $WebAppUrl
TearDown
After that, the solution is shown as "deployed".
But if I want to use the workflow (for example on a list), I can't. In the site settings (under "Workflows") the workflow is shown as inactive.
But why? What can I do to use the workflow?
Thank you in advance!
A: All the script you posted does is install (deploy) a solution via InstallSolution $SolutionPackageName $WebAppUrl. Also, this is not standard PowerShell, and you only posted half the script.
You may need to activate some features after deploying the workflow.
A: OK, the problem is fixed now. I had accidentally set hidden="true" on the feature.
With hidden="false" the workflow is usable.
| |
doc_23534610
|
function myFunction()
{
    const links = document.querySelector('.sidebar-nav').querySelectorAll('li a');
    links.forEach(function(item) {
        item.justifyContent = 'flex-start';
    });
}

aside {
    display: flex;
    flex-direction: column;
    width: var(--edge-side);
    padding-top: 4rem;
}

.sidebar-nav li {
    display: flex;
    align-items: stretch;
    height: 60px;
    width: auto;
}

.sidebar-nav li a {
    display: flex;
    justify-content: center;
    align-items: center;
    width: 100%;
}
<aside>
<nav>
<ul class="sidebar-nav">
<li><a href="#">
<span class="material-icons">apps</span>
<span class="page-name">Page 1</span></a></li>
<li><a href="#">
<span class="material-icons">reorder</span>
<span class="page-name">Page 2</span></a></li>
</ul>
</nav>
</aside>
A: You need to set .style.justifyContent.
function myFunction() {
    const links = document.querySelector('.sidebar-nav').querySelectorAll('li a');
    links.forEach(function(item) {
        item.style.justifyContent = 'flex-start';
    });
}
myFunction();

aside {
    display: flex;
    flex-direction: column;
    width: var(--edge-side);
    padding-top: 4rem;
}

.sidebar-nav li {
    display: flex;
    align-items: stretch;
    height: 60px;
    width: auto;
}

.sidebar-nav li a {
    display: flex;
    justify-content: center;
    align-items: center;
    width: 100%;
}
<aside>
<nav>
<ul class="sidebar-nav">
<li>
<a href="#">
<span class="material-icons">apps</span>
<span class="page-name">Page 1</span></a>
</li>
<li>
<a href="#">
<span class="material-icons">reorder</span>
<span class="page-name">Page 2</span></a>
</li>
</ul>
</nav>
</aside>
| |
doc_23534611
|
I want a button in a WebView to call a method in my Java code. This method should call the Facebook SDK's authorize() function and do the SSO/dialog way of authentication. The access token and expiry time are returned to the WebView when I call a JavaScript method on it.
Here's what I've created already.
In the onCreate() of my activity I'm initializing the WebView.
mFB = new Facebook(APP_ID);
wv = (WebView) findViewById(R.id.web_view);
wv.getSettings().setJavaScriptEnabled(true);
wv.addJavascriptInterface(new JSInterface(this), "JAVA");
wv.loadUrl("file:///android_asset/test.html");
The test.html in my assets folder is this -
<script type="text/javascript">
    function authorizeFacebook() {
        JAVA.authorizeFacebook();
    }
    function showData(token, expire) {
        document.getElementById('result').innerHTML = token + " >>>> " + expire;
    }
</script>
The interfacing between JS and Java are working fine. That I'm sure of. My JSInterface is -
public class JSInterface {
    public Context mContext;

    JSInterface(Context c) {
        mContext = c;
    }

    public void authorizeFacebook() {
        Log.e("FB", "authorizeFacebook() interface called");
        authorizeFacebookSSO();
    }
}
public void authorizeFacebookSSO() {
    mFB.authorize(FBCMTestActivity.this, new DialogListener() {
        @Override
        public void onFacebookError(FacebookError e) {
            Log.e("FBAUTH", "FB failed + " + e.getErrorCode());
            Toast.makeText(getApplicationContext(), "FBFAIL:" + e.getMessage(), Toast.LENGTH_LONG).show();
        }

        @Override
        public void onError(DialogError e) {
            Log.e("FBAUTH", "FB failed + " + e.getMessage());
            Toast.makeText(getApplicationContext(), "FBFAIL:" + e.getMessage(), Toast.LENGTH_LONG).show();
        }

        @Override
        public void onComplete(Bundle values) {
            Log.e("FBAUTH", "SUCCESS");
            Log.e("FBAUTH:", mFB.getAccessToken() + " " + mFB.getAccessExpires());
            wv.loadUrl("javascript:showData( '" + mFB.getAccessToken() + "' , '" + mFB.getAccessExpires() + "');");
        }

        @Override
        public void onCancel() {
        }
    });
}
When I have the Facebook app installed, this works great.
But when there is no Facebook app, it should ideally show a dialog with a WebView. Instead it fails and gets stuck at the 'Loading...' screen.
It just stays there and doesn't even crash, and there are no logs. After a while I can either force close it or keep waiting. Has anyone faced this issue before?
UPDATE
My onActivityResultCode() -
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    mFB.authorizeCallback(requestCode, resultCode, data);
}
A: Well, I figured this out myself.
When I call a Java function from the WebView's JavaScript, the function runs on the WebView's thread. Making it run on the UI thread fixed everything for me :)
Hope that helps others.
Here's the only change that I've made in the above code.
public class JSInterface {
    public Context mContext;

    JSInterface(Context c) {
        mContext = c;
    }

    public void authorizeFacebook() {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                try {
                    authorizeFacebookSSO();
                } catch (JSONException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
| |
doc_23534612
|
#include <iostream>
#include <string>
using namespace std;

class point {
public:
    int _x{ 0 };
    int _y{ 0 };
    point() {}
    point(int x, int y) : _x{ x }, _y{ y } {}
    operator string() const
    { return '[' + to_string(_x) + ',' + to_string(_y) + ']'; }
    friend ostream& operator<<(ostream& os, const point& p) {
        // Which one? Why?
        os << static_cast<string>(p); // Option 1
        os << p.operator string();    // Option 2
        return os;
    }
};
Should one call a conversion operator directly, or rather just call static_cast and let that do the job?
Those two lines will pretty much do exactly the same thing (which is to call the conversion operator); there's no real difference between their behavior as far as I can tell. So the real question here is whether that's true or not. Even though these seem the same to me, there could still be subtle differences that one might fail to pick up on.
So are there any practical differences between those approaches (including ones that might not apply to this example), other than the fact that the syntax for them is different? Which one should be preferred, and why?
A:
So are there any practical differences between those approaches
In this case, not that I know of, behaviour wise.
(including ones that might not apply to this example)
static_cast<X>(instance_of_Y) would also allow conversion if X has a converting constructor for the type Y. An explicit call to (possibly non-existent) conversion operator of Y could not use the mentioned converting constructor. In this case of course, std::string does not have a converting constructor for point.
So, the cast is more generic and that is what I would prefer in general. Also, "convert this object to type string" is more meaningful than "call the operator string()". But if for some very strange reason you want to avoid the converting constructor, then an explicit call to the conversion operator would achieve that.
A: No, you never need to call the conversion operator member function directly.
If you use an instance of the class where a std::string object is expected then the conversion operator will be called automatically by the compiler, as will it if you use e.g. static_cast to cast an instance to std::string.
Simple and stupid example:
void print_string(std::string const& s)
{
std::cout << s << '\n';
}
int main()
{
point p(1, 2);
print_string(p); // Will call the conversion operator
print_string(static_cast<std::string>(p)); // Will call the conversion operator too
}
The closest you will ever need to get to calling the function directly is using something like static_cast.
In your specific case, with the output operator, then you should use static_cast. The reason is semantic and for future readers and maintainers (which might include yourself) of the code.
It will of course work to call the conversion member function directly (your option 2), but it loses the semantic information that says "here I'm converting the object to a string".
If the only use of the conversion operator is to use in the output operator, you might as well create a (private) function that takes the output stream reference as an argument, and writes directly to the stream, and call that function from the output operator.
| |
doc_23534613
|
Error in stl(timeseries[[1]]) :
series is not periodic or has less than two periods
here is my data:
> head(stations[[1]])
Date Unit Temp
1 0013-06-30 10:00:01 C 32.5
2 0013-06-30 10:20:01 C 32.5
3 0013-06-30 10:40:01 C 33.5
4 0013-06-30 11:00:01 C 34.5
5 0013-06-30 11:20:01 C 37.0
6 0013-06-30 11:40:01 C 35.5
which i have converted to time series class:
timeseries[[1]] = as.ts(stations[[1]]$Temp,freq=26280)
note : frequency is high as data is taken every 20 minutes
Is the error with stl() due to a disagreement over the frequency? I have a feeling that I may have done something wrong when making my data a time series and that this has thrown off the ability to calculate the period of the series.
I do need all this data, as the entire set only covers 4 days worth of data (hence the high frequency)
Thank you for your help!
A: The error message tells you that in order to estimate a seasonal component of your time series, you need data for at least two seasons. If you have 4 days' worth of temperature data, you probably want the seasonal component to be in days. Therefore you should set up your time series accordingly. You have 24*3 observations a day, so that should be the frequency.
timeseries[[1]] <- ts(stations[[1]]$Temp, frequency=24*3)
Then stl(timeseries[[1]], "periodic") should work, although I cannot test it, since it requires at least 2 days of data; the 2 hours shown above isn't enough.
| |
doc_23534614
|
I have numerous background processes that I need to run, which are pretty intensive, so I wanted to write them all via the CLI so they would not disturb Apache. I just wanted a little more information on how exactly cron functions, and whether it allows multiple instances to run at a time with different items.
Thank you in advance for your kindness and help.
A: Yes, you can have multiple cronjobs running simultaneously; they run as different processes and do not interfere with each other.
I have a datamining site with 7 different cronjobs; at least one of them is almost always running, and usually 2+ run at the same time.
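As a sketch, a crontab along these lines (the paths and script names are made up for this example) schedules several CLI jobs; when schedules overlap, each firing simply becomes its own independent process:

```
# m  h  dom mon dow  command
*/5  *  *   *   *    php /var/www/app/cron/mine_data.php
0    *  *   *   *    php /var/www/app/cron/aggregate.php
30   2  *   *   *    php /var/www/app/cron/cleanup.php
```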
| |
doc_23534615
|
I read up on this and found that 401 means the server requires authentication. However, I do not receive any available authentication schemes. My WinHTTP config settings are on 'Auto-detect'.
int wmain()
{
std::string content;
curl_global_init(CURL_GLOBAL_ALL);
CURL *curl = curl_easy_init();
if (curl)
{
curl_easy_setopt(curl, CURLOPT_URL, "company internal URL");
curl_easy_setopt(curl, CURLOPT_USERPWD, "usr:pwd");
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &content);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writer);
CURLcode code = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
curl_global_cleanup();
std::cout << content;
std::cin.get();
return 0;
}
I am very new to using libcurl and have limited working experience with C/C++. Your help is appreciated to identify the problem. Thanks!
A: Try to add curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
Also, of course, "usr:pwd" should be replaced, but I assume you did that.
Example below
https://curl.haxx.se/libcurl/c/CURLOPT_HTTPAUTH.html
CURL *curl = curl_easy_init();
if(curl) {
CURLcode ret;
curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
/* allow whatever auth the server speaks */
curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_easy_setopt(curl, CURLOPT_USERPWD, "james:bond");
ret = curl_easy_perform(curl);
curl_easy_cleanup(curl);
}
| |
doc_23534616
|
from dagster import job, op
@op
def input_string():
ret = input('Enter string')
print(ret)
@job
def my_job():
input_string()
if __name__ == '__main__':
my_job.execute_in_process()
I then run the following in console:
dagit -f test.py
When I finally "Launch Run" however, I don't get an opportunity to enter input, and instead get an EOFError with the following info:
dagster.core.errors.DagsterExecutionStepExecutionError: Error occurred
while executing op "input_string": File
"C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\execute_plan.py",
line 232, in dagster_event_sequence_for_step
for step_event in check.generator(step_events): File "C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\execute_step.py",
line 354, in core_dagster_event_sequence_for_step
for user_event in check.generator( File "C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\execute_step.py",
line 70, in _step_output_error_checked_user_event_sequence
for user_event in user_event_sequence: File "C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\compute.py",
line 170, in execute_core_compute
for step_output in yield_compute_results(step_context, inputs, compute_fn): File
"C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\compute.py",
line 138, in yield_compute_results
for event in iterate_with_context( File "C:\Users\username\Anaconda3\lib\site-packages\dagster\utils\__init__.py",
line 403, in iterate_with_context
return File "C:\Users\username\Anaconda3\lib\contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback) File "C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\utils.py",
line 73, in solid_execution_error_boundary
raise error_cls( The above exception was caused by the following exception: EOFError: EOF when reading a line File
"C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\utils.py",
line 47, in solid_execution_error_boundary
yield File "C:\Users\username\Anaconda3\lib\site-packages\dagster\utils\__init__.py",
line 401, in iterate_with_context
next_output = next(iterator) File "C:\Users\username\Anaconda3\lib\site-packages\dagster\core\execution\plan\compute_generator.py",
line 65, in _coerce_solid_compute_fn_to_iterator
result = fn(context, **kwargs) if context_arg_provided else fn(**kwargs) File "test.py", line 14, in input_string
ret = input('Enter string')
How can I get this to run?
A: ops are configured using a config schema. This allows you to provide configuration via the Dagit Launchpad.
In your case you'd want to remove the input call from your @op code. You would then retrieve the input from the config object provided to your op using the context.op_config dictionary, something like this:
@op(config_schema={'input1': str})
def input_string(context):
ret = context.op_config['input1']
print(ret)
@job
def my_job():
input_string()
if __name__ == '__main__':
my_job.execute_in_process()
edit: To get your input to print in the Dagster job console, use the built-in Dagster logger like this:
@op(config_schema={'input1': str})
def input_string(context):
ret = context.op_config['input1']
context.log.info(ret)
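With the config schema above, the value is then supplied in the Dagit Launchpad as run config; for this op it would look roughly like the following (the string value is just an example):

```yaml
ops:
  input_string:
    config:
      input1: "hello from the launchpad"
```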
| |
doc_23534617
|
If SelectedStartDate is 7/1/2013 and SelectedEndDate is 7/31/2013, then this code returns the Sundays 7/2, 7/9, 7/16, 7/23, 7/30, but my expected dates are 7/7, 7/14, 7/21, 7/28.
static IEnumerable<DateTime> SundaysBetween(DateTime SelectedStartDate, DateTime SelectedEndDate)
{
DateTime current = SelectedStartDate;
if (DayOfWeek.Sunday == current.DayOfWeek)
{
yield return current;
}
while (current < SelectedEndDate)
{
yield return current.AddDays(1);
current = current.AddDays(7);
}
if (current == SelectedEndDate)
{
yield return current;
}
}
A: static IEnumerable<DateTime> SundaysBetween(DateTime startDate, DateTime endDate)
{
DateTime currentDate = startDate;
while(currentDate <= endDate)
{
if (currentDate.DayOfWeek == DayOfWeek.Sunday)
yield return currentDate;
currentDate = currentDate.AddDays(1);
}
}
A: public IEnumerable<DateTime> SundaysBetween(DateTime start, DateTime end)
{
while (start.DayOfWeek != DayOfWeek.Sunday)
start = start.AddDays(1);
while (start <= end)
{
yield return start;
start = start.AddDays(7);
}
}
A: This can be accomplished pretty easily using AddDays without overcomplicating the issue. Here's a short snippet I wrote to demonstrate:
// Setup
DateTime startDate = DateTime.Parse("7/1/2013");
DateTime endDate = DateTime.Parse("7/31/2013");
// Execute
while (startDate < endDate)
{
if (startDate.DayOfWeek == DayOfWeek.Sunday)
{
yield return startDate;
}
startDate = startDate.AddDays(1);
}
| |
doc_23534618
|
/**
* Migrations
**/
// Create user__groups table
Schema::create('user__groups', function (Blueprint $table) {
$table->increments('id');
$table->string('name', 50);
});
// Create users table
Schema::create('users', function (Blueprint $table) {
$table->increments('id');
$table->timestamps(); // created_at, updated_at DATETIME
$table->string('username', 40);
$table->string('password', 20);
$table->integer('user_group_id')->unsigned();
$table->foreign('user_group_id')->references('id')->on('user__groups');
});
/**
* Models
**/
class User extends Ardent {
protected $table = 'users';
public $timestamps = true;
protected $hidden = array('password');
public $autoPurgeRedundantAttributes = true;
protected $fillable = array('*');
// Validation rules for fields in this entity
public static $rules = array(
'username' => 'required|unique:users',
'password' => 'required|alpha_dash|min:6|max:20|confirmed',
'password_confirmation' => 'required|alpha_dash|min:6|max:20'
);
// Relations
public static $relationsData = array(
'userGroup' => array(self::BELONGS_TO, 'UserGroup')
);
// Model mock data for test purposes
public static $factory = array(
'username' => 'string',
'password' => '123123',
'password_confirmation' => '123123',
'user_group_id' => 'factory|UserGroup'
);
}
class UserGroup extends Ardent {
protected $table = 'user__groups';
public $timestamps = false;
public $autoPurgeRedundantAttributes = true;
protected $fillable = array('*');
// Validation rules for fields in this entity
public static $rules = array(
'name' => 'required|unique:user__groups|alpha_dash'
);
// Relations
public static $relationsData = array(
'users' => array(self::HAS_MANY, 'User')
);
// Model mock data for test purposes
public static $factory = array(
'name' => 'string'
);
}
PHPUnit test
public function test_assignUserToGroup() {
/* @var $user User */
$user = FactoryMuff::instance('User');
// Test assigning user to group 1
$group1 = FactoryMuff::create('UserGroup');
$this->assertTrue($group1->users()->save($user) !== false, "User model did not save!".$user->errors());
// Test assigning user to group 2 (this fails)
$group2 = FactoryMuff::create('UserGroup');
$this->assertTrue($group2->users()->save($user) !== false, "User model did not update!".$user->errors()); // <-- The save method always returns false
}
The test run will reflect that the user object is not updated. Why? What am I doing wrong? I would expect the code below // Test assigning user to group 2 to perform an update on the existing object, but instead DB::getQueryLog() only displays selects and inserts. This is really annoying.
-- Edit --
It's actually validation that's stopping me. I added a call to Model->errors() in the test above. This pointed out that after save, the username contained in the User object was no longer unique - because it found itself in the database (WTF?). It also said that the password_confirmation field was marked required, but was empty - because it was removed upon the last save.
This is just stupid and I do not know if it is Eloquent, Ardent or me who's at fault here. If it was valid before save, it should be valid after save too - right? I will change the title of the question to reflect the real cause of the problem.
A: The only solution I've found to this problem, is to not use the 'unique' and 'confirmed' validation rules. I will handle validation for these upon form submission only.
I'm actually not very happy with this solution, so I came up with a slightly different approach:
// Validation rules for fields in this entity
public static $rules = array(
'username' => 'required|alpha_dash|min:4|max:40',
'password' => 'required|alpha_dash|min:6|max:20',
'password_confirmation' => ''
);
// Extra validation for user creation
public static $onCreateRules = array(
'username' => 'required|alpha_dash|min:4|max:40|unique:users',
'password' => 'required|alpha_dash|min:6|max:20|confirmed'
);
Here I split the validation rules into two static arrays, and instead of extending the class from Ardent directly I extend it from a custom class:
use LaravelBook\Ardent\Ardent;
class ModelBase extends Ardent {
public function save(array $rules = array(), array $customMessages = array(), array $options = array(), Closure $beforeSave = null, Closure $afterSave = null) {
if (!$this->exists && isset(static::$onCreateRules)) {
$rules = array_merge($rules, static::$onCreateRules);
}
return parent::save($rules, $customMessages, $options, $beforeSave, $afterSave);
}
}
This overrides the Ardent save method and applies the $onCreateRules for validation ONLY if this entity does not previously exist. :-)
I still think Eloquent's validation engine is broken. For one, a call to validate() should only validate dirty fields. Secondly, it should automatically exclude the current entity id from a unique check. So in any case the solution I present here is a workaround.
I hope someone from Laravel sees this and finds it in their heart to fix it.
| |
doc_23534619
|
<bean id="beansInst" factory="beanFactory" factory-method="getInstance" />
In factory bean:
Object getInstance() {
....
String beanName= ????;
}
How can I get the name of the bean that is calling this method at that moment?
And a second question: do I need to make this method (getInstance) synchronized?
Thanks.
A: You'd need to implement BeanNameAware. The container then invokes the setBeanName method and provides the name value. You can then set the beanName property in that method.
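A minimal sketch of what that can look like (note: BeanNameAware normally comes from org.springframework.beans.factory; it is declared inline here only so the snippet is self-contained, and the synchronized keyword is included as one answer to the second question, since Spring does not synchronize your factory method for you):

```java
// Stand-in for org.springframework.beans.factory.BeanNameAware; in a real
// project, import Spring's interface instead of declaring it.
interface BeanNameAware {
    void setBeanName(String name);
}

class BeanFactoryImpl implements BeanNameAware {
    private String beanName;

    // Invoked by the container after the bean is constructed.
    @Override
    public void setBeanName(String name) {
        this.beanName = name;
    }

    // synchronized guards against concurrent getInstance() calls;
    // whether you actually need it depends on what getInstance() does.
    public synchronized Object getInstance() {
        return "created by bean '" + beanName + "'";
    }
}
```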
| |
doc_23534620
|
I am using the browser gem to detect and block older non modern browsers.
Rails.configuration.middleware.use Browser::Middleware do
include ApplicationHelper
redirect_to :controller => 'error', :action => 'browser-upgrade-required' if browser_is_not_supported
end
Helper method I am currently working with:
# test browser version
def browser_is_not_supported
return true unless browser.modern?
return true if browser.chrome? && browser.version.to_i < ENV['BROWSER_BASE_VERSION_GOOGLE'].to_i
return true if browser.firefox? && browser.version.to_i < ENV['BROWSER_BASE_VERSION_FIREFOX'].to_i
return true if browser.safari? && browser.version.to_i < ENV['BROWSER_BASE_VERSION_SAFARI'].to_i
return true if browser.opera? && browser.version.to_i < ENV['BROWSER_BASE_VERSION_OPERA'].to_i
return true if browser.ie? && browser.version.to_i < ENV['BROWSER_BASE_VERSION_MSFT'].to_i
end
A: This is one way to do it:
# lib/browser_util.rb
module BrowserUtil
def self.supported?(browser)
# your code ...
end
end
and wrap that from ApplicationHelper for use in views
module ApplicationHelper
def is_browser_supported?
BrowserUtil.supported?(browser)
end
end
in middleware
Rails.configuration.middleware.use Browser::Middleware do
unless BrowserUtil.supported?(browser)
redirect_to :controller => 'error', :action => 'browser-upgrade-required'
end
end
UPDATE: it does not need to be in a separate module (BrowserUtil)
module ApplicationHelper
def self.foo
"FOO"
end
def foo
ApplicationHelper.foo
end
end
in middleware use
ApplicationHelper.foo
in views it would use the included method
foo
A: Of course you can, but that's a bad idea. I agree that this logic is supposed to live somewhere in the app, but sometimes you have to deal with it. I am saying this because you can block the request in middleware, before the old-browser request reaches and loads the Rails stack.
Anyway, this is how you can do it:
Rails.configuration.middleware.use Browser::Middleware do
self.class.send(:include,ApplicationHelper)
redirect_to :controller => 'error', :action => 'browser-upgrade-required' unless browser_is_supported?
end
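The send(:include, ...) trick is used because Module#include was a private method on the Rubies current at the time (it was made public in Ruby 2.1). A self-contained sketch of the same pattern, with made-up module and method names:

```ruby
module BrowserHelpers
  def browser_supported?
    true
  end
end

class MiddlewareContext
  def check
    # include is invoked via send because it used to be a private method
    self.class.send(:include, BrowserHelpers)
    browser_supported?
  end
end

puts MiddlewareContext.new.check  # prints true
```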
| |
doc_23534621
|
The instruction used is:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph --in_graph=/tf-optimizations/pascalvoc_gtt/frozen_inference_graph.pb --out_graph=/tf-optimizations/pascalvoc_gtt/optimized_frozen_inference_graph.pb --inputs='image_tensor' --outputs='detection_boxes,detection_scores,detection_classes,num_detections' --transforms='
add_default_attributes
strip_unused_nodes(type=float)
remove_nodes(op=CheckNumerics)
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms
fuse_resize_pad_and_conv
fuse_pad_and_conv
fuse_resize_and_conv
strip_unused_nodes
sort_by_execution_order'
When I try to query the optimized tensorflow graph,
(boxes, scores, classes, num) = sess.run([detection_boxes, detection_scores, detection_classes, num_detections], feed_dict={image_tensor: image_np_expanded})
I get the following error:
InvalidArgumentError: NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=false, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=false, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice)]]
Caused by op u'Preprocessor/map/TensorArray', defined at:
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelapp.py", line 478, in start
self.io_loop.start()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 281, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 232, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 397, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-7-982229c93a39>", line 7, in <module>
tf.import_graph_def(od_graph_def, name='')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=<unknown>; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=false, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: Preprocessor/map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_FLOAT, dynamic_size=false, element_shape=<unknown>, identical_element_shapes=false, tensor_array_name="", _device="/job:localhost/replica:0/task:0/device:GPU:0"](Preprocessor/map/TensorArrayUnstack/strided_slice)]]
I do not know what is wrong, as the original frozen graph works fine.
A: Problem solved. It's a version issue. The model was generated in a different TensorFlow version than the GTT.
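A quick way to reason about such mismatches: a GraphDef written by a newer TensorFlow can carry node attrs (like identical_element_shapes, added around TF 1.4) that an older runtime does not recognize, so the runtime loading the graph should be at least as new as the one that produced it. A small pure-Python sketch of that rule of thumb (the version numbers are just examples):

```python
def can_load(producer_version: str, loader_version: str) -> bool:
    """Rule of thumb: the loading runtime should be >= the producing one."""
    def parse(v):
        return tuple(int(part) for part in v.split("."))
    return parse(loader_version) >= parse(producer_version)

# Graph transformed with TF 1.4.0 but loaded with TF 1.3.0 -> trouble
print(can_load("1.4.0", "1.3.0"))  # False
print(can_load("1.4.0", "1.4.1"))  # True
```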
| |
doc_23534622
|
My question is: what am I doing wrong? Or maybe this is a restriction of embedded Tomcat (though I found nothing about a difference)?
Here is my context.xml:
<Context crossContext="true">
<WatchedResource>WEB-INF/web.xml</WatchedResource>
</Context>
Configuration of tomcat plugin (I wonder if it is important, but who knows):
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId>
<version>2.0</version>
<configuration>
<path>/</path>
<port>8080</port>
<addContextWarDependencies>true</addContextWarDependencies>
<addWarDependenciesInClassloader>true</addWarDependenciesInClassloader> <warSourceDirectory>${project.build.directory}/${project.build.finalName}/</warSourceDirectory>
<webapps>
<webapp>
<groupId>lfcms-several-webapps-proto</groupId>
<artifactId>webapp1</artifactId>
<version>1.0-SNAPSHOT</version>
<type>war</type>
<asWebapp>true</asWebapp>
</webapp>
<webapp>
<groupId>lfcms-several-webapps-proto</groupId>
<artifactId>webapp2</artifactId>
<version>1.0-SNAPSHOT</version>
<type>war</type>
<asWebapp>true</asWebapp>
</webapp>
</webapps>
</configuration>
</plugin>
Here is code of core servlet:
final ServletContext additionalContext = ctx.getContext("/webapp1");
if (additionalContext == null) throw new ServletException("can't get context of /webapp1");
final RequestDispatcher disp = additionalContext.getRequestDispatcher("/webapp1");
disp.include(req, resp);
| |
doc_23534623
|
In doc / module re-export...
/// The `Measurement` trait and the `implement_measurement!` macro
/// provides a common way for various measurements to be implemented.
///
/// # Example
/// ```
/// #[macro_use] // <-- Not sure this is correct / necessary...
/// use measurements::measurement::*;
///
/// struct Cubits {
/// forearms: f64
/// }
///
/// impl Measurement for Cubits {
/// fn get_base_units(&self) -> f64 {
/// self.forearms
/// }
///
/// fn from_base_units(units: f64) -> Self {
/// Cubits { forearms: units }
/// }
/// }
///
/// // Invoke the macro to automatically implement Add, Sub, etc...
/// implement_measurement! { Cubits }
/// ```
#[macro_use]
pub mod measurement;
In definition...
pub use std::ops::{Add,Sub,Div,Mul};
pub use std::cmp::{Eq, PartialEq};
pub use std::cmp::{PartialOrd, Ordering};
pub trait Measurement {
fn get_base_units(&self) -> f64;
fn from_base_units(units: f64) -> Self;
}
#[macro_export]
macro_rules! implement_measurement {
($($t:ty)*) => ($(
impl Add for $t {
type Output = Self;
fn add(self, rhs: Self) -> Self {
Self::from_base_units(self.get_base_units() + rhs.get_base_units())
}
}
// ... others ...
))
}
Update
This question did end up being a duplicate, but I feel a better example of how it was solved would help. Here are the changes that made my doc test work with a macro. As shown here, you must
*Add a main function. This does something to move your crate root.
*Reference the extern crate of your own module.
*Add a #[macro_use] tag to the crate reference. You can optionally choose which macros to import (see the docs).
Code:
/// The `Measurement` trait and the `implement_measurement!` macro
/// provides a common way for various measurements to be implemented.
///
/// # Example
/// ```
/// // Importing the `implement_measurement` macro from the external crate is important
/// #[macro_use]
/// extern crate measurements;
///
/// use measurements::measurement::*;
///
/// struct Cubits {
/// forearms: f64
/// }
///
/// impl Measurement for Cubits {
/// fn get_base_units(&self) -> f64 {
/// self.forearms
/// }
///
/// fn from_base_units(units: f64) -> Self {
/// Cubits { forearms: units }
/// }
/// }
///
/// // Invoke the macro to automatically implement Add, Sub, etc...
/// implement_measurement! { Cubits }
///
/// // The main function here is only included to make doc tests compile.
/// // You shouldn't need it in your own code.
/// fn main() { }
/// ```
#[macro_use]
pub mod measurement;
| |
doc_23534624
|
CREATE TABLE #temp (
id int,
num int,
question varchar(50),
qversion int );
INSERT INTO #temp VALUES(1, 1, 'Question 1 v1', 1);
INSERT INTO #temp VALUES(2, 1, 'Question 1 v2', 2);
INSERT INTO #temp VALUES(3, 2, 'Question 2 v1', 1);
INSERT INTO #temp VALUES(4, 2, 'Question 2 v2', 2);
INSERT INTO #temp VALUES(5, 2, 'Question 2 v3', 3);
INSERT INTO #temp VALUES(6, 3, 'Question 3 v1', 1);
SELECT *
FROM #temp;
DROP TABLE #temp;
And I would like to get a table that displays the three questions in their latest version. This is in SQL Server 2005.
A: CREATE TABLE #temp (
id int,
num int,
question varchar(50),
qversion int );
INSERT INTO #temp VALUES(1, 1, 'Question 1 v1', 1);
INSERT INTO #temp VALUES(2, 1, 'Question 1 v2', 2);
INSERT INTO #temp VALUES(3, 2, 'Question 2 v1', 1);
INSERT INTO #temp VALUES(4, 2, 'Question 2 v2', 2);
INSERT INTO #temp VALUES(5, 2, 'Question 2 v3', 3);
INSERT INTO #temp VALUES(6, 3, 'Question 3 v1', 1);
WITH latest AS (
SELECT num, MAX(qversion) AS qversion
FROM #temp
GROUP BY num
)
SELECT #temp.*
FROM #temp
INNER JOIN latest
ON latest.num = #temp.num
AND latest.qversion = #temp.qversion;
DROP TABLE #temp;
A: SELECT t1.id, t1.num, t1.question, t1.qversion
FROM #temp t1
LEFT OUTER JOIN #temp t2
ON (t1.num = t2.num AND t1.qversion < t2.qversion)
GROUP BY t1.id, t1.num, t1.question, t1.qversion
HAVING COUNT(*) < 3;
A: You're using SQL Server 2005, so it's worth at least exploring the over clause:
select
*
from
(select *, max(qversion) over (partition by num) as maxVersion from #temp) s
where
s.qversion = s.maxVersion
A: I would like to get a table to display the three latest versions of each question.
*I assume that qversion is increasing with time. If this assumption is backwards, remove the desc keyword from the answer.
*The table definition does not have an explicit not null constraint on qversion. I assume that a null qversion should be excluded. (Note: Depending on settings, lack of an explicit null/not null in the declaration may result in a not null constraint.) If the table does have a not null constraint, then the text where qversion is not null should be removed. If qversion can be null, and nulls need to be included in the result set, then additional changes will need to be made.
CREATE TABLE #temp (
id int,
num int,
question varchar(50),
qversion int );
INSERT INTO #temp VALUES(1, 1, 'Question 1 v1', 1);
INSERT INTO #temp VALUES(2, 1, 'Question 1 v2', 2);
INSERT INTO #temp VALUES(3, 2, 'Question 2 v1', 1);
INSERT INTO #temp VALUES(4, 2, 'Question 2 v2', 2);
INSERT INTO #temp VALUES(5, 2, 'Question 2 v3', 3);
INSERT INTO #temp VALUES(7, 2, 'Question 2 v4', 4);
-- ^^ Added so at least one row would be excluded.
INSERT INTO #temp VALUES(6, 3, 'Question 3 v1', 1);
INSERT INTO #temp VALUES(8, 4, 'Question 4 v?', null);
select id, num, question, qversion
from (select *,
row_number() over (partition by num order by qversion desc) as RN
from #temp
where qversion is not null) T
where RN <= 3
| |
doc_23534625
|
The wiki page for the new API recommends the following for DTOs:
In Service development your services DTOs provides your technology agnostic Service Layer which you want to keep clean and as 'dependency-free' as possible for maximum accessibility and potential re-use. Our recommendation is to keep your service DTOs in a separate largely dep-free assembly.
There is also this snippet
*But let's say you take the normal route of copying the DTOs (in either source of binary form) so you have something like this on the client:
[Route("/reqstars")]
public class AllReqstars : IReturn<List<Reqstar>> { }
The code on the client now just becomes:
var client = new JsonServiceClient(BaseUri);
List<Reqstar> response = client.Get(new AllReqstars());
Which makes a GET web request to the /reqstars route. When a custom route is not present on the client it automatically falls back to using ServiceStack's pre-defined routes.
My question is... does the "largely dep-free" assembly still require a dependency on ServiceStack due to the Route attribute on the DTO classes?
A: The [Route] attribute exists in the ServiceStack.Interfaces project, so you still only need a reference to the dependency and impl-free ServiceStack.Interfaces.dll. This is by design, we want to ensure the minimum dependency as possible which is why we'll try to keep all metadata attributes you might use on DTO's in the Interfaces project.
The reason for wanting to keep your DTO's in a separate assembly is to reduce the dependencies required by your clients in order to use it. This makes it less invasive and more accessible for clients. Also your DTOs represent your Service Contract, keeping them separate encourages the good practice of decoupling them from the implementation, which you want to continue to be free to re-factor.
| |
doc_23534626
|
JSON data link is-json link
My HTML looks like this-
<div class="container" >
<div class="row searchbar">
<div class="col-xs-8 col-xs-offset-2">
<div class="input-group">
<div class="input-group-btn search-panel dropdown">
<button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown">
<span id="search_concept">Sort by<span class="caret"></span>
</button>
<ul ng-model="sortColumn" class="dropdown-menu" role="menu">
<li><a >Team 1</a></li>
<li><a >Team 2</a></li>
<li><a >Score 1</a></li>
<li><a >Score 2</a></li>
</ul>
</div>
<input type="text" class="form-control" name="x" ng-model=filterField placeholder="Search term...">
<span class="input-group-btn">
<button class="btn btn-default" type="button"><span class="glyphicon glyphicon-search"></span></button>
</span>
</div>
</div>
</div>
<table class="table table-striped" id="myTable">
<thead >
<tr class="info ">
<th class="text-center">Match</th>
<th class="text-center">Team 1</th>
<th class="text-center">Score 1</th>
<th class="text-center">Team 2</th>
<th class="text-center">Score 2</th>
</tr>
</thead>
<div ng-controller="matchesController as matchCtrl">
<tbody ng-repeat="match in matchCtrl.matchesData ">
<tr ng-repeat="mydata in match.matches | filter:filterField | orderBy:matchCtrl.orderProperty">
<td class="text-center" >{{match.name |filter:matchname}}<br>
<span id="date">{{mydata.date | date:fullDate }}</span></td>
<td class="text-center" >{{mydata.team1.name | uppercase}}<br>
<span id="code">[{{mydata.team1.code}}]</span>
</td>
<td class="text-center">{{mydata.score1}}<span ng-show="mydata.score1 === null">Not Available</span></td>
<td class="text-center" >{{mydata.team2.name | uppercase}}<br>
<span id="code">[{{mydata.team2.code}}]</span>
</td>
<td class="text-center">{{mydata.score2}}<span ng-show="mydata.score2 === null">Not Available</span></td>
</tr>
</tbody>
</table>
</div>
</div>
My controller looks like this-
myApp.controller('matchesController',['$http',function($http) {
//create a context
var match = this;
this.matchesData=[];
this.loadAllMatches = function(){
$http({
method: 'GET',
url:'https://raw.githubusercontent.com/openfootball/football.json
/master/2016-17/en.1.json',
}).then(function successCallback(response) {
match.matchesData=response.data.rounds;
console.log(match.matchesData);
}, function errorCallback(response) {
alert("some error occurred. Check the console.");
console.log(response);
});
};// end load all blogs
this.loadAllMatches();
}]); // end controller
A: It would help if you create plunkers for these questions. Basically, just implement a method to change the value of a property using ng-model, and set the orderBy col to ng-model.
I've added '.' notation to ng-model to help with scoping issues, as ng-repeat creates its own scope.
Try this:
<select name="singleSelect" ng-model="sorting.orderCol">
<option value="name">Name</option>
<option value="age">Age</option>
<option value="phone">Phone</option>
</select>
<table class="friends">
<tr>
<th>Name</th>
<th>Phone Number</th>
<th>Age</th>
</tr>
<tr ng-repeat="friend in friends | orderBy:sorting.orderCol">
<td>{{friend.name}}</td>
<td>{{friend.phone}}</td>
<td>{{friend.age}}</td>
</tr>
</table>
Plunker: https://plnkr.co/edit/Rh6TbmbIUkAjwMeevdP1?p=preview
| |
doc_23534627
|
A: I did a quick search for you, would something like this fit your needs?
$(function() {
$('a[href*="#"]:not([href="#"])').click(function() {
if (location.pathname.replace(/^\//,'') == this.pathname.replace(/^\//,'') && location.hostname == this.hostname) {
var target = $(this.hash);
target = target.length ? target : $('[name=' + this.hash.slice(1) +']');
if (target.length) {
$('html, body').animate({
scrollTop: target.offset().top
}, 1000);
return false;
}
}
});
});
Found here: https://css-tricks.com/snippets/jquery/smooth-scrolling/
You can test it out there as well!
A: You may try something like this in jQuery:
$('html,body').animate({
scrollTop: $(".targetDiv").offset().top},
'slow');
| |
doc_23534628
|
Now I have a form.
1)I set the layout to grid layout. grid is 2 X 3 (2 rows and 3 cols)
2)I add 6 buttons and these buttons occupy the 6 cells. Each button has an image and text associated with it.
3)I have styled the buttons in such a way, that they do not have borders.So now, the buttons don't really have the look and feel of a button. They just look like images with some text below them.
4)Now these images don't occupy the entire screen. So if I have an Android device with a very big screen, I see 3 images in the first row, a very big gap, and then 3 images in the 2nd row.
5)I would expect that, if i accidently click anywhere between the first row and the second row (in the gap between the two rows of buttons/images) nothing should happen.
6)However, the thing is, the grid occupies the entire screen. So even if I click in the gap between the two rows of buttons/images, the individual cells are so huge that whenever I click within the gap, I am actually still clicking inside a cell of the grid. This cell captures the event and transfers it to the button in that cell, and some action happens.
7)I don't want that to happen. I want the action to happen ONLY WHEN the user puts his finger on the image.
How do i do this? The solution should work without issues on cellphones with small/big/medium size screens.
A: You have two options depending on what you want to achieve:
*
*Place the GridLayout Container within a NORTH section of a border layout container. This will align the images to the top. You can play with a hierarchy/type of layouts easily (which is why the GUI builder is really cool).
*Place each button within a flow layout container which will keep it in its preferred size. You can set the flow layout to center align etc.
| |
doc_23534629
|
ticks = [-5.0, -4, -3, -2, -1, -0.128]
(These strange values are calculated and will change dynamically.)
But there are 2 important details I want to get right: the first and the last value should be float numbers to show the exact values at the beginning and the end. The values in between shall be integers only, to keep it readable.
To set the ticks, I did
axes = plt.subplot()
axes.set_xticks(ticks)
And I get what I want, but it looks bad because my integer values are still printed as float values with 3 decimals.
I want to get ticks -5.000, -4, -3, -2, -1, -0.128 and not -5.000, -4.000, ....
Any idea what I can do to solve this? Thank you! :)
A: Thanks to ImportanceOfBeingErnest, I was able to solve it quickly. Thank you very much!
For other people with the same problem:
ticks = [-5.0, -4, -3, -2, -1, -0.128]
tickLabels = map(str, ticks)
axes = plt.subplot()
axes.set_xticks(ticks)
axes.set_xticklabels(tickLabels)
| |
doc_23534630
|
The goal is to run multiple instances of the script at the same time. I tried parallel processing, this did not turn out that well. Therefore, I am simply using multiple kernels. I have tried a lot of methods to reduce memory usage, but nothing seems to work.
I am using Jupyter notebooks (Python 3.8.5) (anaconda) in VS code, have a 64 bit Windows system. 16GB of RAM and a Intel i7 8th gen.
First Cell calls the packages, loads the data and sets the parameters.
# import required packages
import matplotlib.dates as mpdates
import matplotlib.pyplot as plt
import mplfinance as mpf
import matplotlib as mpl
from PIL import Image
import pandas as pd
import math as math
import numpy as np
import io as io
import gc as gc
import os as os
#set run instance number
run=1
#timeframe
tf = 20
#set_pixels
img_size=56
#colors
col_up = '#00FF00'
col_down = '#FF0000'
col_vol = "#0000FF"
#set directory
direct = "C:/Users/robin/1 - Scriptie/images/"
#loading the data
data1 = pd.read_csv(r'D:\1 - School\Econometrics\2020 - 2021\Scriptie\Explainable AI\Scripts\Data\test_data.csv',header=[0, 1] , index_col = 0 )
data1.index=pd.to_datetime(data1.index)
#subsetting the data
total_symbols = math.floor(len(data1.columns.unique(level=0))/6)
symbols1 = data1.columns.unique(level=0)[(run-1)*total_symbols:run*total_symbols]
#set the plot parameters
mc = mpf.make_marketcolors(up = col_up ,down = col_down, edge='inherit', volume= col_vol, wick='inherit')
s = mpf.make_mpf_style(marketcolors=mc)
The second cell defines the function used to plot the charts:
# creating candlestick chart with volume
def plot_candle(i,j,data,symbols,s,mc,direct,img_size, tf):
#slicing data into 30 trading day windows
data_temp=data[symbols[j]][i-tf:i]
#creating and saving the candlestick charts
buf = io.BytesIO()
save = dict(fname= buf, rc = (["boxplot.whiskerprops.linewidth",10]),
pad_inches=0,bbox_inches='tight')
mpf.plot(data_temp,savefig=save, type='candle',style=s, volume=True, axisoff=True,figratio=(1,1),closefig=True)
buf.seek(0)
im = Image.open(buf).resize((img_size,img_size))
im.save(direct+"/"+str(symbols[j])+"/"+str(i-tf+1)+".png", "PNG")
buf.close()
plt.close("all")
The third cell loops through the data and calls the functions in the 2nd cell.
#check if images folder excists, if not, create it.
if not os.path.exists(direct):
os.mkdir("C:/Users/robin/1 - Scriptie/images")
for j in range(0,len(symbols1)):
#Check if symbol folder excists, if not, create it
if not os.path.exists(direct+"/"+symbols1[j]):
os.mkdir(direct + "/"+symbols1[j])
for i in range(tf,len(data1)) :
#check if the file has already been created
if not os.path.exists(direct+"/"+str(symbols1[j])+"/" +str(i-tf+1)+".png"):
#call the functions and create the
plot_candle(i , j , data1 , symbols1 ,s ,mc ,direct , img_size, tf)
gc.collect()
A: Promoting from a comment:
The issue is that by default Matplotlib tries to use a GUI-based backend (it makes a GUI window for every plot). When you close them, we tear down our side of things and tell the GUI to tear down its (C++-based) side of things. However, that teardown happens on the GUI event loop, which is never run in this case, hence the C++-side objects accumulate in an "about to be deleted" state until it runs out of memory.
By setting the backend to 'agg' we do not try to make any GUI windows at all so there is no GUI objects to tear down (the best optimization is to not do the thing ;) ). I would expect it to also be marginally faster in wall time (because again, do not do work you do not need to do!).
See https://matplotlib.org/tutorials/introductory/usage.html#backends for more details on backends, see https://matplotlib.org/users/interactive.html and the links there in for how the GUI integration works.
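A minimal sketch of the fix described above; the loop count and figure sizes are illustrative, not taken from the original script:

```python
import io

import matplotlib
matplotlib.use("agg")  # select the non-GUI Agg backend before pyplot is imported
import matplotlib.pyplot as plt

# Render many figures; with 'agg' no GUI windows are created, so closing a
# figure frees it immediately instead of waiting on a GUI event loop.
for _ in range(20):
    fig, ax = plt.subplots(figsize=(1, 1))
    ax.plot([0, 1], [0, 1])
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)

backend = matplotlib.get_backend()
open_figures = len(plt.get_fignums())  # 0: nothing left open
```

Note that `matplotlib.use(...)` must run before `pyplot` is imported for the first time, which is why it comes first in the cell.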
| |
doc_23534631
|
We inherited this project from another team and are trying to figure out if the ThreadPoolTaskExecutors are being used correctly. Below is a configuration of TaskExecutors:
@Bean
public TaskExecutor businessTaskExecutor() {
ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
pool.setCorePoolSize(30);
pool.setMaxPoolSize(Integer.MAX_VALUE);
pool.setQueueCapacity(Integer.MAX_VALUE);
return pool;
}
@Bean
public TaskExecutor eventTaskExecutor() {
ThreadPoolTaskExecutor pool = new ThreadPoolTaskExecutor();
pool.setCorePoolSize(30);
pool.setMaxPoolSize(Integer.MAX_VALUE);
pool.setQueueCapacity(Integer.MAX_VALUE);
return pool;
}
There are 5 TaskExecutors defined as indicated above. I'm not an expert, but I know for sure they should be configured differently. These executors are used as follows:
@Bean
public MessageChannel inputChannel() {
return new PublishSubscribeChannel(businessTaskExecutor());
}
@Bean
public MessageChannel outputChannel() {
PublishSubscribeChannel outputChannel = new PublishSubscribeChannel(
businessTaskExecutor());
outputChannel
.addInterceptor(new WireTap(eventTrackerChannel()));
return outputChannel;
}
@Bean
public MessageChannel eventTrackerChannel() {
return new ExecutorChannel(eventTaskExecutor());
}
The input and output channels are used in some ServiceActivator. The eventTrackerChannel is used to split the Spring Integration flow and write some events on DB. These are just examples to understand how the project is structured.
Now the question is, are taskexecutors used correctly? If we eliminate the ThreadPoolTaskExecutors and they are not provided for the channels, should Spring manage the threads? Could there be problems in proceeding with this second approach?
I would like to learn Spring Integration as best I can and the answers to these questions would help a lot. I thank in advance who will help me understand this behavior.
A: A thread pool is a helper that lets us avoid holding the main thread while some order-independent work is running. Using these pools is not mandatory, but I am afraid you may lose performance or increase the chance of failure without them, since these mechanisms normally have some kind of control beyond pooling, like queuing events when the pool is full or reusing threads. Of course each case needs to be checked to see what fits better, but if you don't need these events to happen in a specific order, I suggest you leave this as it is.
What I did find unusual is how big your pools are set. But I don't know your requirements, so I can't conclude anything about this. What I normally see is a core pool size of something like 5 and a max size like 10. Also, maybe you don't need a separate pool for each thing; one pool for all of them may be suitable. But as I said, I can't say what's right or wrong, since each system has its own requirements.
The last detail that could be improved is not hardcoding this setup: put the values in your application.properties, e.g. maxThreadPoolSize=10 or threadPoolSize=5, and read them when you set up the bean.
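To illustrate the sizing point above with plain java.util.concurrent (the numbers 5/10/100 are illustrative, not a recommendation for this system), here is a bounded pool instead of one with Integer.MAX_VALUE everywhere; note that with a bounded queue, extra threads beyond the core size are only created once the queue is full:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static ThreadPoolExecutor boundedPool() {
        // 5 core threads, at most 10, idle extras retired after 60s,
        // and a work queue capped at 100 entries.
        return new ThreadPoolExecutor(
                5, 10, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(100));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = boundedPool();
        System.out.println(pool.getCorePoolSize() + " " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```

Spring's ThreadPoolTaskExecutor wraps exactly this class, so the same core/max/queue trade-off applies to the beans shown in the question.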
| |
doc_23534632
|
Example URL:
http://www.example.org/training_book.asp?sInstance=1&EventID=139
What I have so far:
RewriteCond %{QUERY_STRING} ^training_book.asp\?sInstance=1&EventID=139
RewriteRule /clean-landing-url/ [NC,R=301,L]
So, what I want to happen is
http://www.site.org/training_book.asp?sInstance=1&EventID=139 301> http://www.site.org/clean-landing-url
but instead what is happening is this:
http://www.site.org/training_book.asp?sInstance=1&EventID=139 301> http://www.site.org/training_book.asp/?sInstance=1&EventID=139
It's appending a forward slash just before the querystring, and then resolving the full URL (obviously, 404ing.)
What am I missing? Is it a regex issue with the actual %{QUERY_STRING} parameter?
Thanks in advance!
EDIT -
Here's where I am so far.
Based upon the advice from @TerryE below, I've tried implementing the following rule.
I have a set of URLs with the following parameters:
http://www.example.org/training_book.asp?sInstance=1&EventID=139
http://www.example.org/training_book.asp?sInstance=2&EventID=256
http://www.example.org/training_book.asp?sInstance=5&EventID=188
etc.
which need to redirect to
http://www.example.org/en/clean-landing-url-one
http://www.example.org/en/clean-landing-url-two
http://www.example.org/en/clean-landing-url-three
etc.
This is the exact structure of the htaccess file I have currently, including the full examples of the "simple" redirects which are presently working fine (note - http://example.com > http://www.example.com redirects enforced in httpd.conf)
#301 match top level pages
RewriteCond %{HTTP_HOST} ^example\.org [NC]
RewriteRule ^/faq.asp /en/faqs/ [NC,R=301,L]
All URLs in this block are of this type. All these URLs work perfectly.
#Redirect all old dead PDF links to English homepage.
RewriteRule ^/AR08-09.pdf /en/ [NC,R=301,L]
All URLs in this block are of this type. All these URLs work perfectly.
The problem is here: I still can't get the URLs of the below type to redirect. Based upon advice from @TerryE, I attempted to change the syntax as below. The below block does not function correctly.
#301 event course pages
RewriteCond %{QUERY_STRING} sInstance=1EventID=139$
RewriteRule ^training_book\.asp$ /en/clean-landing-url-one? [NC,R=301,L]
The output of this is
http://staging.example.org/training_book.asp/?sInstance=1&EventID=139
(this is currently applying to staging.example.org, will apply to example.org)
(I had "hidden" some of the actual syntax by changing it to event_book from training_book in the initial question, but I've changed it back to be as real as possible.)
A: Read the documentation. QUERY_STRING contains the request content after the ?. Your condition regexp will never match. This makes more sense:
RewriteCond %{QUERY_STRING} ^sInstance=1&EventID=139$
RewriteRule ^event_book\.asp$ /clean-landing-url/ [NC,R=301,L]
The forward slash is caused by a different Apache filter (DirectorySlash).
| |
doc_23534633
|
Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
System.out.println(cal.get(Calendar.WEEK_OF_YEAR));
System.out.println(cal.getTime());
output:
28
Fri Jul 08 08:56:04 BST 2016
Below is the command I executed in MYSQL:
select week(CURDATE()), CURDATE();
Output:
27 2016-07-08
How do I sync the two week-of-year values? I tried week(CURDATE(),0), still the same result; I also tried without the TimeZone but got the same result.
A: The definition of the "first week in a year" is an abitrary one. You need to find out what the MySQL definition being used is and make Java match it, or find out what the default Java definition is and make MySQL match it.
On the Java side, it will be influenced by Calendar#setFirstDayOfWeek and Calendar#setMinimalDaysInFirstWeek (possibly others, check the Calendar docs).
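As a sketch of matching MySQL's default WEEK() mode 0 from the Java side (week starts on Sunday; week 1 is the first full week). This assumes mode 0 is actually in use, and dates before the year's first full week still differ at the edges (Java reports them as the last week of the previous year, MySQL as week 0):

```java
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class WeekDemo {
    // Illustrative helper: WEEK_OF_YEAR tuned to MySQL's default WEEK() mode 0.
    public static int mysqlStyleWeek(int year, int month, int day) {
        Calendar cal = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        cal.clear();
        cal.set(year, month, day);
        cal.setFirstDayOfWeek(Calendar.SUNDAY);  // MySQL mode 0: weeks start on Sunday
        cal.setMinimalDaysInFirstWeek(7);        // week 1 is the first full week
        return cal.get(Calendar.WEEK_OF_YEAR);
    }

    public static void main(String[] args) {
        // 2016-07-08: MySQL's week(CURDATE()) returned 27 in the question
        System.out.println(mysqlStyleWeek(2016, Calendar.JULY, 8)); // 27
    }
}
```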
| |
doc_23534634
|
Is there a way to add these extension methods to the already existing pure Java JAR or am I forced to create a new Kotlin specific module that has to be published separately? If so, will they be visible to Java users, in what way?
I don't mind using the Kotlin compiler to compile my library if this avoids the release of separate JARs just for literally 3 lines of code.
I need these extension methods to work around type-inference / method reference resolution differences between Java and Kotlin.
A: Extension methods are compiled to static Java methods, for example from app.kt into the class AppKt, i.e. they are available to Java as well using AppKt.method(), as explained in the documentation.
Both your Java code and Kotlin code compile to Java Bytecode class files and can go into the same jar, i.e., no need to ship multiple jars. My personal build system of choice for building Kotlin/Java code is Gradle, but this is up to you.
| |
doc_23534635
|
Obviously, using this current method is insecure as you could redirect the URL to the localhost and fake the result. I guess my real question is, using this method, how can I make it more secure? Is there a way for my C# application to identify that it is connecting to the real PHP file by signing it or something?
A: Your license server must give the software an important piece of information, not a simple "valid/invalid" flag, if you want to make sure it cannot be easily faked. Additionally, the answer of the server should change over time in a non-predictable way, otherwise it is barely better than the previous flag solution.
Note that on the other side, you must make your customers aware that their software will stop working if your server goes down. I bet you'll get some angry calls and threats of lawsuits if you didn't tell them, and things break in the middle of your night... Be prepared to guarantee 100% uptime of your license server 24/7.
| |
doc_23534636
|
import pandas as pd
raw_data = {'first_name': ['Jason', 'Molly', 'Jason', 'Jake', 'Molly'],
'last_name': ['Miller', 'Jacobson', 'Miller', 'Milner', 'Jacobson'],
'age': [42, 73, 42, 24, 73],
'point_1': [4, 24, 31, 2, 93],
'point_2': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age',
'point_1', 'point_2'])
If I try:
df.groupby(['first_name','last_name','age']).sum()
I have:
point_1 point_2
first_name last_name age
Amy Cooze 73 3 70
Jake Milner 24 2 62
Jason Miller 42 4 25
Molly Jacobson 52 24 94
Tina Ali 36 31 57
and my columns are only:
df.groupby(['first_name','last_name','age']).sum().columns.values
array(['point_1', 'point_2'], dtype=object)
but I also need the three initial columns.
A: The cols you grouped by became the index. If you don't want that:
df.groupby(['first_name','last_name','age']).sum().reset_index()
A: Setting as_index to False on the groupby call should do it.
df.groupby(['first_name','last_name','age'],as_index=False).sum()
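A quick runnable check of that behavior, reusing a few rows from the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "first_name": ["Jason", "Molly", "Jason"],
    "last_name": ["Miller", "Jacobson", "Miller"],
    "age": [42, 73, 42],
    "point_1": [4, 24, 31],
    "point_2": [25, 94, 57],
})

# With as_index=False the grouping keys stay as ordinary columns
grouped = df.groupby(["first_name", "last_name", "age"], as_index=False).sum()
print(list(grouped.columns))
# ['first_name', 'last_name', 'age', 'point_1', 'point_2']
```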
| |
doc_23534637
|
plugin then deleted this plugin using FTP. After that I again uploaded the same fresh plugin, but its configuration settings still hold the previous values I had added, like CSS etc. I checked the database, but this plugin does not generate any table where it stores the CSS data, so why is the previous data showing up in the fresh plugin's settings?
A: Normally good plugins always provide options to uninstall them completely (files and their database modifications) from the admin panel itself.
First of all please note that "It is a good practice to delete plugins from WordPress admin in spite of deleting their files from FTP".
Now for the case when you delete a plugin from WordPress admin and it still automatically fills in your previous settings on the next install: it means that this plugin is not coded well.
So we need to check below mentioned things:
*
*Sometimes plugin create tables in database at the time of their installation, so check for them and delete them.
*Some plugin save their values in "wp_options" table in WordPress database. First check manually for them and then try few plugins that help you in doing this like: https://wordpress.org/plugins/plugins-garbage-collector and https://wordpress.org/plugins/clean-options
*Some plugins create few files or folders generally in "wp-content" directory. Check for them.
Hope this will solve your problem.
A: This is because some plugins store data in the wp_options table in your database, the correct way to uninstall a Plugin, is doing it from the WordPress Plugin Tab:
1.- Go To WordPress Dashboard
2.- Click on Plugins > Installed Plugins
3.- Select the Plugin that you want to delete and click on deactivate (if it's activated)
4.- Then click Delete
That way even the data that the plugin saves in wp_options will be removed. Otherwise, if you want to delete that data manually, you will need MySQL knowledge to find the data in the database.
| |
doc_23534638
|
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-services</artifactId>
<version>2.0.0.Final</version>
</dependency>
With complete documentation here. I cannot find the required API here to fetch all users with a specific role mapped to them.
Problem Statement - I need to pick all users from the Keycloak server who have a specific role, and send an email to all of them.
A: This should now be possible with the updated REST endpoint.
Set<UserRepresentation> usersOfRole = realmResource.roles().get(roleName).getRoleUserMembers();
A: Here is another interesting query, which would also display other useful fields.
SELECT kr_role.REALM_ID 'Realm', cl.CLIENT_ID 'Realm Client',
kr_role.NAME 'Role Name',
kr_role.DESCRIPTION 'Role Description',
user_ent.USERNAME 'Domain ID', user_ent.EMAIL 'Email'
FROM keycloak_role kr_role, user_role_mapping role_map,
user_entity user_ent, client cl
WHERE role_map.USER_ID = user_ent.ID
AND kr_role.ID = role_map.ROLE_ID
AND kr_role.CLIENT = cl.ID
AND cl.REALM_ID = '<realm_name>'
AND cl.CLIENT_ID = '<client_name>'
ORDER BY 1, 2, 3;
A: Based on the documentation it appears to be this API:
GET /{realm}/clients/{id}/roles/{role-name}/users
It has been there for a while. In this older version, however, it was not possible to get more than 100 users this way. This was fixed later, and a pagination option was added.
A: There is an outstanding feature request asking for this function via the API.
In the meantime if your requirement is once-off you could obtain the user names (or email addresses) by interrogating the database joining KEYCLOAK_ROLE to USER_ROLE_MAPPING to USER_ENTITY
Something like:
SELECT username
FROM keycloak_role kr
JOIN user_role_mapping rm ON kr.id = rm.role_id
JOIN user_entity ue ON rm.user_id = ue.id
WHERE kr.name = 'your_role_name';
A: If anyone is still searching for a Postgres Query to find information regarding users/roles/groups in keycloak database, I came up with this one lately.
It uses two CTEs to have the groups and roles straight (recursing for groups in groups, because they can nest in arbitrary depth and fetching composite roles with their parents) and a simple UNION for group and direct assignments.
Please note the WHERE clause, where you can limit the realm and different aspects. You can search for
*
*all roles from a specific user (just matching username)
*all users, that have a particular role assigned (matching role_name)
*everything coming from a specific group (I sometimes use it without the username column in the projection to just see, what roles a group has. Please note the prefix in the group column)
-- flat out GROUPS in GROUPS
WITH RECURSIVE groups AS (
SELECT
g.id,
g.id AS parent_group,
g.name,
g.name AS parent_name,
g.realm_id,
1 AS iter
FROM
keycloak_group g
UNION
SELECT
groups.id,
parents.parent_group,
groups.name,
parents.name,
groups.realm_id,
groups.iter + 1
FROM
groups
INNER JOIN keycloak_group parents ON groups.parent_group = parents.id
),
-- Collect roles and composite roles
roles AS (
SELECT
r.id,
r.name AS role_name,
null AS base_role,
c.client_id
FROM
keycloak_role r
LEFT JOIN client c ON r.client = c.id
UNION
SELECT
r.id,
r2.name,
r.name,
c.client_id
FROM
keycloak_role r
JOIN composite_role cr ON r.id = cr.composite
JOIN keycloak_role r2 ON r2.id = cr.child_role
LEFT JOIN client c ON r.client = c.id
)
SELECT DISTINCT
username,
role_name,
base_role, -- for composite roles
client_id,
source,
realm_name
FROM
(
-- Roles from Groups
SELECT
ue.username,
roles.role_name,
roles.base_role,
roles.client_id,
ue.realm_id,
'group ' || g.name AS source,
realm.name AS realm_name
FROM
user_entity ue
JOIN realm ON ue.realm_id = realm.id
JOIN user_group_membership ugm ON ue.id = ugm.user_id
JOIN groups g ON g.id = ugm.group_id
JOIN group_role_mapping grm ON g.parent_group = grm.group_id
JOIN roles roles ON roles.id = grm.role_id
UNION
-- direct role assignments on User
SELECT
ue.username,
roles.role_name,
roles.base_role,
roles.client_id,
ue.realm_id,
'direct',
realm.name
FROM
user_entity ue
JOIN realm ON ue.realm_id = realm.id
JOIN user_role_mapping urm ON ue.id = urm.user_id
JOIN roles roles ON roles.id = urm.role_id
) AS a
WHERE
realm_name = 'realm_name'
AND (
-- username = 'username'
role_name IN ('roleName')
-- source = 'group GROUPNAME'
)
ORDER BY
username,
role_name
;
This query works from keycloak 9 to 16.1.1 (the last jboss/keycloak version I got from docker hub).
A: SELECT username,
kr.NAME,
kr.REALM_ID
FROM KEYCLOAK_ROLE kr
JOIN USER_ROLE_MAPPING rm ON kr.id = rm.role_id
JOIN USER_ENTITY ue ON rm.user_id = ue.id
ORDER BY USERNAME,
NAME,
REALM_ID;
| |
doc_23534639
|
public static void doubleTapElementBy(By by) {
WebElement el = getDriver().findElement(by);
MultiTouchAction multiTouch = new MultiTouchAction(getDriver());
TouchAction action0 = new TouchAction(getDriver()).tap(el).waitAction(50).tap(el);
try {
multiTouch.add(action0).perform();
} catch (WebDriverException e) {
logger.info("Unable to do second tap on element, probably because element requires single tap on this Android version");
}
}
A: You can also try below approach using tap method in TouchAction class.
TouchAction taction = new TouchAction(driver);
taction.tap(tapOptions().withElement(ElementOption.element(YOUR_WebElement))
.withTapsCount(2)).perform();
You will need to add below static import as well:
import static io.appium.java_client.touch.TapOptions.tapOptions;
A: This is a workaround in pseudocode and possibly there's a more "official" way to do it, but it should do the work if no other solution is available:
InterpretMessages() {
    switch (msg) {
        OnClick: {
            if (lastClicked - thisTime() < 0.2)   // if it was clicked very recently
                doubleTapped()                    // handle it as a double tap
            else
                lastClicked = thisTime()          // otherwise keep the time of the tap
        } // end of OnClick
    } // end of switch
} // end of message handler
If you have access to ready timer functions, you can set a function to be executed 0.2s after the click has gone off:
OnClick: if (!functionWaiting)  // has the timer not been set?
    {
        enableTimer();  // schedule a function to run in x time
        clicks = 0;     // we'll record this first click just below
    }
    clicks++;  // becomes 2 for a double tap, otherwise stays 1
So, the idea is that when you get a tap, you check whether there has been another one recently (a. by comparing the tap times, b. by checking whether the function is still pending) and you handle it accordingly; just note that you will have to implement a timer so your function fires a bit later, giving you time to catch a second tap.
The style draws upon the Win32's message handling, I'm pretty sure it works there, it should work for you too.
A: Double tap and hold -- Use below code:
new TouchAction(driver).press(112,567).release().perform().press(112,567).perform();
Double tap -- Use below code:
new TouchAction(driver).press(112,567).release().perform().press(112,567).release().perform();
| |
doc_23534640
|
S1=rog1.groupby('Date')['availabi'].mean()
S1.index
# output
DatetimeIndex(['2018-05-10', '2018-06-10', '2018-07-10'],
dtype='datetime64[ns]', name='Date', freq=None)
But when I decide to plot the lot.
plt.figure(figsize=(10,4))
plt.plot(S1.index, S1)
The below is what I get
The y-axis values are fine. I don't know where the plotted values are coming from; I only have 3 entries in this Series.
A: The issue is that matplotlib auto-detects the number and spacing of x-ticks to populate the x-axis without overlapping labels, and also without leaving too much white space.
The simplest workaround I can think of:
1. Create figure and axis handles
2. Plot your data in the axis
3. Manually set the xtick positions and labels
Code to replace your two lines of plotting:
fig, ax = plt.subplots(figsize=(10, 4))
S1.plot(ax=ax)
ax.set_xticks(S1.index);
ax.set_xticklabels(S1.index.strftime('%Y-%m-%d'));
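The same idea as a self-contained sketch; the series values and the 'agg' backend are illustrative stand-ins for S1 and the notebook environment:

```python
import matplotlib
matplotlib.use("agg")  # headless backend, assumed for this demo
import matplotlib.pyplot as plt
import pandas as pd

# Stand-in for S1: three daily means indexed by date
s1 = pd.Series([0.2, 0.5, 0.3],
               index=pd.to_datetime(["2018-05-10", "2018-06-10", "2018-07-10"]))

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(s1.index, s1.values)

# Pin the ticks to the three dates that actually exist in the index
ax.set_xticks(s1.index)
ax.set_xticklabels(s1.index.strftime("%Y-%m-%d"))
n_ticks = len(ax.get_xticklabels())
```

With the ticks pinned this way, matplotlib no longer invents intermediate date positions along the x-axis.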
| |
doc_23534641
|
The depth of the tree structure isn't known beforehand so probably recursion would be the solution?
I'm using React but I guess the question isn't really React-specific so generic JS or even pseudo-code would help a lot.
Example data:
[
{
"name": "banana",
"path": "food.healthy.fruit",
// ... may contain other parameters
},
{
"name": "apple",
"path": "food.healthy.fruit"
},
{
"name": "carrot",
"path": "food.healthy.vegetable"
},
{
"name": "bread",
"path": "food"
},
{
"name": "burger"
"path": "food.unhealthy"
},
{
"name": "hotdog"
"path": "food.unhealthy"
},
{
"name": "germany",
"path": "country.europe"
},
{
"name": "china",
"path": "country.asia"
}
]
Desired result:
<ul>
<li>
food
<ul>
<li>bread</li>
<li>healthy
<ul>
<li>
fruit
<ul>
<li>apple</li>
<li>banana</li>
</ul>
</li>
<li>
vegetable
<ul>
<li>carrot</li>
</ul>
</li>
</ul>
</li>
<li>
unhealthy
<ul>
<li>burger</li>
<li>hotdog</li>
</ul>
</li>
</ul>
</li>
<li>
country
<ul>
<li>
europe
<ul>
<li>germany</li>
</ul>
</li>
<li>
asia
<ul>
<li>china</li>
</ul>
</li>
</ul>
</li>
</ul>
Or, as a plain nested list:
* food
  * bread
  * healthy
    * fruit
      * apple
      * banana
    * vegetable
      * carrot
  * unhealthy
    * burger
    * hotdog
* country
  * europe
    * germany
  * asia
    * china
A: Group by path first.
You can do this by iterating through the source data and splitting each item's path on the dot symbol. Then store each item in an object by keys like this:
store[country] = store[country] || {}
store[country][europe] = store[country][europe] || []
store[country][europe].push(germany)
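That grouping step can be sketched as runnable code (the `_items` key for holding leaf names is an illustrative choice, not from the question):

```javascript
// Build a nested object from dot-separated paths; leaf names collect
// under an "_items" array at the node their path points to.
function groupByPath(items) {
  const store = {};
  for (const { name, path } of items) {
    let node = store;
    for (const part of path.split('.')) {
      node[part] = node[part] || {}; // create the branch if missing
      node = node[part];
    }
    (node._items = node._items || []).push(name); // attach the leaf
  }
  return store;
}
```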
Then get all the keys of the object at the root level and recursively render all of the items. Here is some pseudo-code:
function render(store){
  let keys = Object.keys(store)
  let ul = document.createElement('ul')
  for (let i = 0; i < keys.length; i++){
    let key = keys[i]
    let li = document.createElement('li')
    if (typeof store[key] === 'object') {
      // create a branch with a recursive render call and append it to the current level
      li.appendChild(render(store[key]))
    } else {
      // create the HTML presentation for the leaf item under the current key
      li.textContent = store[key]
    }
    ul.appendChild(li)
  }
  return ul
}
A: First of all, you need to restructure data into nested groups. Here is how you can reduce your array to necessary structure:
const tree = data.reduce(function(prev, curr) {
const branches = curr.path.split('.')
let branch = prev
let branchName
while (branches.length) {
branchName = branches.shift()
let rootIndex = branch.length ? branch.findIndex(el => el.name === branchName) : -1
if (rootIndex === -1) {
let newBranch = {
name: branchName,
children: []
}
branch = branch[branch.push(newBranch) - 1].children
} else {
branch = branch[rootIndex].children
}
if (branches.length === 0) {
branch.push({
name: curr.name
})
}
}
return prev
}, [])
It will give you an array similar to this:
[
{
name: 'food',
children: [
{
name: 'bread'
},
{
name: 'healthy',
children: [
{
name: 'fruit',
children: [
{name: 'banana'},
{name: 'apple'}
]
}
]
}
]
},
{
name: 'country',
children: [
// ...
]
}
]
After that, it's easy to create Tree component that would recursively render branches:
const Tree = (props) => (
<ul>
{props.data.map((branch, index) => (
<li key={index}>
{branch.name}
{branch.children && (
<Tree data={branch.children} />
)}
</li>
))}
</ul>
)
Check the demo below.
const data = [{
"name": "banana",
"path": "food.healthy.fruit"
}, {
"name": "apple",
"path": "food.healthy.fruit"
}, {
"name": "carrot",
"path": "food.healthy.vegetable"
}, {
"name": "bread",
"path": "food"
}, {
"name": "burger",
"path": "food.unhealthy"
}, {
"name": "hotdog",
"path": "food.unhealthy"
}, {
"name": "germany",
"path": "country.europe"
}, {
"name": "china",
"path": "country.asia"
}]
const tree = data.reduce(function(prev, curr) {
const branches = curr.path.split('.')
let branch = prev
let branchName
while (branches.length) {
branchName = branches.shift()
let rootIndex = branch.length ? branch.findIndex(el => el.name === branchName) : -1
if (rootIndex === -1) {
let newBranch = {
name: branchName,
children: []
}
branch = branch[branch.push(newBranch) - 1].children
} else {
branch = branch[rootIndex].children
}
if (branches.length === 0) {
branch.push({
name: curr.name
})
}
}
return prev
}, [])
const Tree = (props) => (
<ul>
{props.data.map((branch, index) => (
<li key={index}>
{branch.name}
{branch.children && (
<Tree data={branch.children} />
)}
</li>
))}
</ul>
)
ReactDOM.render(
<Tree data={tree} />,
document.getElementById('demo')
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
<div id="demo"></div>
| |
doc_23534642
|
<tr>
<td>Track</td>
<td><?php echo $trackResult. '´<br/>';
if($trackResultOne != NULL){ echo $trackResultOne."´<br/>";}
if($trackResultTwo != NULL){ echo $trackResultTwo."´";}
?></td>
</tr>
A: Unfortunately, you will need to use the ELSE part of the IF to output a blank (''); IE automatically converts a real null into the string 'null'.
Regards,
Pablo
| |
doc_23534643
|
folder
└───subfolder
└───subsubfolder
I have a Main.java in folder, and Main.java uses a class inside subsubfolder.
Here is how I did:
import subfolder.*;
import subfolder.subsubfolder.*;
However, I got the message following when I execute javac Main.java
$ javac -g Main.java
Main.java:23: error: cannot access Node
Node root = new Node();
^
bad class file: ./subfolder/subsubfolder/Node.class
class file contains wrong class: subsubfolder.Node
Please remove or make sure it appears in the correct subdirectory of the classpath.
1 error
Is my way of importing class file wrong?
A:
Your Node.java currently says package subsubfolder;
The package declaration of Node should say
package subfolder.subsubfolder;
Providing an example for clarity:
folder/
Your source root (typically called 'src')
folder/Main.java
class Main { ... } (no package declaration)
folder/subfolder
folder/subfolder/subsubfolder/Node.java
package subfolder.subsubfolder;
public class Node { ... }
If your Main indeed lives in a package (i.e. if your situation is something like src/folder/Main.java) then you should not do
cd src/folder
javac Main.java
you should do
cd src
javac folder/Main.java
A: Your Node class declares that it belongs to package subsubfolder, but it should belong to package subfolder.subsubfolder. Alternatively, you could move directory subfolder/subsubfolder to be a sibling of directory subfolder.
| |
doc_23534644
|
for eg:
create view test as select * from test where owner = current_user -- like this
Can I do it like this?
A: The owner of the View will be the currently logged in user by default, you only have to modify it if you want the owner to be someone other than the currently logged in user.
To modify the owner you can use
ALTER VIEW [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_USER | SESSION_USER } from https://www.postgresql.org/docs/current/static/sql-alterview.html
If your intention is to apply row-level security, then you should apply it to the underlying table, not the view; the policy will then appropriately restrict the rows returned through the view.
CREATE POLICY name ON table_name
[ FOR { ALL | SELECT | INSERT | UPDATE | DELETE } ]
[ TO { role_name | PUBLIC | CURRENT_USER | SESSION_USER } [, ...] ]
[ USING ( using_expression ) ]
[ WITH CHECK ( check_expression ) ]
https://www.postgresql.org/docs/current/static/sql-createpolicy.html
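For example, attaching such a policy to the underlying table might look like this (the table and column names are illustrative, and note that row-level security must also be enabled on the table):

```sql
-- Hypothetical table; the policy belongs on the table, not the view.
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

-- Only rows owned by the querying role are visible through SELECTs.
CREATE POLICY accounts_owner_only ON accounts
    FOR SELECT
    USING (owner_name = current_user);
```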
| |
doc_23534645
|
Given the following method:
public TOut WithThing<TOut>(Func<T, TOut> func)
{
var thing = CreateThing();
thing.DoSomething();
return func(thing);
}
I have an equivalent void method which wraps the above for when I don't want to return a value (it's calling WithThing<bool> really and just discarding the result):
public void WithThing(Action<T> action)
{
WithThing(x =>
{
action(x);
return false;
});
}
I'm trying to create asynchronous versions of the above methods because "Thing" has synchronous and asynchronous methods.
I think the equivalent of the first method is:
public async Task<TOut> WithThing<TOut>(Func<T, Task<TOut>> func)
{
var thing = CreateThing();
await thing.DoSomethingAsync().ConfigureAwait(false);
return await func(thing).ConfigureAwait(false);
}
I didn't think I'd need the ConfigureAwait calls, but Visual Studio is suggesting them. Is this correct? Why do I need them?
If the above code is correct, then I'd guess the signature for the equivalent that doesn't return a value would be public async Task WithThing(Func<T, Task> func). What would the body be?
A: Beware that using ConfigureAwait(false) is contextual and you must be aware of when not to use it. Refer to the ConfigureAwait FAQ and Async/Await - Best Practices in Asynchronous Programming - Configure Context.
Your guess on the return types is correct.
I would avoid wrapping too much as it adds allocations and delegate invocations.
For the sync part I would do something like this:
public TOut WithThing<TOut>(Func<T, TOut> func)
{
    var thing = Something();
    return func(thing);
}
public void WithThing(Action<T> action)
{
    var thing = Something();
    action(thing);
}
// Returns the prepared thing so both overloads can use it
private T Something()
{
    var thing = CreateThing();
    thing.DoSomething();
    return thing;
}
and for the async:
public async Task<TOut> WithThing<TOut>(Func<T, Task<TOut>> func)
{
    var thing = await SomethingAsync().ConfigureAwait(false);
    return await func(thing).ConfigureAwait(false);
}
public async Task WithThing(Func<T, Task> func)
{
    var thing = await SomethingAsync().ConfigureAwait(false);
    await func(thing).ConfigureAwait(false);
}
// Must be async Task<T> (not void) because it awaits and returns the thing
private async Task<T> SomethingAsync()
{
    var thing = CreateThing();
    await thing.DoSomethingAsync().ConfigureAwait(false);
    return thing;
}
| |
doc_23534646
|
How can I write such an application in C++, Java, or any other programming language?
A: You can create a netfilter kernel module (in C language), and hook yourself for various packet events such as receiving a packet on a particular interface etc. You will need to check the packet header to figure out whether it is a TCP SYN request, and then decide what to do with it.
https://www.netfilter.org/
You cannot create a user mode C++ or Java program to achieve this.
That being the answer for what you are asking, perhaps a better alternative would be to add rules to the firewall depending on what invalid requests you want to disable.
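As a sketch of that firewall-rule alternative, a rule like the following could drop unwanted TCP connection attempts (the interface and port are placeholder values, and the command requires root):

```
# Drop incoming TCP SYN packets to port 8080 on eth0 (illustrative values).
iptables -A INPUT -i eth0 -p tcp --syn --dport 8080 -j DROP
```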
| |
doc_23534647
|
My Previous Code:
foreach (string file in newZips) {
FileInfo fileInfo = new FileInfo(file);
string dirName = newPath + "\\" + fileInfo.Name.Substring(0, fileInfo.Name.Length - 4);
Console.WriteLine(dirName);
Directory.CreateDirectory(dirName);
ZipFile.ExtractToDirectory(allZipsPath + "\\" + fileInfo.Name, dirName);
}
A: Maybe this helps you:
string path = @"C:\..\..\myFolder";
if(!Directory.Exists(path))
{
Directory.CreateDirectory(path);
}
That's how you can check whether a path contains the folder you expect, and create the folder if it doesn't.
--- EDIT (if unknown zip-Name) ---
string myPathToZip = @"C:\..\..\folderName";
foreach (string file in Directory.GetFiles(myPathToZip, "*.zip", SearchOption.AllDirectories))
{
//the current path of the zipFile (with the Name included)
var path = new FileInfo(file.ToString());
//The filename
var filename = Path.GetFileName(file.ToString()).Replace(".zip", "");
}
| |
doc_23534648
|
Aim: I am trying to build a Bus Android App that allows users to query about when the next bus will be arriving.
This can be achieved by making API calls to a local public transport API (the link is below). Of the APIs available, I am interested in Bus Arrival, Bus Routes, and Bus Stops.
* Bus Arrival API allows me to get the timestamp of the next bus, and has 2 query parameters: the Bus Stop Code and the Bus Service Number we are interested in. Using this, I implemented the feature that allows users to check all the incoming buses at a bus stop if they provide the Bus Stop Code.
* Bus Stops API allows me to get the details of the bus stops, such as the name and road of each stop. However, it only has 1 query parameter ($skip), which means the API call returns all the bus stops found in the country. Each call is also limited to 500 records, so I have to use $skip to get to the next 500 records.
* Bus Services API allows me to get the route of a particular bus, that is, all the bus stops a bus service will travel to. Similar to the Bus Stops API, it returns the routes of all the bus services in the country and can only be traversed using $skip.
(https://datamall.lta.gov.sg/content/dam/datamall/datasets/LTA_DataMall_API_User_Guide.pdf)
Feature Implemented: Currently, I wish to implement a feature where Users can search any Bus Service Number, and the app will return the route for that bus. This means that they can click on any of the bus stops in the Route to get the bus timings for all the buses at that bus stop.
To implement this, I used ViewModelScope.launch { } to call the Data Repository, allowing me to call the functions I implemented to make the API Calls. I have nested this code within onKeyboardSearch so that the 2 functions to get the Bus Timings and Bus Stop Details are called when the User searches for the Bus Stop.
Issue: However, I realised that I have to click the "Search" button twice when searching by Service Number, because (I think) the UI is not updated with the data from the API call the first time.
Also, I am currently using a nested for loop (so n^2 time complexity) to filter the Bus Stops and Bus Services APIs to get the data I want. I have attached my code below and I am terribly sorry if my explanation is bad, I will try to provide more information if there are any doubts. Thank you for reading my post.
AppViewModel.kt
package com.example.busexpress.ui.screens
import android.util.Log
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.setValue
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProvider
import androidx.lifecycle.ViewModelProvider.AndroidViewModelFactory.Companion.APPLICATION_KEY
import androidx.lifecycle.viewModelScope
import androidx.lifecycle.viewmodel.initializer
import androidx.lifecycle.viewmodel.viewModelFactory
import com.example.busexpress.BusExpressApplication
import com.example.busexpress.data.SingaporeBusRepository
import com.example.busexpress.determineBusServiceorStop
import com.example.busexpress.network.BusRoutes
import com.example.busexpress.network.BusStopInRoute
import com.example.busexpress.network.BusStopValue
import com.example.busexpress.network.SingaporeBus
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch
import retrofit2.HttpException
import java.io.IOException
/**
 * [AppViewModel] holds the UI state for the bus arrival, bus stop, and bus route
 * queries, and exposes the functions that fetch them from the repository.
 */
class AppViewModel(private val singaporeBusRepository: SingaporeBusRepository): ViewModel() {
/**
* StateFlows to store the Data of API Calls
*/
private val _busServiceUiState = MutableStateFlow(SingaporeBus())
val busServiceUiState: StateFlow<SingaporeBus> = _busServiceUiState.asStateFlow()
private val _busStopNameUiState = MutableStateFlow(BusStopValue())
val busStopNameUiState: StateFlow<BusStopValue> = _busStopNameUiState.asStateFlow()
private val _busRouteUiState = MutableStateFlow(BusRoutes())
val busRouteUiState: StateFlow<BusRoutes> = _busRouteUiState.asStateFlow()
/** The mutable State that stores the status of the most recent request */
var busUiState: BusUiState by mutableStateOf(BusUiState.Loading) // Loading as Default Value
// Setter is private to protect writes to the busUiState
private set
var busNameUiState: BusStopNameUiState by mutableStateOf(BusStopNameUiState.Loading)
private set
var busRoutingUiState: BusRouteUiState by mutableStateOf(BusRouteUiState.Loading)
private set
// Boolean to determine if we should show Bus Stops or Routes
var busServiceBoolUiState: Boolean by mutableStateOf(false)
private set
/**
* Call init so we can display status immediately.
*/
init {
getBusTimings(null)
getBusStopNames(0)
}
fun getBusRoutes(targetBusService: String?) {
busServiceBoolUiState = true
viewModelScope.launch {
busRoutingUiState = BusRouteUiState.Loading
busRoutingUiState = try {
// Goal: To get the Route, i.e. all the Bus Stop Codes for a SINGLE Bus Service
var skipIndex = 0
var targetServiceRoute = mutableListOf<BusStopInRoute>()
var targetRouteFound = false
do {
// Variables to hold Conditions
var iteratorIndex = 0
var completedRoute = false
// API Call
val listResult = singaporeBusRepository.getBusRoutes(skipIndex)
val busStopRoute = listResult.busRouteArray
// 500 since according to LTA Record each API Call is confined to 500 Records
for (i in 1..500) {
// Log.d("debugTag", "This is output ${busStopRoute[iteratorIndex].serviceNo}")
// Must compare string since Bus Services like 901M exist
if (busStopRoute[iteratorIndex].serviceNo == targetBusService) {
// Found a Bus Stop of Route, append to Result Array
targetServiceRoute.add(busStopRoute[i])
if (busStopRoute[iteratorIndex].stopSequence != 1) {
targetRouteFound = true
}
}
// Condition to Check if Route is Complete
else if (targetRouteFound && (busStopRoute[iteratorIndex].stopSequence == 1)) {
// Passed the Target Route alr
completedRoute = true
break
}
iteratorIndex += 1
}
// Stopping Condition
if (completedRoute) {
_busRouteUiState.value = BusRoutes(
metaData = listResult.metaData,
busRouteArray = targetServiceRoute
)
// For Loop to get all the Arrival Timings for all Bus Stops in the Route
break
}
else {
skipIndex += 500
}
} while(true)
BusRouteUiState.Success(targetServiceRoute)
}
catch (e: IOException) {
BusRouteUiState.Error
}
catch (e: HttpException) {
BusRouteUiState.Error
}
}
}
// TODO Make it async and await()
fun getBusStopNames(targetBusStopCode: Int) {
busServiceBoolUiState = false
viewModelScope.launch {
busNameUiState = BusStopNameUiState.Loading
busNameUiState = try {
var targetBusStop = BusStopValue()
var targetBusStopFound = false
var skipIndex = 0
// Retrieve the Desired Bus Stop Object
do {
val listResult = singaporeBusRepository.getBusDetails(numRecordsToSkip = skipIndex)
val busStopDetails = listResult.value
val forLoopSize = busStopDetails.size
var indexBSD = 0
// Loop through the 500 Records of this Call to see if the Bus Stop we want is inside
for(i in 1..forLoopSize) {
if (busStopDetails[indexBSD].busStopCode.toInt() == targetBusStopCode) {
targetBusStopFound = true
break
}
// Update to check every Record of API Call
indexBSD += 1
}
// Check if the Bus Stop we want is in this API Call
if (targetBusStopFound) {
targetBusStop = busStopDetails[indexBSD]
}
else {
// Call the Next 500/ or whatever size pulled Records
skipIndex += forLoopSize
}
} while(!targetBusStopFound)
// After finding the Correct Bus Stop
if (targetBusStop.busStopCode != "Bus Stop Not Found") {
_busStopNameUiState.value = BusStopValue(
busStopCode = targetBusStop.busStopCode,
busStopRoadName = targetBusStop.busStopRoadName,
busStopDescription = targetBusStop.busStopDescription,
latitude = targetBusStop.latitude,
longitude = targetBusStop.longitude
)
}
BusStopNameUiState.Success(targetBusStop.busStopRoadName)
}
catch (e: IOException) {
BusStopNameUiState.Error
}
catch (e: HttpException) {
BusStopNameUiState.Error
}
}
}
fun getBusTimings(userInput: String?) {
// Determine if UserInput is a BusStopCode
val userInputResult = determineBusServiceorStop(userInput = userInput)
val busStopCode = userInputResult.busStopCode
val busServiceNumber = userInputResult.busServiceNo
//var listResult: SingaporeBus = SingaporeBus(metaData = "Initialised", busStopCode = "Initialised")
// Launch the Coroutine using a ViewModelScope
viewModelScope.launch {
busUiState = BusUiState.Loading
// Might have Connectivity Issues
busUiState = try {
// Within this Scope, use the Repository, not the Object to access the Data, abstracting the data within the Data Layer
val listResult = singaporeBusRepository.getBusTimings(
busServiceNumber = busServiceNumber,
busStopCode = busStopCode
)
_busServiceUiState.value = SingaporeBus(
metaData = listResult.metaData,
busStopCode = listResult.busStopCode,
services = listResult.services
)
// Assign results from backend server to busUiState {A mutable state object that represents the status of the most recent web request}
BusUiState.Success(busTimings = listResult)
}
catch (e: IOException) {
BusUiState.Error
}
catch (e: HttpException) {
BusUiState.Error
}
}
}
// Factory Object to retrieve the singaporeBusRepository and pass it to the ViewModel
companion object {
val Factory: ViewModelProvider.Factory = viewModelFactory {
initializer {
val application = (this[APPLICATION_KEY] as BusExpressApplication)
val singaporeBusRepository = application.container.singaporeBusRepository
AppViewModel(singaporeBusRepository = singaporeBusRepository)
}
}
}
}
// Simply saving the UiState as a Mutable State prevents us from saving the different status
// like Loading, Error, and Success
sealed interface BusUiState {
data class Success(val busTimings: SingaporeBus) : BusUiState
// The 2 States below need not set new data and create new objects, which is why an object is sufficient for the web response
object Error: BusUiState
object Loading: BusUiState
// Sealed Interface used instead of Interface to remove Else Branch
}
sealed interface BusStopNameUiState {
data class Success(val busStopName: String): BusStopNameUiState
object Error: BusStopNameUiState
object Loading: BusStopNameUiState
}
sealed interface BusRouteUiState {
data class Success(val busRoutes: MutableList<BusStopInRoute>): BusRouteUiState
object Error: BusRouteUiState
object Loading: BusRouteUiState
}
CommonComposable.kt
https://pastebin.com/hnXhjDFE
Default Screen.kt
https://pastebin.com/9VzUgChM
| |
doc_23534649
|
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <stdlib.h>
int main(void) {
int i=0;
char* string[100];
char line[100];
FILE *file;
file = fopen("plates.txt", "r");
while(fgets(line, sizeof line, file)!=NULL) {
printf("%s",line);
string[i]=line;
i++;
}
fclose(file);
return 0;
}
but I now want to select a random line from my array and print it. All lines need to have an equal chance of being selected, but they can only be selected once. I'm not too sure how to do this...
Thank you in advance
A: Please be mindful of the line string[i]=line: it makes every entry you set in string point to the same buffer, so they all end up holding the last line read, which is not what you want and is important to understand.
That said, here's a solution to the problem that assumes we can just store all the lines in memory and on the stack:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#define MAX_LINE_LENGTH 128
#define MAX_LINE_COUNT 1000
int main(int argc, char **argv) {
char lines[MAX_LINE_COUNT][MAX_LINE_LENGTH];
int numLines = 0;
if (argc < 2) {
fprintf(stderr, "missing file name\n");
return EXIT_FAILURE;
}
FILE *fp = fopen(argv[1], "r");
if (fp != NULL) {
while (fgets(lines[numLines++], MAX_LINE_LENGTH, fp)) {
printf("%03d> %s", numLines, lines[numLines-1]);
}
numLines--; /* the final, failed fgets still incremented numLines */
fclose(fp);
srand (time(NULL));
int randomIndex = rand() % numLines;
printf("Selected random line #%d> %s", randomIndex+1, lines[randomIndex]);
} else {
fprintf(stderr, "file '%s' not found\n", argv[1]);
return EXIT_FAILURE;
}
}
And the corresponding output:
➜ ~ gcc random-line.c && ./a.out random-line.c
001> #include <stdio.h>
002> #include <stdlib.h>
003> #include <string.h>
004> #include <time.h>
005>
006> #define MAX_LINE_LENGTH 128
007> #define MAX_LINE_COUNT 1000
008>
009> int main(int argc, char **argv) {
010> char lines[MAX_LINE_COUNT][MAX_LINE_LENGTH];
011> int numLines = 0;
012>
013> if (argc < 2) {
014> fprintf(stderr, "missing file name\n");
015> return EXIT_FAILURE;
016> }
017>
018> FILE *fp = fopen(argv[1], "r");
019> if (fp != NULL) {
020> while (fgets(lines[numLines++], MAX_LINE_LENGTH, fp)) {
021> printf("%03d> %s", numLines, lines[numLines-1]);
022> }
023> numLines--; /* the final, failed fgets still incremented numLines */
024> fclose(fp);
025> srand (time(NULL));
026> int randomIndex = rand() % numLines;
027> printf("Selected random line #%d> %s", randomIndex+1, lines[randomIndex]);
028> } else {
029> fprintf(stderr, "file '%s' not found\n", argv[1]);
030> return EXIT_FAILURE;
031> }
032> }
Selected random line #2> #include <stdlib.h>
| |
doc_23534650
|
u = User.new
u.name = "Ralph"
u.valid? # => true
u.validated? # => false
I want to prevent making too many geocoding queries.
A: If you have before_validation :geocode callback you can improve your geocode method to cache heavy code results this way:
def geocode
@geocode_results ||= {}
# suppose geocoding depends on `lat_lon` attribute
@geocode_results[lat_lon] ||= begin
# Your heavy code here
end
end
Caching the results per key in a hash lets the geocoding be redone when lat_lon changes.
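The per-key caching idea in isolation (the class and method names here are illustrative, not from the question):

```ruby
# Memoizes an expensive computation per input key: the heavy work runs
# once per distinct lat_lon and the cached result is reused afterwards.
class Geocoder
  attr_reader :calls

  def initialize
    @calls = 0
    @results = {}
  end

  def geocode(lat_lon)
    @results[lat_lon] ||= begin
      @calls += 1                  # stands in for the heavy geocoding call
      "result for #{lat_lon}"
    end
  end
end
```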
| |
doc_23534651
|
The call I am using to attempt to create the group is
az rest --method post \
--uri 'https://graph.microsoft.com/v1.0/groups' \
--body '{"description": "A description", "displayName": "MyAppGroup", "mailEnabled": false, "mailNickname": "test", "securityEnabled": true, "owners@odata.bind": ["https://graph.microsoft.com/v1.0/users/oooooooo-oooo-oooo-oooo-oooooooooooo"]}' \
--headers "Content-Type=application/json"
To graph permissions, I have bound the API permission Group.Create to my service principal. To understand the permissions I am required to grant, I am following this page:
https://learn.microsoft.com/en-us/graph/api/group-post-groups?view=graph-rest-1.0&tabs=http#permissions
With the Group.Create permissions, when I run the rest call to the Graph API above, I get the following permission error
Forbidden({
"error": {
"code": "Authorization_RequestDenied",
"message": "Insufficient privileges to complete the operation.",
"innerError": {
"date": "2020-11-02T13:31:35",
"request-id": "...",
"client-request-id": "..."
}
}
})
I completely understand that if I were to add the Directory.ReadWrite.All permission, I could create the group and would have all required permissions. However, this permission is overscoped and would allow my service principal to disable users in the Active Directory tenant - something my organisation will not allow. Therefore I cannot grant my service principal this permission.
The documentation I have linked above implies to me that Group.Create is a sufficient permission to enable a service principal to create a group.
My question is what I am doing wrong, or what permissions am I missing to be able to create a group? Directory.ReadWrite.All is clearly overscoped to simply create an AD security group and so using it is not an option for me.
A: Hopefully this helps someone else - I realised the answer immediately after posting this.
I had added the property
"owners@odata.bind": ["https://graph.microsoft.com/v1.0/users/oooooooo-oooo-oooo-oooo-oooooooooooo"]
to the json post data.
Removing this property allowed me to create the group with just the Group.Create permission.
Adding the permission User.Read.All allows the service principal to read the user data for the owner, and so is sufficient to create the group with any necessary owners.
After adding this API permission, my service principal was able to create the group (with owners) as expected.
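For reference, the shape of the call that succeeds with just Group.Create (the original command with the owners@odata.bind property removed) would be something like:

```shell
az rest --method post \
  --uri 'https://graph.microsoft.com/v1.0/groups' \
  --body '{"description": "A description", "displayName": "MyAppGroup", "mailEnabled": false, "mailNickname": "test", "securityEnabled": true}' \
  --headers "Content-Type=application/json"
```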
| |
doc_23534652
|
This does not seem to be the case for MATLAB 2017a. How can this feature be enabled?
A: If you have the image processing toolbox, use impixelinfo. Make sure the figure is open first, then type this command into the MATLAB command prompt. You can then hover your mouse over the image and you can see the intensities on the bottom left corner of the figure.
Here's an example of it in action [1]:
Do note that the coordinates are reversed where X is the column coordinate while Y is the row coordinate.
[1] Source: http://www.johnloomis.org/ece564/notes/basics/aoi/pixval1.jpg
A: Use imtool (if you have the Image Processing Toolbox) to get detailed information about pixel values.
| |
doc_23534653
|
Can anyone please share a sample application for the OneLogin SAML logout functionality with redirection?
I have already refer onelogin site.
https://developers.onelogin.com/saml/examples/logout-response
but I am still not getting a response.
A: The Onelogin dot-net SAML toolkit is a proof of concept as described in its repository.
You should use another SAML toolkit (alternatives listed on the repo).
By the way, check the SingleLogout class used in that view, which executes a single logout request (extracted from the ITfoxtec SAML toolkit).
| |
doc_23534654
|
https://github.com/Avalanche-io/c4/tree/v0.7.0
Now, as suggested in this answer from Stack Overflow: Not able to install cmd version of c4 from github,
I execute the following commands in my ubuntu terminal
go get github.com/Avalanche-io
go get github.com/Avalanche-io/c4/id
go get github.com/Avalanche-io/c4/cmd/c4
then, as they have shown in the example of how to use this repo:
package main

import (
    "fmt"
    "io"
    "os"

    c4 "github.com/avalanche-io/c4/id"
)

func main() {
    file := "main.go"
    f, err := os.Open(file)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // create an ID encoder.
    e := c4.NewEncoder()
    // the encoder is an io.Writer
    _, err = io.Copy(e, f)
    if err != nil {
        panic(err)
    }
    // ID will return a *c4.ID.
    // Be sure to be done writing bytes before calling ID()
    id := e.ID()
    // use the *c4.ID String method to get the c4id string
    fmt.Printf("C4id of \"%s\": %s\n", file, id)
    return
}

I just copied this same example into a main.go file. When I run the command they define in their README.md (https://github.com/Avalanche-io/c4/blob/v0.7.0/id/README.md), go run main.go, instead of getting the c4 id of the file as shown in their example,
I am getting the following error
main.go:8:3: cannot find package "github.com/avalanche-io/c4/id" in any of:
    /usr/lib/go-1.13/src/github.com/avalanche-io/c4/id (from $GOROOT)
    /home/vinay/go/src/github.com/avalanche-io/c4/id (from $GOPATH)

I don't know the Go language, so it is becoming very difficult for me to solve the problem here. Is there any Go developer who can help me out?

A: The main.go file is not able to find the package github.com/avalanche-io/c4/id in /home/vinay/go/src/github.com/avalanche-io/c4/id. As I can see, you have run the following go get commands
go get github.com/Avalanche-io
go get github.com/Avalanche-io/c4/id
go get github.com/Avalanche-io/c4/cmd/c4
but none of them fetched the import path github.com/avalanche-io/c4/id (note the lowercase avalanche-io),
so according to me, you need to execute the following command
go get github.com/avalanche-io/c4/id
Now just run your main.go
go run main.go
| |
doc_23534655
|
I am able to run my PhpUnit test using the default connection, but I want to use a different database connection than the one I use to test the interface.
What I would like to know (if it's possible):
* Is there a way to select a different connection for my models before I run all my tests?
* Can I just add a connection in my local.xml like this:
<phpunit_setup>
<connection>
<host><![CDATA[localhost]]></host>
<username><![CDATA[username]]></username>
<password><![CDATA[password]]></password>
<dbname><![CDATA[dbname]]></dbname>
<active>1</active>
</connection>
</phpunit_setup>
if yes, how do I access it.
thanks.
A: Maybe there is another solution, but I found out that we can change the "etc_dir" when we launch the application.
* I copied "app/etc/local.xml" and "app/etc/config.xml" to a newly created folder "tests/etc/".
* I changed the database configuration to what I needed.
* I made a symbolic link in "tests/etc/" pointing to "app/etc/modules" (a copy is not recommended).
* Finally, I passed the default parameters and the "etc_dir" to the "Mage::app()" method in a file "tests/helper.php" that is executed to set up my tests (include path, white list for code coverage).
It looked like this.
Before
"tests/helper.php"
...
// Start Magento application
Mage::app();
...
After
"tests/helper.php"
...
// Start Magento application
Mage::app('default', 'store', '/path/to/test/etc');
...
My app folder
My test folder
Hope this could help someone.
A: You can just create your own local.xml, for example:
<?xml version="1.0"?>
<config>
<global>
<resources>
<default_setup>
<connection>
<host><![CDATA[localhost]]></host>
<username><![CDATA[root]]></username>
<password></password>
<dbname><![CDATA[magento_test]]></dbname>
<active>1</active>
</connection>
</default_setup>
</resources>
</global>
</config>
And apply it in your testCase setUp method with:
$test_config = new Mage_Core_Model_Config('test/local.xml');
Mage::getConfig()->extend($test_config);
| |
doc_23534656
|
class A, B, C{
Some common features
}
and be able to refer to A, B and C.
I tried to do it like 'class' name+=ID (',' name+=ID)*, but this won't work since no objects are apparently created. I assume I have to use actions, but I don't understand how. Can anyone help me?
| |
doc_23534657
|
//table featureInfo
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- Table structure for featureinfo
-- ----------------------------
DROP TABLE IF EXISTS `featureinfo`;
CREATE TABLE `featureinfo` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`globalId` int(11) NOT NULL DEFAULT '0',
`name` varchar(255) NOT NULL DEFAULT '',
PRIMARY KEY (`Id`),
KEY `globalId` (`globalId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
//another table featurefix
SET FOREIGN_KEY_CHECKS=0;
-- ----------------------------
-- Table structure for featurefix
-- ----------------------------
DROP TABLE IF EXISTS `featurefix`;
CREATE TABLE `featurefix` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`globalId` int(11) NOT NULL DEFAULT '0',
`modifyname` varchar(255) NOT NULL DEFAULT '',
PRIMARY KEY (`Id`),
KEY `FK-Guid` (`globalId`),
CONSTRAINT `FK-Guid` FOREIGN KEY (`globalId`) REFERENCES `featureinfo` (`globalId`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Then I use hbm2java to create the entity class:
public class Featureinfo implements java.io.Serializable {
private Integer id;
private int globalId;
private String name;
private Set featurefixes = new HashSet(0);
public Featureinfo() {}
}
Now I wonder: why is there a Set attribute in Featureinfo?
And the Featureinfo.hbm.xml:
<hibernate-mapping>
<class name="com.pojo.Featureinfo" table="featureinfo" catalog="hibernateset">
<id name="id" type="java.lang.Integer">
<column name="Id" />
<generator class="identity" />
</id>
<property name="globalId" type="int">
<column name="globalId" not-null="true" />
</property>
<property name="name" type="string">
<column name="name" not-null="true" />
</property>
<set name="featurefixes" inverse="true">
<key>
<column name="globalId" not-null="true" />
</key>
<one-to-many class="com.pojo.Featurefix" />
</set>
</class>
</hibernate-mapping>
The set element is defined; why not use "joined-subclass" instead?
Also, what is the difference between "set/map/list/idbag" and "one-to-many/many-to-one" in the mapping XML file?
A: hbm2java is interpreting the foreign key from Featurefix onto Featureinfo as a Set. This is the natural interpretation of it. On a plain reading of the database there is a one-to-many relationship between Featureinfo and Featurefix, and so hbm2java renders the class like that.
hbm2java uses a set because that is the simplest mapping solution and thus the default.
hbm2java cannot tell the difference between a joined-subclass structure and a plain one-to-many structure at the database level because they are rendered the same, and thus it goes with what would be typical: one-to-many. A joined-subclass is not used as much as a one-to-many.
| |
doc_23534658
|
A: Scopes are for cross-references. You cannot cross-reference a string, so could you elaborate on what you really want to do?
| |
doc_23534659
|
$hookUrl = 'https://discord.com/api/webhooks/977171177120358412/U8T5Nv5BDCOL70IeXyPjA8Vlxo-4BB2b6QXhG0nAwD_gsw2AHXCSbbcjmiFvLlFFZD6I'
$content = @"
You can enter your message content here.
With a here-string, new lines are included as well!
Enjoy.
"@
$payload = [PSCustomObject]@{
content = $content
}
Invoke-RestMethod -Uri $hookUrl -Method Post -Body ($payload | ConvertTo-Json)
Invoke-RestMethod : {"message": "Cannot send an empty message", "code": 50006}
At line:17 char:1
+ Invoke-RestMethod -Uri $hookUrl -Method Post -Body ($payload | Conver ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-RestMethod], WebExc
eption
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand
I have no idea where I made a mistake, I would be glad if you could help me.
A: Invoke-restmethod -Uri $hookUrl -Method Post -Body ($payload | ConvertTo-Json) -Headers @{ "Content-Type" = "application/json" }
This seems to work: without an explicit Content-Type: application/json header, the request body is not recognized as JSON, so Discord reports an empty message.
| |
doc_23534660
|
Example 1:
Abbott KW, Snidal D (2009) The Governance Triangle: Regulatory Standards Institutions and the Shadow of the State. In: Mattli W , Woods N (eds) The Politics of Global Regulation, pp. 44–88. Princeton University Press, Princeton, NJ
Example 2:
Moschella M , Tsingou E (eds) (2013) Great Expectations, Slow Transformations: Incremental Change in Financial Governance. ECPR Press, Colchester
I need to split them into 7 columns with this data:
* first author
* second author
* third to N author
* publication year
* title of source article
* published in (not always included, but always starts with In:)
* More info - means everything after the published in / after the title of source article (in case it was not part of a larger publication)
I tried using Excel's Text to Columns tool, but because the data is so varied I couldn't do it efficiently.
Does anyone know a solution to this?
A: See How to split Bibiliography MLA string into BibTex using c#? where I linked to several dedicated tools for extracting bibliographic information from formatted text.
A: Try this VBA macro. It uses regular expressions to parse out the different segments; but if the data is not formatted the way you have presented it, it will fail, so if there are failures you'll need to see how the data mismatches either my assumptions or your presentation of it.
The macro assumes the data starts in A1 and is in column A, with no label in row 1. The results are written into column B and subsequent columns, with a label row in row 1; but these could be placed anywhere.
This code goes into a regular module.
Option Explicit
Sub ParseBiblio()
Dim vData As Variant
Dim vBiblios() As Variant
Dim rRes As Range
Dim re As Object, mc As Object
Dim I As Long
'Assume Data is in column A.
'Might need to start at row 2 if there is a label row
vData = Range("A1", Cells(Rows.Count, "A").End(xlUp))
'Results to start in Column B with labels in row 1
Set rRes = Range("b1")
Set re = CreateObject("vbscript.regexp")
With re
.MultiLine = True
.Global = True
.ignorecase = True
.Pattern = "(^[^,]+),?\s*([^,]+?)(?:,\s*([^(]+))?\s*\((\d{4})\)\s*(.*?\.)\s*(?:In:\s*(.*)\.)?\s*(.*)"
End With
'Results array and labels
ReDim vBiblios(1 To UBound(vData) + 1, 1 To 7)
vBiblios(1, 1) = "First Author"
vBiblios(1, 2) = "Second Author"
vBiblios(1, 3) = "Other Authors"
vBiblios(1, 4) = "Publication Year"
vBiblios(1, 5) = "Title"
vBiblios(1, 6) = "Published In"
vBiblios(1, 7) = "More Info"
For I = 1 To UBound(vData)
Set mc = re.Execute(vData(I, 1))
If mc.Count > 0 Then
With mc(0)
vBiblios(I + 1, 1) = .submatches(0)
vBiblios(I + 1, 2) = .submatches(1)
vBiblios(I + 1, 3) = .submatches(2)
vBiblios(I + 1, 4) = .submatches(3)
vBiblios(I + 1, 5) = .submatches(4)
vBiblios(I + 1, 6) = .submatches(5)
vBiblios(I + 1, 7) = .submatches(6)
End With
End If
Next I
Set rRes = rRes.Resize(rowsize:=UBound(vBiblios, 1), columnsize:=UBound(vBiblios, 2))
rRes.EntireColumn.Clear
rRes = vBiblios
With rRes
With .Rows(1)
.Font.Bold = True
.HorizontalAlignment = xlCenter
End With
.EntireColumn.AutoFit
End With
End Sub
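Since VBScript's regex dialect is close to Python's, the pattern above can be sanity-checked outside Excel. A quick Python sketch runs the same pattern against Example 1; the seven capture groups correspond to the macro's submatches(0) through submatches(6):

```python
import re

# Same pattern as the VBA macro, unchanged.
pattern = re.compile(
    r"(^[^,]+),?\s*([^,]+?)(?:,\s*([^(]+))?\s*\((\d{4})\)\s*(.*?\.)\s*(?:In:\s*(.*)\.)?\s*(.*)"
)

example1 = ("Abbott KW, Snidal D (2009) The Governance Triangle: Regulatory Standards "
            "Institutions and the Shadow of the State. In: Mattli W , Woods N (eds) "
            "The Politics of Global Regulation, pp. 44-88. Princeton University Press, "
            "Princeton, NJ")

m = pattern.match(example1)
# Group 1: first author, group 2: second author, group 4: year, group 5: title.
print(m.group(1), "|", m.group(2), "|", m.group(4))
```

This does not prove the VBA behaves identically in every edge case, but it is a fast way to iterate on the pattern before pasting it back into the macro.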
| |
doc_23534661
|
This is a part of my data frame:
df <- structure(list(country = structure(c(1L, 1L, 1L, 1L, 1L, 1L), .Label = c("Aruba",
"Angola", "Anguilla", "Albania", "United Arab Emirates", "Argentina",
"Armenia", "Antigua and Barbuda", "Australia", "Austria", "Azerbaijan",
"Burundi", "Belgium", "Benin", "Burkina Faso", "Bangladesh",
"Bulgaria", "Bahrain", "Bahamas", "Bosnia and Herzegovina", "Belarus",
"Belize", "Bermuda", "Bolivia (Plurinational State of)", "Brazil",
"Barbados", "Brunei Darussalam", "Bhutan", "Botswana", "Central African Republic",
"Canada", "Switzerland", "Chile", "China", "Cote d'Ivoire", "Cameroon",
"Congo, Democratic Republic", "Congo", "Colombia", "Comoros",
"Cabo Verde", "Costa Rica", "Curacao", "Cayman Islands", "Cyprus",
"Czech Republic", "Germany", "Djibouti", "Dominica", "Denmark",
"Dominican Republic", "Algeria", "Ecuador", "Egypt", "Spain",
"Estonia", "Ethiopia", "Finland", "Fiji", "France", "Gabon",
"United Kingdom", "Georgia", "Ghana", "Guinea", "Gambia", "Guinea-Bissau",
"Equatorial Guinea", "Greece", "Grenada", "Guatemala", "Guyana",
"China, Hong Kong SAR", "Honduras", "Croatia", "Haiti", "Hungary",
"Indonesia", "India", "Ireland", "Iran (Islamic Republic of)",
"Iraq", "Iceland", "Israel", "Italy", "Jamaica", "Jordan", "Japan",
"Kazakhstan", "Kenya", "Kyrgyzstan", "Cambodia", "Saint Kitts and Nevis",
"Republic of Korea", "Kuwait", "Lao People's DR", "Lebanon",
"Liberia", "Saint Lucia", "Sri Lanka", "Lesotho", "Lithuania",
"Luxembourg", "Latvia", "China, Macao SAR", "Morocco", "Republic of Moldova",
"Madagascar", "Maldives", "Mexico", "North Macedonia", "Mali",
"Malta", "Myanmar", "Montenegro", "Mongolia", "Mozambique", "Mauritania",
"Montserrat", "Mauritius", "Malawi", "Malaysia", "Namibia", "Niger",
"Nigeria", "Nicaragua", "Netherlands", "Norway", "Nepal", "New Zealand",
"Oman", "Pakistan", "Panama", "Peru", "Philippines", "Poland",
"Portugal", "Paraguay", "State of Palestine", "Qatar", "Romania",
"Russian Federation", "Rwanda", "Saudi Arabia", "Sudan", "Senegal",
"Singapore", "Sierra Leone", "El Salvador", "Serbia", "Sao Tome and Principe",
"Suriname", "Slovakia", "Slovenia", "Sweden", "Eswatini", "Sint Maarten (Dutch part)",
"Seychelles", "Syrian Arab Republic", "Turks and Caicos Islands",
"Chad", "Togo", "Thailand", "Tajikistan", "Turkmenistan", "Trinidad and Tobago",
"Tunisia", "Turkey", "Taiwan", "U.R. of Tanzania: Mainland",
"Uganda", "Ukraine", "Uruguay", "United States of America", "Uzbekistan",
"St. Vincent & Grenadines", "Venezuela (Bolivarian Republic of)",
"British Virgin Islands", "Viet Nam", "Yemen", "South Africa",
"Zambia", "Zimbabwe"), class = "factor"), isocode = structure(c(1L,
1L, 1L, 1L, 1L, 1L), .Label = c("ABW", "AGO", "AIA", "ALB", "ARE",
"ARG", "ARM", "ATG", "AUS", "AUT", "AZE", "BDI", "BEL", "BEN",
"BFA", "BGD", "BGR", "BHR", "BHS", "BIH", "BLR", "BLZ", "BMU",
"BOL", "BRA", "BRB", "BRN", "BTN", "BWA", "CAF", "CAN", "CHE",
"CHL", "CHN", "CIV", "CMR", "COD", "COG", "COL", "COM", "CPV",
"CRI", "CUW", "CYM", "CYP", "CZE", "DEU", "DJI", "DMA", "DNK",
"DOM", "DZA", "ECU", "EGY", "ESP", "EST", "ETH", "FIN", "FJI",
"FRA", "GAB", "GBR", "GEO", "GHA", "GIN", "GMB", "GNB", "GNQ",
"GRC", "GRD", "GTM", "GUY", "HKG", "HND", "HRV", "HTI", "HUN",
"IDN", "IND", "IRL", "IRN", "IRQ", "ISL", "ISR", "ITA", "JAM",
"JOR", "JPN", "KAZ", "KEN", "KGZ", "KHM", "KNA", "KOR", "KWT",
"LAO", "LBN", "LBR", "LCA", "LKA", "LSO", "LTU", "LUX", "LVA",
"MAC", "MAR", "MDA", "MDG", "MDV", "MEX", "MKD", "MLI", "MLT",
"MMR", "MNE", "MNG", "MOZ", "MRT", "MSR", "MUS", "MWI", "MYS",
"NAM", "NER", "NGA", "NIC", "NLD", "NOR", "NPL", "NZL", "OMN",
"PAK", "PAN", "PER", "PHL", "POL", "PRT", "PRY", "PSE", "QAT",
"ROU", "RUS", "RWA", "SAU", "SDN", "SEN", "SGP", "SLE", "SLV",
"SRB", "STP", "SUR", "SVK", "SVN", "SWE", "SWZ", "SXM", "SYC",
"SYR", "TCA", "TCD", "TGO", "THA", "TJK", "TKM", "TTO", "TUN",
"TUR", "TWN", "TZA", "UGA", "UKR", "URY", "USA", "UZB", "VCT",
"VEN", "VGB", "VNM", "YEM", "ZAF", "ZMB", "ZWE"), class = "factor"),
year = 1950:1955, currency = structure(c(4L, 4L, 4L, 4L,
4L, 4L), .Label = c("Algerian Dinar", "Argentine Peso", "Armenian Dram",
"Aruban Guilder", "Australian Dollar", "Azerbaijanian Manat",
"Bahamian Dollar", "Bahraini Dinar", "Baht", "Balboa", "Barbados Dollar",
"Belarussian Ruble", "Belize Dollar", "Bermudian Dollar",
"Bolivar Fuerte", "Boliviano", "Brazilian Real", "Brunei Dollar",
"Bulgarian Lev", "Burundi Franc", "CFA Franc BCEAO", "CFA Franc BEAC",
"Cabo Verde Escudo", "Canadian Dollar", "Cayman Islands Dollar",
"Cedi", "Chilean Peso", "Colombian Peso", "Comoro Franc",
"Convertible Marks", "Cordoba Oro", "Costa Rican Colon",
"Croatian Kuna", "Czech Koruna", "Dalasi", "Danish Krone",
"Denar", "Djibouti Franc", "Dobra", "Dominican Peso", "Dong",
"East Caribbean Dollar", "Egyptian Pound", "Ethiopian Birr",
"Euro", "Fiji Dollar", "Forint", "Franc Congolais", "Gourde",
"Guarani", "Guinea Franc", "Guyana Dollar", "Hong Kong Dollar",
"Hryvnia", "Iceland Krona", "Indian Rupee", "Iranian Rial",
"Iraqi Dinar", "Jamaican Dollar", "Jordanian Dinar", "Kenyan Shilling",
"Kip", "Kuwaiti Dinar", "Kwacha", "Kwanza", "Kyat", "Lari",
"Lebanese Pound", "Lek", "Lempira", "Leone", "Lilangeni",
"Loti", "Malagasy Ariary", "Malaysian Ringgit", "Manat",
"Mauritius Rupee", "Metical", "Mexican Peso", "Moldovan Leu",
"Moroccan Dirham", "Naira", "Namibian Dollar", "Nepalese Rupee",
"Netherlands Antillian Guilder", "New Israeli Sheqel", "New Leu",
"New Taiwan Dollar", "New Turkish Lira", "New Zealand Dollar",
"Ngultrum", "Norwegian Krone", "Nuevo Sol", "Ouguiya", "Pakistan Rupee",
"Pataca", "Peso Uruguayo", "Philippine Peso", "Pound Sterling",
"Pula", "Qatari Rial", "Quetzal", "Rand", "Rial Omani", "Riel",
"Rufiyaa", "Rupiah", "Russian Ruble", "Rwanda Franc", "Saudi Riyal",
"Serbian Dinar", "Seychelles Rupee", "Singapore Dollar",
"Som", "Somoni", "Sri Lanka Rupee", "Sudanese Pound", "Surinam Dollar",
"Swedish Krona", "Swiss Franc", "Syrian Pound", "Taka", "Tanzanian Shilling",
"Tenge", "Trinidad and Tobago Dollar", "Tugrik", "Tunisian Dinar",
"UAE Dirham", "US Dollar", "Uganda Shilling", "Uzbekistan Sum",
"Won", "Yemeni Rial", "Yen", "Yuan Renminbi", "Zloty"), class = "factor"),
rgdpe = c(NA_real_, NA_real_, NA_real_, NA_real_, NA_real_,
NA_real_), rgdpo = c(NA_real_, NA_real_, NA_real_, NA_real_,
NA_real_, NA_real_), pop = c(NA_real_, NA_real_, NA_real_,
NA_real_, NA_real_, NA_real_), emp = c(NA_real_, NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_), avh = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), hc = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), ccon = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), cda = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), cgdpe = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), cgdpo = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), cn = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), ck = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), ctfp = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), cwtfp = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rgdpna = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rconna = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rdana = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rnna = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rkna = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rtfpna = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), rwtfpna = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), labsh = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), irr = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), delta = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), xr = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_con = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_da = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_gdpo = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), i_cig = structure(c(NA_integer_,
NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_
), .Label = c("extrapolated", "benchmark", "interpolated",
"ICPPPP-benchmark+interpolated", "ICPPPP-extrapolated"), class = "factor"),
i_xm = structure(c(NA_integer_, NA_integer_, NA_integer_,
NA_integer_, NA_integer_, NA_integer_), .Label = c("extrapolated",
"benchmark", "interpolated"), class = "factor"), i_xr = structure(c(NA_integer_,
NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_
), .Label = c("market", "estimated"), class = "factor"),
i_outlier = structure(c(NA_integer_, NA_integer_, NA_integer_,
NA_integer_, NA_integer_, NA_integer_), .Label = c("no",
"yes"), class = "factor"), i_irr = structure(c(NA_integer_,
NA_integer_, NA_integer_, NA_integer_, NA_integer_, NA_integer_
), .Label = c("regular", "lowcapital", "lowerbound", "outlier"
), class = "factor"), cor_exp = c(NA_real_, NA_real_, NA_real_,
NA_real_, NA_real_, NA_real_), statcap = c(NA_real_, NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_), csh_c = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), csh_i = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), csh_g = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), csh_x = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), csh_m = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), csh_r = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_c = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_i = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_g = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_x = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_m = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_n = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), pl_k = c(NA_real_,
NA_real_, NA_real_, NA_real_, NA_real_, NA_real_), id = c(1L,
1L, 1L, 1L, 1L, 1L)), row.names = c("ABW-1950", "ABW-1951",
"ABW-1952", "ABW-1953", "ABW-1954", "ABW-1955"), class = "data.frame")
Now I want to run the following code:
library(dplyr)
library(ggplot2)
library(Synth)
library(pwt10)
#Experimental design
df <- pwt10.0 %>%
mutate(id = group_indices_(pwt10.0, .dots=c('isocode')))
comparison_states <- c("ARM", "AUS", "CAN", "CHN",
"GBR", "ITA", "JPN", "LUX",
"NOR", "NZL", "SGP", "SWE",
"THA", "TWN", "USA")
control_ids<- df %>%
select(isocode, id) %>%
filter(isocode %in% comparison_states) %>%
distinct() %>%
pull(id)
dataprep.out<-dataprep(
foo = as.data.frame(df),
predictors = c("rgdpe", "avh", "rconna", "rtfpna", "rkna", "emp"),
predictors.op = "mean",
dependent = "labsh",
unit.variable = "id",
time.variable = "year",
treatment.identifier = 94,
controls.identifier = control_ids,
time.predictors.prior = c(1991:1997),
time.optimize.ssr = c(1995:1997),
special.predictors = list(list("labsh", 1992:1997 ,"mean")),
unit.names.variable = "isocode",
time.plot = 1992:2005
)
It keeps generating the following error message:
Error in dataprep(foo = as.data.frame(df), predictors = c("rgdpe", "avh", : unit.names.variable not found as character variable in foo.
I think it should be working because isocode has string values.
But I don't know why and want to fix this issue.
A: I cannot tell you what's going on behind the scenes, but I think that Synth wants a few things:
First, turn factor variables into characters;
df <- df %>% mutate_if(is.factor, as.character)
Second, make sure you don't have too many NA values -- I'm replacing your NAs with 1s just to get the code to run;
df[is.na(df)] <- 1
Third, make sure your predictors are numeric.
predictors <- c("rgdpe", "avh", "rconna", "rtfpna", "rkna", "emp")
df[,predictors] <- sapply(df[,predictors],as.numeric)
That is sufficient for me to be able to generate dataprep.out. Does that help?
| |
doc_23534662
|
For example, the following is sample content I would receive, where the data contains CDATA tags. But there are some other scenarios where the CDATA tags are omitted.
<Data><![CDATA[ <h1>CHAPTER 2<br/> EDUCATION</h1>
<P> Analysis paragraph </P> ]]></Data>
Is there an elegant way to somehow detect that, and implement a ReadXml method that can parse both types of input (with or without CDATA)? So far my ReadXml() implementation is as follows, but I am getting parsing errors when the CDATA tag is omitted.
public void ReadXml(XmlReader reader)
{
bool isEmpty = reader.IsEmptyElement;
reader.ReadStartElement();
if (isEmpty)
{
_data = string.Empty;
}
else
{
switch (reader.MoveToContent())
{
case XmlNodeType.Text:
case XmlNodeType.CDATA:
_data = reader.ReadContentAsString();
break;
default:
_data = string.Empty;
break;
}
reader.ReadEndElement();
}
}
A: The code below is tested on the following samples:
<Data><h1>CHAPTER 2<br/> EDUCATION</h1><P> Analysis paragraph </P></Data>
<Data>test<h1>CHAPTER 2<br/> EDUCATION</h1><P> Analysis paragraph </P></Data>
<Data><![CDATA[ <h1>CHAPTER 2<br/> EDUCATION</h1><P> Analysis paragraph </P> ]]></Data>
<Data></Data>
I use an XPathNavigator instead as it allows backtracking.
public void ReadXml(XmlReader reader)
{
XmlDocument doc = new XmlDocument {PreserveWhitespace = false};
doc.Load(reader);
var navigator = doc.CreateNavigator();
navigator.MoveToChild(XPathNodeType.Element);
_data = navigator.InnerXml.Trim().StartsWith("<") ? navigator.Value : navigator.InnerXml;
}
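The same detect-by-content idea carries over to other XML stacks. Here is a minimal Python sketch (an illustrative analogue, not the questioner's C# code) using the standard library's xml.etree, where CDATA is surfaced as ordinary element text, so the check becomes whether the element has child elements:

```python
import xml.etree.ElementTree as ET

def read_data(xml_string):
    """Return the payload of <Data>, whether it arrives as CDATA/text or inline markup."""
    el = ET.fromstring(xml_string)
    if len(el):  # child elements present: the markup was not wrapped in CDATA
        inner = (el.text or "") + "".join(
            ET.tostring(child, encoding="unicode") for child in el
        )
        return inner.strip()
    return (el.text or "").strip()  # CDATA, plain text, or an empty element

print(read_data("<Data><![CDATA[ <h1>CHAPTER 2</h1> ]]></Data>"))
print(read_data("<Data><h1>CHAPTER 2</h1></Data>"))
```

Both calls return the same inner markup string, mirroring the XPathNavigator trick above.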
| |
doc_23534663
|
BEGIN TRAN
ROLLBACK
When I removed the above lines and just ran the proc, everything worked fine. I guess it was doing a rollback, as SQLMenace said... my bad, I guess. It never happened before, so I was quite confused. Anyway, thanks; hopefully it will help someone else.
Hi all,
I have a stored procedure that basically inserts some rows after checking that those entries don't already exist. Now it says that rows were affected when I run it, but when I open the table it has no new entries. Hence, every time I run the proc it says it has inserted entries when it should actually just find the existing values and do nothing. It shows something like this -
(1 row(s) affected)
(1 row(s) affected)
when it should be only showing
(1 row(s) affected)
Now I am guessing it's deleting the row immediately after it inserts it, and that is why it never shows up. I checked for any update or delete cascade constraints on the table, but I didn't find any. Can anyone help me and give some advice on this?
A: You have a trigger on that table that probably deletes the row
run this to verify, change 'your table name' to the name of the table
select *
from sys.triggers
where OBJECT_NAME(parent_Id) = 'your table name'
If the trigger doesn't exist, post the proc code; it is possible that you are doing a rollback.
A: BTW, you can change the SSMS Tools Pack behavior:
SSMS Tools --> New Query Template --> Options --> delete the SQL from the template text
See image below
| |
doc_23534664
|
using (var tx = new TransactionScope(TransactionScopeOption.RequiresNew, new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
{
using (var db = MyDataContext.GetDataContext())
{
try
{
MyObject myObject = new MyObject()
{
SomeString = "Monday"
};
db.MyObjects.InsertOnSubmit(myObject);
db.SubmitChanges();
tx.Complete();
}
catch (Exception e)
{
}
}
}
A: My understanding is that it works when the transaction scope is tied to only one connection. Because it is a best practice to open a connection late and close it early, there might be situations where the scope spans two connections. Those scenarios are not supported in SQL Azure.
An example of where it might not work is, taking your example, assuming MyDataContext.GetDataContext() returns a new instance of a connection:
using (var tx = new TransactionScope(TransactionScopeOption.RequiresNew,
new TransactionOptions()
{ IsolationLevel = IsolationLevel.ReadCommitted }
))
{
try{
DoSomething(); //a method with using (var db = MyDataContext.GetDataContext())
DoSomethingElse(); //another method with using (var db = MyDataContext.GetDataContext())
tx.Complete();
}
catch { //Exception handler
}
}
These links should give you some pointers as well:

* Transactions in Sql Azure
A: Quick update on distributed transactions with Azure SQL Database: A couple of days ago, we introduced support for distributed transactions in Azure SQL Database. The feature that lights up the capability is called elastic database transactions. It focuses on scenarios using the .NET distributed transaction APIs such as TransactionScope. These APIs start working against Azure SQL Database once you installed the new 4.6.1 release of the .NET framework. You can find more information about how to get started here: https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-transactions-overview/.
Please give it a try!
| |
doc_23534665
|
EDIT: I collect all the classes from the project. I separate abstract classes, interfaces, and subclasses by looking at the collection, and I also want to know how many classes have test behavior; in other words, how many classes are actually test classes. One more thing: I don't know these classes in advance, they are not mine!
A: Let me share with you how I like to organize my tests in Eclipse :-) maybe you'll find it useful.
First, I create two projects, one for the app and another for the test.
The test project, of course, has a dependency on the app project
Now, let's suppose you want to add some test case; you just point to the right src dir.
This way you create your test code without mixing app code and test code (utility classes, for example); just leave what's specific in the right project.
The only naming convention I use is the Eclipse JUnit default, appending the word "Test" to the end of the test class name.
No need for ant scripts to deploy only the app code.
Even JUNIT dependency is restricted to the test project.
I hope it helps.
| |
doc_23534666
|
To sort the date column, what I was thinking of is to put the condition like this:
if(index === 3){//for date column sort...
} else {
return function(a, b) {
var valA = getCellValue(a, index), valB = getCellValue(b, index)
return $.isNumeric(valA) && $.isNumeric(valB) ? valA - valB : valA.localeCompare(valB)
}
}
But I couldn't really figure out how to sort the table when I have a date range in the Date column. Any help would be greatly appreciated!
//sort table
$('th').click(function(){
//alert($(this).index())
$('th').css({'background-color' : '#cccccc'});
$(this).css('background-color', '#808080');
var table = $(this).parents('table').eq(0)
var rows = table.find('tr:gt(0)').toArray().sort(comparer($(this).index()))
this.asc = !this.asc
if (!this.asc){rows = rows.reverse()}
for (var i = 0; i < rows.length; i++){table.append(rows[i])}
})
function comparer(index) {
return function(a, b) {
var valA = getCellValue(a, index), valB = getCellValue(b, index)
return $.isNumeric(valA) && $.isNumeric(valB) ? valA - valB : valA.localeCompare(valB)
}
}
function getCellValue(row, index){ return $(row).children('td').eq(index).text() }
th{
background-color: #cccccc;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<table>
<thead>
<tr>
<th>S.No.</th>
<th>Number</th>
<th>Text</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>100</td>
<td>Canada</td>
<td>01/06/2016 - 01/07/2018</td>
</tr><tr>
<td>2</td>
<td>3000</td>
<td>USA</td>
<td>12/08/2017 - 12/12/2017</td>
</tr><tr>
<td>3</td>
<td>1202</td>
<td>Mexico</td>
<td>12/09/2018 - 01/07/2018</td>
</tr><tr>
<td>4</td>
<td>20</td>
<td>Brazil</td>
<td>04/29/2018 - 05/01/2018</td>
</tr><tr>
<td>5</td>
<td>1680</td>
<td>Germany</td>
<td>04/29/2018 - 05/01/2018</td>
</tr>
</tbody>
</table>
A: One option would be to split the date range and use only its first part.
For example, with a range like 12/09/2018 - 01/07/2018, I would take just the first part (12/09/2018), get the time from that date, and use it to compare with the other rows' dates and sort:
//sort table
$('th').click(function(){
//alert($(this).index())
$('th').css({'background-color' : '#cccccc'});
$(this).css('background-color', '#808080');
var table = $(this).parents('table').eq(0)
var rows = table.find('tr:gt(0)').toArray().sort(comparer($(this).index()))
this.asc = !this.asc
if (!this.asc){rows = rows.reverse()}
for (var i = 0; i < rows.length; i++){table.append(rows[i])}
})
function comparer(index) {
if(index === 3){//for date column sort...
return function(a, b) {
var valA = getCellValue(a, index), valB = getCellValue(b, index)
var datePartsA = valA.split(" - ")[0].split("/"); //MM/DD/YYYY
var dateA = new Date(datePartsA[2], (datePartsA[0] - 1), datePartsA[1]);
var dateResultA = dateA.getTime ();
var datePartsB = valB.split(" - ")[0].split("/");
var dateB = new Date(datePartsB[2], (datePartsB[0] - 1), datePartsB[1]);
var dateResultB = dateB.getTime ();
return $.isNumeric(dateResultA) && $.isNumeric(dateResultB) ? dateResultA - dateResultB : dateResultA.localeCompare(dateResultB)
}
} else { //for other sort
return function(a, b) {
var valA = getCellValue(a, index), valB = getCellValue(b, index)
return $.isNumeric(valA) && $.isNumeric(valB) ? valA - valB : valA.localeCompare(valB)
}
}
}
function getCellValue(row, index){ return $(row).children('td').eq(index).text() }
th{
background-color: #cccccc;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<table>
<thead>
<tr>
<th>S.No.</th>
<th>Number</th>
<th>Text</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>100</td>
<td>Canada</td>
<td>01/06/2016 - 01/07/2018</td>
</tr><tr>
<td>2</td>
<td>3000</td>
<td>USA</td>
<td>12/08/2017 - 12/12/2017</td>
</tr><tr>
<td>3</td>
<td>1202</td>
<td>Mexico</td>
<td>12/09/2018 - 01/07/2018</td>
</tr><tr>
<td>4</td>
<td>20</td>
<td>Brazil</td>
<td>04/29/2018 - 05/01/2018</td>
</tr><tr>
<td>5</td>
<td>1680</td>
<td>Germany</td>
<td>04/29/2018 - 05/01/2018</td>
</tr>
</tbody>
</table>
If both dates have the same first part, you can compare the second parts too.
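The split-the-first-date idea is easy to verify outside the browser; here is a small Python sketch (assuming the MM/DD/YYYY format used in the table):

```python
from datetime import datetime

def start_date(date_range):
    """Parse the first date of a 'MM/DD/YYYY - MM/DD/YYYY' range string."""
    first = date_range.split(" - ")[0]
    return datetime.strptime(first, "%m/%d/%Y")

# The date ranges from the sample table.
ranges = [
    "01/06/2016 - 01/07/2018",
    "12/08/2017 - 12/12/2017",
    "12/09/2018 - 01/07/2018",
    "04/29/2018 - 05/01/2018",
]

# Sort rows chronologically by the start of each range.
for r in sorted(ranges, key=start_date):
    print(r)
```

A tie-breaker on the end date could be added by returning a tuple of both parsed dates from the key function.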
| |
doc_23534667
|
I found this very hard to implement using Cake syntax, so I decided to concatenate the name of the school and the name of the show and then compare that string to the $term. I realised VirtualFields would not work across multiple models, so I adopted prepared statements; the syntax itself evaluated fine, but I get this SQL error:
"Column not found: 1054 Unknown column 'romeo and juliet' in 'where clause'"
So it seems to pass the term fine but compares it to the column name rather than the record. Here is the syntax for my AutoComplete function with the prepared statement:
public function autoComplete() {
$this->autoRender = false;
$term = $_GET['term'];
$shownames = $this->Order->Show->query(
'SELECT CONCAT(sm_schools.title, " - ", sm_shows.title)
FROM sm_shows
LEFT JOIN sm_schools
ON sm_shows.school_id = sm_schools.id
WHERE CONCAT(sm_schools.title, " - ", sm_shows.title) LIKE ' .$term
);
echo json_encode($this->_encode($shownames));
}
Any help would be great.
Thanks
A: There are no quotes around 'romeo and juliet', which is probably where the error is coming from. However, you should be doing this with actual prepared statements.
$db = $this->Order->getDataSource();
$shownames = $db->fetchAll(
'SELECT CONCAT(sm_schools.title, " - ", sm_shows.title)
FROM sm_shows
LEFT JOIN sm_schools
ON sm_shows.school_id = sm_schools.id
WHERE CONCAT(sm_schools.title, " - ", sm_shows.title) LIKE ?',
array($term)
);
| |
doc_23534668
|
struct Foo {
static let constant = "SomeConstant"
}
print(Foo.constant)
enum Foo: String {
case constant = "SomeConstant"
}
print(Foo.constant.rawValue)
* Which one would make more sense based on a comparison of memory allocation at runtime?
* Since both seem to be type properties to me, will they remain in stack memory for as long as the app is alive?
A: The Swift language doesn't have an official standard to refer to in cases like this. The memory layouts of these two pieces of code are implementation-defined, by the Apple Swift compiler, which is the de-facto standard for the language.
You can look at the emitted SIL or machine code, however, any observations you make are consequences of current implementation details, which are subject to change.
All that is to say: there's no reason why the compiler should handle these differently, but you can't rely on that to not change in the future.
| |
doc_23534669
|
Splunk regular expressions are PCRE (Perl Compatible Regular Expressions) and use the PCRE C library.
The regex in question:
"(Exception \W: |Exception: |Microsoft.Data.SqlClient.SqlException | Exception\s \(\dx\d{8}\)\: | Microsoft.Data.SqlClient.SqlException\s \(\dx\d{8}\)\: )(?<ErrInfo2>[A-Za-z0-9\s_@.\/<#&->+?=:$!',\\\)(;-]+)"
Piece of text it works on :
Microsoft.Data.SqlClient.SqlException (0x80131904): Transaction (Process ID 76) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Piece of text it doesn't work on:
System.Data.SqlClient.SqlException (0x80131904): Cannot insert duplicate key row in object 'collecting.IstHochrechnung' with unique index 'IX_IstHochrechnung_Year_CostUnitId_CostCenterId'. The duplicate key value is (2022, 2605, 333).
The part Exception\s \(\dx\d{8}\)\: takes care of the (0x80131904): entry, so I don't see why it would remain empty.
Any help is more than appreciated!!
A: One is using the namespace Microsoft.Data and the other System.Data. You just need to make it so it doesn't matter.
(Exception \W: |Exception: | ?\w{1,}.Data.SqlClient.SqlException\s ?\(\dx\d{8}\)\: )(?<ErrInfo2>[A-Za-z0-9\s_@.\/<#&->+?=:$!',\\\)(;-]+)
This still cares about the namespace, but has made it so that the first segment can be any word. E.g. test.Data.SqlClient.
I always test on regex101.com
Here is a smaller query as well... noticed you had a fair bit of duplication.
[\w\.]{1,}Exception ?\(0x\d{8}\)?: ?(?<ErrInfo2>[A-Za-z0-9\s_@.\/<#&->+?=:$!',\\\)(;-]+)
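The namespace-agnostic idea is easy to check outside Splunk. Below is a small Python sketch; note it uses a simplified pattern, not the exact Splunk regex above, and Python's re spells named groups as (?P<name>…) rather than PCRE's (?<name>…):

```python
import re

# Simplified, namespace-agnostic pattern: any dotted identifier ending in
# "Exception", an optional 0x........ code in parentheses, then the message.
pattern = re.compile(r"[\w.]+Exception ?\(0x[0-9a-fA-F]{8}\): ?(?P<ErrInfo2>.+)")

samples = [
    "Microsoft.Data.SqlClient.SqlException (0x80131904): Transaction "
    "(Process ID 76) was deadlocked on lock resources with another process.",
    "System.Data.SqlClient.SqlException (0x80131904): Cannot insert "
    "duplicate key row in object 'collecting.IstHochrechnung'.",
]

# Both the Microsoft.Data and System.Data variants should now capture.
captured = [pattern.search(s).group("ErrInfo2") for s in samples]
for text in captured:
    print(text)
```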
| |
doc_23534670
|
I'm ignoring factors (excuse the pun) such as the availability of tools and libraries. I'm asking only about the language paradigm itself.
A: One of the important reasons stack-based languages are being developed is because the minimalism of their semantics allows straightforward interpreter and compiler implementation, as well as optimization.
So, one of the practical advantages of this paradigm is that it allows enthusiasts to easily build more complex things and paradigms on top of them.
The Scheme programming language is another example of that: minimalist syntax and semantics, straightforward implementation, and lots of fun!
A: [EDITED] We already have good answers and I know nothing about the Factor language. However, the favouring of stack usage is a practical advantage of a stack-oriented paradigm and a reason to adopt such a paradigm, as asked.
So, I think it is worth listing the advantages of stack usage instead of heap allocation for completeness:
*CPU Time -- The time cost of memory allocation on the stack is practically free: it doesn't matter if you are allocating one or one thousand integers, all it takes is a stack pointer decrement operation.
*Memory leak -- There are no memory leaks when using the stack only. That happens naturally without additional code overhead to deal with it. The memory used by a function is completely released when returning from each function even on exception handling or using longjmp (no referencing counting, garbage collection, etc).
*Fragmentation -- Stacks also avoid memory fragmentation naturally. You can achieve zero fragmentation without any additional code to deal with this like an object pool or slab memory allocation.
*Locality -- Data in stack favors the data locality, taking advantage of cache and avoiding page swaps.
Of course, it may be more complicated to implement, depending on your problem, but we should favor the stack over the heap whenever we can, in any language. Leave malloc/new to be used only when actually needed (size or lifetime requirements).
A: Stack orientation is an implementation detail. For example, Joy can be implemented using rewriting - no stack. This is why some prefer to say "concatenative" or "compositional". With quotations and combinators you can code without thinking about the stack.
Expressing yourself with pure composition and without locals or named arguments is the key. It's extremely succinct with no syntactic overhead. Composition makes it very easy to factor out redundancy and to "algebraically" manipulate your code; boiling it down to its essence.
Once you've fallen in love with this point-free style you'll become annoyed by even the slightest composition syntax in other languages (even if just a dot). In concatenative languages, white space is the composition operator.
A: For some people it's easier to think in terms of managing stacks than other paradigms. At the very least, doing some hacking in a stack-based language will improve your ability to manage stacks in general.
Aside: in the early days of handheld calculators, they used something called Reverse Polish notation, which is a very simple stack-based postfix notation, and is extremely memory efficient. People who learn to use it efficiently tend to prefer it over algebraic calculation.
A: I'm not sure whether this will quite answer your question, but you'll find that Factor describes itself as a concatenative language first and foremost. It just happens also to have a stack-based execution model. Unfortunately, I can't find Slava's blog post(? or maybe on the Factor Wiki?) talking about this.
The concatenative model basically means that you pass around "hunks of code" (well, that's how you program anyway) and composition looks like concatenation. Operations like currying are also easy to express in a stack-based language since you just pre-compose with code that adds one thing to the stack. In Factor, at least, this is expressed via a word called curry. This makes it much easier to do higher order programming, and mapping over sequences eventually becomes the "obvious way to do it". I came from Lisp and was amazed going back after programming in Factor for a bit that you couldn't do "obvious things" like bi in Lisp. It really does change how you express things.
Incidentally, it's wise not to get too hung up on the whole stack manipulation thing. Using the locals vocabulary (described here: http://docs.factorcode.org/content/article-locals.html), you don't have to worry about shuffling things around. Often there's a neat way to express things without local variables, but I tend to do that second.
| |
doc_23534671
|
As you asked, I declared the sorted list. I have a class in it, and an artist for the key, which comes from a textbox of a GUI.
Let us say that
string artsearch = something;
SortedList<string, Artist> list = new SortedList<string, Artist>();
bool contains = list.ContainsKey(artsearch);
Artist a = new Artist(artist, members, albm1);
a.Name = artist;
a.Members = members;
list.Add(artist, a);
public class Artist : IComparable
{
public string name;
public string members;
public string album;
public Guid artistID;
public LinkListGen<string> albums;
public Artist()
{
artistID = new Guid();
}
public Artist(string name, string members, LinkListGen<string> albums)
{
this.name = name;
this.members = members;
}
public string Name
{
get { return name; }
set { name = value; }
}
public string Albums
{
get { return album; }
set { album = value; }
}
public string Members
{
get { return members; }
set { members = value; }
}
public int CompareTo(object obj)
{
if (obj is Artist)
{
Artist other = (Artist)obj;
return name.CompareTo(other.name);
}
if (obj is string)
{
string other = (string)obj;
return members.CompareTo(other);
}
else
{
return -999;
}
}
}
A: You can get the value using the indexer:
if(contains)
{
Console.WriteLine("{0} -> {1}", artsearch, list[artsearch]);
}
You can combine these using TryGetValue:
Artist value;
if(list.TryGetValue(artsearch, out value))
{
Console.WriteLine("{0} -> {1}", artsearch, value);
}
Note this works on any class which implements IDictionary<TKey, TValue>, not just SortedList<TKey, TValue>.
| |
doc_23534672
|
I export the data and try to import it using the read.csv function which I've never had problems using before and receive the following error code
un <- read.csv("un.csv", na.strings = "..")
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
more columns than column names
for reference, I've opened the csv file in Word and this is the format it is in;
""Extracted from the UNHCR Population Statistics Reference Database","United Nations High Commissioner for Refugees"
"Date extracted: 2015-09-18 04:37:24 +02:00"
Year,"Country / territory of asylum/residence",Origin,"Population type",Value
1951,Australia,Various/Unknown,"Refugees (incl. refugee-like situations)",180000
1951,Austria,Various/Unknown,"Refugees (incl. refugee-like situations)",282000
1951,Belgium,Various/Unknown,"Refugees (incl. refugee-like situations)",55000
1951,Canada,Various/Unknown,"Refugees (incl. refugee-like situations)",168511
1951,Switzerland,Various/Unknown,"Refugees (incl. refugee-like situations)"
and so appears to be in the correct format so I'm at a bit of a loss as to what's going wrong.
Thanks for your help
Chris
A: Your data have typos. It should be like:
un = structure(list(Year = c(1951L, 1951L, 1951L, 1951L, 1951L), Country...territory.of.asylum.residence = structure(1:5, .Label = c("Australia",
"Austria", "Belgium", "Canada", "Switzerland"), class = "factor"),
Origin = structure(c(1L, 1L, 1L, 1L, 1L), .Label = "Various/Unknown", class = "factor"),
Population.type = structure(c(1L, 1L, 1L, 1L, 1L), .Label = "Refugees (incl. refugee-like situations)", class = "factor"),
Value = c(180000, 282000, 55000, 168511, NA)), .Names = c("Year",
"Country...territory.of.asylum.residence", "Origin", "Population.type",
"Value"), class = "data.frame", row.names = c(NA, -5L))
Then you could import easily using read.table():
df=read.table("un.csv", header = T, sep=",")
| |
doc_23534673
|
div[0].innerText === "aaaaa zzzzz"
div[1].innerText === "aaaaainvisiblezzzzz"
How can I force innerText to give the same result for div[1] as it gives for div[0]?
I’ve tried to append div[1] to a temporary document but, since the document wasn’t actually displayed, it didn’t help. Only appending it to a literally visible document works.
Test code
var div = [];
div[0] = document.getElementById("visible");
div[1] = div[0].cloneNode(true);
show(0);
show(1);
function show(i) {
document.getElementById("output").innerHTML +=
"<p>div[" + i + "].innerText === <code>" +
div[i].innerText.replace(/\n/g, "") + "</code></p>";
}
#visible {display: block; font-family: sans-serif; font-size: larger; color: red;}
code {background-color: lightgray; padding: 0 .318em;}
<div id="visible">
<span style="display: inline">aaaaa</span>
<span style="display: none">invisible</span>
<span style="display: inline">zzzzz</span>
</div>
<div id="output"></div>
A:
Only appending it to a document literally visible to the user works.
But the user doesn't necessarily have to see that. :-) If you append it, grab innerText, and then remove it, the user will never see it:
var div = [];
div[0] = document.getElementById("visible");
div[1] = div[0].cloneNode(true);
show(0);
document.body.appendChild(div[1]); // *****
show(1);
document.body.removeChild(div[1]); // *****
function show(i) {
document.getElementById("output").innerHTML +=
"<p>div[" + i + "].innerText === <code>" +
div[i].innerText.replace(/\n/g, "") + "</code></p>";
}
#visible {display: block; font-family: sans-serif; font-size: larger; color: red;}
code {background-color: lightgray; padding: 0 .318em;}
<div id="visible">
<span style="display: inline">aaaaa</span>
<span style="display: none">invisible</span>
<span style="display: inline">zzzzz</span>
</div>
<div id="output"></div>
Alternately, since the element isn't in the DOM, it can't be made invisible by CSS, only inline styles. I can't think of any other inline style that would make the text get left out of innerText other than your display: none and visibility: hidden (opacity: 0, for instance, doesn't do it), so it's trivial to exclude those and normalize whitespace for non-pre elements:
function getInnerText(element) {
var node, text = "";
if (element.style.display.toLowerCase() !== "none" && element.style.visibility.toLowerCase() !== "hidden") {
for (node = element.firstChild; node; node = node.nextSibling) {
if (node.nodeType === 3) {
text += node.nodeValue;
} else if (node.nodeType === 1) {
text += getInnerText(node);
}
}
}
// Normalize all whitespace if not "pre"
if (element.tagName !== "PRE" && element.style.whiteSpace.toLowerCase().indexOf("pre") == -1) {
text = text.replace(/\s+/g, ' ');
}
return text;
}
That may well need tweaking (I don't think it handles <div>stuff<pre>big gap</pre></div> properly), but you can run with the idea if you don't want to use the first solution above...
Example:
var div = [];
div[0] = document.getElementById("visible");
div[1] = div[0].cloneNode(true);
show(0);
document.body.appendChild(div[1]); // *****
show(1);
document.body.removeChild(div[1]); // *****
function show(i) {
document.getElementById("output").innerHTML +=
"<p>div[" + i + "].innerText === <code>" +
getInnerText(div[i]).replace(/\n/g, "") + "</code></p>";
}
function getInnerText(element) {
var node, text = "";
if (element.style.display.toLowerCase() !== "none" && element.style.visibility.toLowerCase() !== "hidden") {
for (node = element.firstChild; node; node = node.nextSibling) {
if (node.nodeType === 3) {
text += node.nodeValue;
} else if (node.nodeType === 1) {
text += getInnerText(node);
}
}
}
// Normalize all whitespace if not "pre"
if (element.tagName !== "PRE" && element.style.whiteSpace.toLowerCase().indexOf("pre") == -1) {
text = text.replace(/\s+/g, " ");
}
return text;
}
#visible {display: block; font-family: sans-serif; font-size: larger; color: red;}
code {background-color: lightgray; padding: 0 .318em;}
<div id="visible">
<span style="display: inline">aaaaa</span>
<span style="display: none">invisible</span>
<span style="display: inline">zzzzz</span>
</div>
<div id="output"></div>
| |
doc_23534674
|
Something like :
select userid, permission from permissions where all_of permissions in ('view', 'delete', 'add', 'edit');
Note:
this query is not to do with mysql permissions. It is a generic question, assuming that I have a user_permissions table which has the following fields & data:
userid | permission
1 | view
1 | add
2 | view
2 | delete
2 | add
2 | edit
The query I'm asking should return
userid
2
Please let me know if this is not clear.
Thanks in advance
A: Look into the
SELECT * FROM information_schema.user_privileges
WHERE grantee = '\'root\'@\'localhost\''
OR
SHOW GRANTS FOR 'root'@'localhost';
A: select userid, GROUP_CONCAT(DISTINCT permission ORDER BY permission DESC) as permissions_grouped from user_permissions where permission in ('view', 'delete', 'add', 'edit') GROUP BY userid HAVING permissions_grouped = "view,edit,delete,add";
This will first get all the users who have any of those permissions, then concatenate all of their permissions into an ordered string; the HAVING clause then only selects rows with the right string.
edit: formatting
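An alternative that avoids depending on the exact concatenated string is to count distinct matching permissions per user. Here is a runnable sketch in Python using sqlite3 (illustrative only; table, column names and data are taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_permissions (userid INTEGER, permission TEXT)")
conn.executemany(
    "INSERT INTO user_permissions VALUES (?, ?)",
    [(1, "view"), (1, "add"),
     (2, "view"), (2, "delete"), (2, "add"), (2, "edit")],
)

# A user qualifies only if they hold all four required permissions.
rows = conn.execute(
    "SELECT userid FROM user_permissions"
    " WHERE permission IN ('view', 'delete', 'add', 'edit')"
    " GROUP BY userid"
    " HAVING COUNT(DISTINCT permission) = 4"
).fetchall()
print(rows)  # only userid 2 holds all four permissions
```

The COUNT(DISTINCT …) form keeps working if you add a fifth permission later, since only the IN list and the count need to change.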
A: You can also do it like this:
SHOW GRANTS FOR CURRENT_USER;
| |
doc_23534675
|
If there are multiple messages coming for the upstream of the service-activator, so, only one bean, or class, will be instantiated? right?
Or the bean in service-activator will be instantiated every time a message comes?
Thx
For example, I have a service-activator like this:
<int:service-activator input-channel="input" method="trans" output-channel="output">
<bean class="com.example.eurowp.Transformer" init-method="onInit" destroy-method="onDestroy">
</bean>
</int:service-activator>
A: There is just one instance - the object (bean) is created during context initialization, not at runtime.
If running in a multi-threaded environment, the class must be thread-safe.
| |
doc_23534676
|
When using EJBQL to process data, it seems to have some limitations:
*It cannot process datetime values, such as extracting a part of a date (day, month or year)
*It cannot find datetime values in a from...to range
*It cannot compare datetime fields
*It cannot map a non-entity class to a customized native select query; I want to get List data from a SELECT statement, but when I join 2 or more tables and try to map the output into a class, it seems impossible
@PersistenceContext
private EntityManager em;
em.createNativeQuery("SELECT a.usertype, b.username, b.userpass " +
    "FROM tablea a, tableb b WHERE a.id = b.id", MyClass.class).getResultList();
.....
class MyClass {
String usertype;
String username;
String userpass;
}
Could you help me any ideas?
Thank in advance!
A:
*It cannot; do it in your code. Otherwise, you need to use something database-specific on one side of your condition.
*It can. You can use between :fromDate and :toDate in the query, or use > :fromDate and < :toDate, in the NamedQuery.
*It can. Similar to the last one, use the = sign instead.
*It can, using @SqlResultSetMapping. Refer to this.
| |
doc_23534677
|
Parse error: syntax error, unexpected $end in /.../ on ...
It will point to different line numbers on my script for every time it does appear and if I reload the script on the browser, then it loads the page successfully with no errors.
My question is though that why does this error sometimes appear when I load the script on the browser?
A: The Parse error: syntax error, unexpected $end in / means:
That you have forgotten to close a block in your PHP script with a closing }.
Check if you have left a block open somewhere. If you want, you could always count all the { and } and you will find out that there is at least one more, or one less. :) Depends on how you look at it.
A: This happens a lot to me when I'm developing websites, usually it's because the file hasn't completely uploaded via FTP.
If this is the case, try waiting a bit longer before accessing the page in your browser.
| |
doc_23534678
|
for (int i = 0; i < mNumberOfAlarm; i++) {
code = cursor.getInt(DATABASE_COLUMN_CODE);
id = cursor.getInt(DATABASE_COLUMN_ID);
Log.i("retrieve code databse", " " + code + " " + id);
arrayInstanceAlarmFragment.add(InstanceAlarmFragment.newInstance(code));
getChildFragmentManager().beginTransaction().add(R.id.instance_of_alarm, arrayInstanceAlarmFragment.get(i)).commit();
cursor.moveToNext();
}
But I have a problem at this line:
getChildFragmentManager().beginTransaction().add(R.id.instance_of_alarm, arrayInstanceAlarmFragment.get(i)).commit();
It works but not in background thread, so i have a choregrapher warning like this:
I/Choreographer: Skipped 120 frames! The application may be doing too much work on its main thread.
I don't understand, because the work is done in the doInBackground method of my AsyncTask?
thanks a lot for your help.
| |
doc_23534679
|
http://blog.teamtreehouse.com/create-ajax-contact-form
I'm using PHP Version 5.3.10-1ubuntu3.4 on my server and I've been having trouble with http_response_code(); which is what the example tutorial at the above link uses. I've read http_response_code(); only works with PHP 5.4. So instead I have reverted to using header();.
I have my form working just fine, and it displays a success message when I submit, rather than the errors I was getting with http_response_code();. But my PHP isn't that great, and I want to know if what I have done is acceptable or if I should be doing it a different way. Please correct my code if so.
Here's the contents of my mailer.php file, where you can see I've commented out http_response_code(); and am using header();.
if ($_SERVER["REQUEST_METHOD"] == "POST") {
// Get the form fields and remove whitespace.
$name = strip_tags(trim($_POST["name"]));
$name = str_replace(array("\r","\n"),array(" "," "),$name);
$email = filter_var(trim($_POST["email"]), FILTER_SANITIZE_EMAIL);
$phone = trim($_POST["phone"]);
$company = trim($_POST["company"]);
$minbudget = trim($_POST["minbudget"]);
$maxbudget = trim($_POST["maxbudget"]);
$message = trim($_POST["message"]);
$deadline = trim($_POST["deadline"]);
$referred = trim($_POST["referred"]);
// Check that data was sent to the mailer.
if ( empty($name) OR empty($phone) OR empty($message) OR !filter_var($email, FILTER_VALIDATE_EMAIL)) {
// Set a 400 (bad request) response code and exit.
//http_response_code(400);
header("HTTP/1.1 400 Bad Request");
echo "Error (400). That's not good, refresh and try again otherwise please email me and let me know you are having trouble submitting this form.";
exit;
}
// Set the recipient email address.
// FIXME: Update this to your desired email address.
$recipient = "myemail@domain.com";
// Set the email subject.
$subject = "Website enquiry from $name";
// Build the email content.
$email_content = "Name: $name\n";
$email_content .= "Email: $email\n\n";
$email_content .= "Phone: $phone\n";
$email_content .= "Company: $company\n\n";
$email_content .= "Budget: $minbudget $maxbudget\n";
$email_content .= "Deadline: $deadline\n";
//$email_content .= "Max Budget: $maxbudget\n";
$email_content .= "\n$message\n\n";
$email_content .= "Referred: $referred\n";
// Build the email headers.
$email_headers = "From: $name <$email>";
// Send the email.
if (mail($recipient, $subject, $email_content, $email_headers)) {
// Set a 200 (okay) response code.
//http_response_code(200);
header("HTTP/1.1 200 OK");
echo "Thank You! I'll be in touch soon.";
} else {
// Set a 500 (internal server error) response code.
//http_response_code(500);
header("HTTP/1.0 500 Internal Server Error");
echo "Error (500). That's not good, refresh and try again otherwise please email me and let me know you are having trouble submitting this form.";
}
} else {
// Not a POST request, set a 403 (forbidden) response code.
//http_response_code(403);
header("HTTP/1.1 403 Forbidden");
echo "Error (403). That's not good, refresh and try again otherwise please email me and let me know you are having trouble submitting this form.";
}
A: Easy solution:
/**
* Sets the response code and reason
*
* @param int $code
* @param string $reason
*/
function setResponseCode($code, $reason = null) {
$code = intval($code);
if (version_compare(phpversion(), '5.4', '>') && is_null($reason))
http_response_code($code);
else
header(trim("HTTP/1.0 $code $reason"));
}
you can use it as:
setResponseCode(404);
or
setResponseCode(401,'Get back to the shadow');
A: I've managed to answer this on my own similar question by going through the PHP source code to work out exactly what happens.
The two methods are essentially functionally equivalent. http_response_code is basically a shorthand way of writing a http status header, with the added bonus that PHP will work out a suitable Reason Phrase to provide by matching your response code to one of the values in an enumeration it maintains within php-src/main/http_status_codes.h.
Note that this means your response code must match a response code that PHP knows about. You can't create your own response codes using this method, however you can using the header method. Note also that http_response_code is only available in PHP 5.4.0 and higher.
In summary - The differences between http_response_code and header for setting response codes:
*Using http_response_code will cause PHP to match and apply a Reason Phrase from a list of Reason Phrases that are hard-coded into the PHP source code.
*Because of point 1 above, if you use http_response_code you must set a code that PHP knows about. You can't set your own custom code, however you can set a custom code (and Reason Phrase) if you use the header function.
*http_response_code is only available in PHP 5.4.0 and higher
A: To answer your main question, the biggest difference I could see between using headers and http_response_code() is that http_response_code() is only supported on PHP 5.4 and greater; older versions would fail using that function.
Using headers as you are in your example will ensure your code works on older versions.
| |
doc_23534680
|
How do I assign an action URL to the parent form tag from one of the sub user controls?
A: You can access the parent form programmatically using this code below.
HtmlForm frm = (HtmlForm)Page.FindControl("Form1");
frm.Enctype = "multipart/form-data";
A: Not sure about your scenario but this worked for me:
Using HTML5 each of your input boxes can have their own unique form action URL. HTML5 formaction attribute overrides the action attribute of the main form.
| |
doc_23534681
|
from pyproj import Geod
lat1 = 42.73864
lon1 = 111.8052
lat2 = 43.24844
lon2 = 110.6083
geod = Geod(ellps='WGS84')
# This implements the highly accurate Vincenty method
bearing = geod.inv(lon1, lat1, lon2, lat2)[0]
# >>> 60.31358
I have also used the following code that uses a Haversine method
from math import degrees, radians, sin, cos, atan2
def bearing(lat1, lon1, lat2, lon2):
lat1, lon1, lat2, lon2 = map(radians, [lat1, lon1, lat2, lon2])
dLon = lon2 - lon1
y = sin(dLon) * cos(lat2)
x = cos(lat1)*sin(lat2) - sin(lat1)*cos(lat2)*cos(dLon)
brng = degrees(atan2(y, x))
if brng < 0: brng += 360.0
return brng
With the same inputs from the previous implementation I get a result of 60.313 degrees, which matches the first implementation. However, when I use the Ruler function in google earth I get a result of 15.71 degrees. Furthermore when I activate the grid on google earth that shows the lines of longitude as a reference, 15.71 degrees makes far more sense. Why does the Google Earth implementation differ from the Python implementations?
A: The outputs of your geod code are correct.
Try to get behind it using easy examples (see below).
This means there was probably a problem entering the coordinates into Google Earth or setting up the ruler, but not in your code.
lat1 = 40
lon1 = 40
lat2 = 39
lon2 = 40
#output 180.0
or
#Kansas City:
lat1 = 39.099912
lon1 = -94.581213
#St Louis:
lat2 = 38.627089
lon2 = -90.200203
#output 96.4809
the second example can be confirmed on this page:
https://www.igismap.com/formula-to-find-bearing-or-heading-angle-between-two-points-latitude-longitude/
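The Kansas City → St Louis check can be reproduced without pyproj using the forward-azimuth formula from the question (with its variable-assignment typo fixed). A self-contained sketch:

```python
from math import degrees, radians, sin, cos, atan2

def bearing(lat1, lon1, lat2, lon2):
    # Convert all four coordinates to radians (all four, in order).
    lat1, lon1, lat2, lon2 = map(radians, [lat1, lon1, lat2, lon2])
    dlon = lon2 - lon1
    y = sin(dlon) * cos(lat2)
    x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dlon)
    brng = degrees(atan2(y, x))
    return brng + 360.0 if brng < 0 else brng

# Kansas City -> St Louis; the geodesic (WGS84) answer above is ~96.48,
# and this spherical formula lands very close to it (roughly 96.5).
b = bearing(39.099912, -94.581213, 38.627089, -90.200203)
print(round(b, 2))
```

The small difference between the spherical result and the geod output comes from the WGS84 ellipsoid versus the spherical-Earth assumption, not from a bug.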
| |
doc_23534682
|
These variables will be used globally, by the way.
After this I use them to set a command, icon and name list for a QListWidget.
If I select an item and click a button, it executes the command and displays the result in a QTextEdit.
--> You can see the code here. <--
How can I achieve this, and is there a better solution?
EDIT:
I'm sorry, but English isn't my native language, so it's hard to explain ...
First, the files, which are:
Dialog.h, Dialog.cpp and Dialog.ui
Then the files which contain the function:
Query.h and Query.cpp
Lastly, the script, which I call variables.sh for example.
It contains something like this:
CmdList=("kcmshell4 --list|grep -q kcm_grub2",
"kcmshell4 --list|grep -q kcm_networkmanagement",
"which pastebunz",
"[ -z $ink3_ver ]")
NameList=("kcm_grub2",
"kcm_networkmanagement",
"pastebunz",
"Shellmenu")
IconList=(":/icons/icons/GNU.png",
":/icons/icons/networkmanager.png",
":/icons/icons/edit-paste.png",
":/icons/icons/menu.png")
I don't know the length or content of these. So I should use QVector, right?
The Query function is called via a button from the Dialog Ui.
Now I must read the variables from variables.sh (this should be done at program start ...).
for (int i = 0; i < ${#CmdList[*]}; i++) // where '${#CmdList[*]}' represents the
{ some magical stuff; } // length of the $CmdList array written in bash ...
Then I must use some loop in my function in Query.cpp like
QVector<QString> vCmdList;
for (int i = 0; i < vCmdList.size(); i++)
{
vCmdList[i] = CmdList[i];
}
I hope it's clearer now, because I have no idea how to explain it more precisely.
Thanks for your patience ^^
A: It would probably be easier to use QSettings and an .ini file to store your commands than bash arrays.
For example:
[kcm_grub2]
command=kcmshell4 --list|grep -q kcm_grub2
icon=:/icons/icons/GNU.png
[kcm_networkmanagement]
command=kcmshell4 --list|grep -q kcm_networkmanagement
icon=:/icons/icons/networkmanager.png
...
With QSettings::childGroups(), you'll be able to iterate over all the command names to then read the command and the icon path for each name.
| |
doc_23534683
|
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="Description" content="Pagina de contact">
<meta name="author" content="Matei Popa">
<link rel="stylesheet" type="text/css" href="css/contactstyle.css">
</head>
<body>
<div class="container">
<ul>
<li>
<div class="navlink">
<div class="img">
<a href="https://www.facebook.com/matei.popa.332" target="_blank">
<img class="image" src="img/contact-pg/facebooklogo.jpg" alt="Logo
Facebook">
</a>
</div>
<a href="https://www.facebook.com/matei.popa.332" target="_blank">
Matei Popa
</a>
</div>
</li>
<li>
<div class="navlink">
<div class="img">
<a href="https://www.instagram.com/matei.popa.332/" target="_blank">
<img class="image" src="img/contact-pg/instagramlogo.jpg" alt="Logo
Instagram">
</a>
</div>
<a href="https://www.instagram.com/matei.popa.332/" target="_blank">
matei.popa.332
</a>
</div>
</li>
<li>
<div class="navlink">
<div class="img">
<a href="https://twitter.com/mateiutz2001" target="_blank">
<img class="image" src="img/contact-pg/twitterlogo.jpg" alt="Logo Twitter">
</a>
</div>
<a href="https://twitter.com/mateiutz2001" target="_blank">
@mateiutz2001
</a>
</div>
</li>
<li>
<div class="navlink">
<div class="img">
<img class="image" src="img/contact-pg/yahoomaillogo.jpg" alt="Logo Yahoo
Mail">
</div>
<div class="paragraph">
<p>alex.matei1808@yahoo.com</p>
</div>
</div>
</li>
<li>
<div class="navlink">
<div class="img">
<img class="image" src="img/contact-pg/gmaillogo.jpg" alt="Logo Gmail">
</div>
<div class="paragraph">
<p>
mattx1829@gmail.com
</p>
</div>
</div>
</li>
</ul>
</div>
<!-- add Phone number, Yahoo Mail, Gmail, Whatsapp, Skype, Reddit, Quora -->
</body>
</html>
Css:
@font-face
{
font-family: Sansation;
src: url(../font/Sansation_Regular.ttf)
}
body
{
background-color: #000;
width: 100%;
height: 100%;
margin: 0;
}
div.container
{
padding-top: 50px;
padding-left: 40px;
margin: 60px;
align-content: flex-start;
}
ul
{
list-style-type: none;
}
li
{
flex-direction: row;
width: 100%;
display: flex;
float: left;
height: 50px;
}
div.navlink
{
display: flex;
flex-direction: row;
width: 100%;
float: left;
text-align: left;
font-family: Sansation;
font-size: 24px;
padding: 10px;
}
a:link
{
color: #FFFFFF;
text-decoration: none;
background-color: transparent;
}
a:hover
{
color: #D1DC19;
background-color: transparent;
text-decoration: underline;
}
a:visited
{
color: #E47507;
background-color: transparent;
text-decoration: none;
}
a:active
{
color: red;
background-color: white;
text-decoration: underline;
}
div.img
{
display: flex;
flex-direction: row;
float: left;
align-content: center;
overflow: hidden;
}
div.paragraph
{
display: table;
margin: 0 auto;
color: #FFFFFF;
font-family: Sansation;
padding-left: 0px;
width: auto;
height: 100%;
float: left;
overflow: hidden;
align-content: center;
text-align: center;
}
p
{
height: 100%;
vertical-align: top;
align-content: center;
}
.image
{
vertical-align: middle;
padding-right: 15px;
width: 30px;
height: 30px;
}
How it looks now
A: You can provide a link for your Yahoo email address using mailto:youremail@address.com.
See: https://www.w3schools.com/html/tryit.asp?filename=tryhtml_links_mailto
Your snippet of code for that li would then be:
<li>
<div class="navlink">
<div class="img">
<a href="mailto:alex.matei1808@yahoo.com">
<img class="image" src="img/contact-pg/yahoomaillogo.jpg" alt="Logo Yahoo Mail"></a>
</div>
<a href="mailto:alex.matei1808@yahoo.com">alex.matei1808@yahoo.com</a>
</div>
</li>
| |
doc_23534684
|
My app uses multiple ws (websocket) client instances, which are initialized inside it to log in to a server.
On top level I just want node js to use multiple cores without duplicating the connections, file logs, and other logs from the app.
const cluster = require('cluster')
const config = require('../config.json').sys
const core = require('./test')
process.title = config.title
;(async () => {
if (cluster.isMaster){
for (let i = 0; i < require('os').cpus().length; i++) cluster.fork()
await core() // app i want to start
}
})()
Would this still benefit from all CPUs?
| |
doc_23534685
|
It works fine, but the problem is that when I delete a user manually from the Firebase Authentication page and then open the app, the user I deleted is still logged in.
Isn't there any way to fix this problem?
A: It's probably because you are using an access token, and the access token is still valid even though you deleted the user. An easy fix for that problem would be to refresh the access token when you start the app.
You should also look into how long the token is valid. Maybe try to shorten the expiration time.
| |
doc_23534686
|
I want to fire a local notification at the appointment time, but I am unable to set it to my custom time.
NSDate *dt1 = [NSDate date]; //2018-07-18 10:41:22 +0000
NSString *datestr = [NSString stringWithFormat:@"%@", [NSDate date]];//2018-07-18 10:42:09 +0000
NSRange range = NSMakeRange(11,5);
NSString *cutstring = @"05:00 PM";
With this I replace a substring of the current date string to build my custom date.
NSString *newstr = [cutstring substringWithRange:NSMakeRange(0, 5)];
NSString *changed = [datestr stringByReplacingCharactersInRange:range withString:newstr]; //2018-07-18 05:00:09 +0000
With this code I get my custom time '2018-07-18 05:00:09 +0000' at which I want to fire the UILocalNotification. The issue is that when I set my custom time on the UILocalNotification, it fires at a seemingly random time like 18 July 2018 at 4:19:29 PM.
[dateFormatter setDateFormat:@"yyyy-MM-dd HH:mm:ss ZZZ"];
NSDate *fireddate = [dateFormatter dateFromString:changed];
localNotification.fireDate = fireddate;
NSLog(@"%@",fireddate);
localNotification.alertBody = [NSString stringWithFormat:@"You have appointment in 1 hour"];
localNotification.soundName = UILocalNotificationDefaultSoundName;
localNotification.applicationIconBadgeNumber = 1;
localNotification.category = @"ACTIONABLE";
//Received Local Notification: <UIConcreteLocalNotification: 0x6000003887b0>{fire date = Wednesday, 18 July 2018 at 4:19:29 PM India Standard Time, time zone = (null), repeat interval = 0, repeat count = UILocalNotificationInfiniteRepeatCount, next fire date = (null), user info = { }}
A: I would just create a fire date with NSDateComponents where you can set year, month, hour, minute etc.
Something like this:
NSDateComponents *dateComponents = [[NSDateComponents alloc] init];
dateComponents.calendar = [NSCalendar currentCalendar];
[dateComponents setYear:year];
[dateComponents setMonth:month];
[dateComponents setDay:day];
[dateComponents setHour:hour];
NSDate *fireDate = [dateComponents date];
You can read more about it in this great article
NSDateComponents - NSHipster
| |
doc_23534687
|
Where is the problem?
public class Factorial4 {
public static void main(String[] args) {
for (int i=1; i<=100; i++){
System.out.println("Factorial of " + i + " is " + factorial(i));
}
}
public static int factorial(int n){
int result = 1;
for (int i=1; i<=100; i++){
result *= i;
}
return result;
}
}
A: This code has multiple issues:
*
*You always create the factorial of 100 (the inner loop runs to 100 instead of n).
*The factorial will produce values that are far bigger than Integer.MAX_VALUE; you'll have to use BigInteger instead. The 0 is simply the result of repeated overflows.
A: Use n, not 100.
public static int factorial (int n){
int result = 1;
for (int i = 2; i <= n; i++) { // <-- n, not 100.also, x*1=x
result *= i;
}
return result;
}
Of note, is that int would overflow at 100! so you could use a BigInteger like
public static BigInteger factorial(int n) {
BigInteger result = BigInteger.ONE;
for (int i = 2; i <= n; i++) { // <-- n, not 100.also, x*1=x
result = result.multiply(BigInteger.valueOf(i));
}
return result;
}
A: The correct logic would be
public static BigInteger factorial(int n){
    BigInteger result = BigInteger.ONE;
for (int c=1; c<=n; c++){
result =result.multiply(BigInteger.valueOf(c));
}
return result;
}
Using BigInteger makes sense because the result would exceed the int range.
A: Integer range problem... for small 'n' you can use long (for 'result'), for larger ones - double.
| |
doc_23534688
|
A: Hungarian notation or not, I'm more curious if people prepend m_ or _ or whatever they use for standard private member variables.
A: This might be counter-intuitive for some, but we use the dreaded Hungarian notation for UI elements.
The logic is simple: for any given data object you may have two or more controls associated with it. For example, you have a control that indicates a birth date on a text box, you will have:
*
*the text box
*a label indicating that the text box is for birth dates
*a calendar control that will allow you to select a date
For that, I would have lblBirthDate for the label, txtBirthDate for the text box, and calBirthDate for the calendar control.
I am interested in hearing how others do this, however. :)
A: I personally prefix private objects with _
Form controls are always prefixed with the type; the only reason I do this is because of IntelliSense. With large forms it becomes easier to "get a label's value" by just typing lbl and selecting it from the list ^_^ It also follows the logic stated by Jon Limjap.
Although this does go against Microsoft's .NET Coding Guidelines; check them out here.
A: For me, the big win with the naming convention of prepending an underscore to private members has to do with Intellisense. Since underscore precedes any letter in the alphabet, when I do a ctrl-space to bring up Intellisense, there are all of my _privateMembers, right at the top.
Controls, though, are a different story, as far as naming goes. I think that scope is assumed, and prepending a few letters to indicate type (txtMyGroovyTextbox, for example) makes more sense for the same reason; controls are grouped in Intellisense by type.
But at work, it's VB all the way, and we do mPrivateMember. I think the m might stand for module.
A: I came through VB and have held onto the control type prefix for controls. My private members use lower-camel case (firstLetterLowercase) while public members use Pascal/upper-camel case (FirstLetterUppercase).
If there are too many identifiers/members/locals to have a 90% chance of remembering/guessing what it is called, more abstraction is probably necessary.
I have never been convinced that a storage type prefix is useful and/or necessary. I do, however, make a strong habit of following the style of whatever code I am using.
A: I don't, but I appreciate your logic. I guess the reason most people don't is that underscores would look kind of ugly in the Properties window at design time. It'd also take up an extra character of horizontal space, which is at a premium in a docked window like that.
A:
Hungarian notation or not, I'm more curious if people prepend m_ or _ or whatever they use for standard private member variables.
Luke,
I use _ prefix for my class library objects. I use Hungarian notation exclusively for the UI, for the reason I stated.
A: I never use underscores in my variable names. I've found that anything besides alpha (sometimes alphanumeric) characters is excessive unless demanded by the language.
A: I'm in the Uppercase/Lowercase camp ("title" is private, "Title" is public), mixed with the "hungarian" notation for UI Components (tbTextbox, lblLabel etc.), and I am happy that we do not have Visual Case-Insensitive-Basic developers in the team :-)
I don't like the underscore because it looks kind of ugly, but I have to admit it has an advantage (or a disadvantage, depending on your point of view): in the debugger, all the private variables will be on top due to the _ being at the top of the alphabet. But then again, I prefer my private/public pair to be together, because that allows for easier debugging of getter/setter logic as you see the private and public property next to each other.
A: I write down the name of the database column they represent.
A: I use m_ for member variables, but I'm increasingly becoming tempted to just using lowerCamelCase like I do for method parameters and local variables. Public stuff is in UpperCamelCase.
This seems to be more or less accepted convention across the .NET community.
| |
doc_23534689
|
I spent hours looking for this. Nothing works :(
What I tried so far:
*
*use different loaders (useLoader(GLTFLoader, url) / useGLTF(url)) and some more
*wrap the component in a next/dynamic component / don't do it
*solve the errors related to Suspense not being supported by installing Next with React 18
*tried this starter template
*use three-stdlib
*tried to write a custom loader in next.config.js
*read every issue and forum post I could find on this issue
The error i get at the moment is:
Server Error
Error: Could not load <url> response.body.getReader is not a function
with a component looking like this:
import React from 'react'
import { useGLTF } from '@react-three/drei'
import { Canvas, } from '@react-three/fiber'
import { Suspense } from 'react/cjs/react.production.min';
export default function Spinner({ ...props }) {
const model = useGLTF("http://localhost:3000/spinner.glb")
return (
<Suspense fallback={"loading"}>
<Canvas
camera={{ position: [1, 1, 1] }}
>
<primitive object={model.scene} />
<color attach="background" args={["hotpink"]} />
</Canvas>
</Suspense>
)
}
package.json:
{
"dependencies": {
"@react-three/drei": "^7.27.3",
"@react-three/fiber": "^7.0.21",
"axios": "^0.24.0",
"next": "^12.0.7",
"react": "^18.0.0-beta-24dd07bd2-20211208",
"react-dom": "^18.0.0-beta-24dd07bd2-20211208",
"three": "^0.135.0",
"three-stdlib": "^2.6.1"
},
"devDependencies": {
"eslint": "8.4.1",
"eslint-config-next": "12.0.7",
"file-loader": "^6.2.0"
}
}
node-version:
16 LTS
A: Wrapping your Model component with the parent and using lazy import solves the issue, e.g.
Model component
import React from 'react'
import { useGLTF } from '@react-three/drei'
export default function Model() {
const model = useGLTF("http://localhost:3000/spinner.glb")
return (
<primitive object={model.scene} />
)
}
Scene component with lazy() import
import { lazy, Suspense } from 'react'
import { Canvas, } from '@react-three/fiber'
const ModelComponent = lazy(() => import("./model"));
export default function Spinner({ ...props }) {
return (
<Suspense fallback={"loading"}>
<Canvas
camera={{ position: [1, 1, 1] }}
>
<ModelComponent />
<color attach="background" args={["hotpink"]} />
</Canvas>
</Suspense>
)
}
This seems to be related to SSR. Similar problems occur with TextureLoaders in Next; I was having a similarly hard time fixing it and eventually found this solution with the lazy() import. I had just tried that for the model load and it works fine. I can't track down the original thread right now, but will try to find it and add it here.
A: In Next.js you don't need to use the Suspense component.
Use the useTexture hook from @react-three/drei instead of loading using useLoader.
This example code loads the model with texture.
import React from "react";
import { useTexture } from "@react-three/drei";
function Box() {
const colorMap = useTexture("/img/robot.png");
return (
<mesh rotation={[90, 0, 20]}>
<boxBufferGeometry attach="geometry" args={[2, 2, 2]} />
<meshNormalMaterial attach="material" />
</mesh>
);
}
export default Box;
A: What worked in my case is:
import React from 'react'
import { useGLTF } from '@react-three/drei'
import Spinner from "@/public/spinner.glb"
export default function Model () {
const glb = useGLTF(Spinner.src)
return (
    <primitive object={glb.scene} />
)
}
| |
doc_23534690
|
[
{
"key": "ALL POS",
"color": "#39a5cf",
"values": [
{
"x": "4/01/2012",
"y": 54,
"series": 0
}
]
},
{
"key": "MIX POS",
"color": "#2227f4",
"values": [
{
"x": "4/01/2012",
"y": 34,
"series": 1
}
]
},
{
"key": "PURE POS",
"color": "#9fa9f7",
"values": []
}
]
You can see the PURE POS series doesn't have values compared to the other two. Because of this the stacked effect is not working. Can someone help me with this?
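A hedged sketch of the usual fix (padSeries is an assumed helper name, not an nvd3 API): stacked charts need every series to carry the same x values, so pad each series with y = 0 entries for any x it is missing before passing the data to the chart.

```javascript
// Give every series the same x values; missing points become { x, y: 0 }
// so the stack lines up instead of breaking on the empty series.
function padSeries(data) {
  const allX = [...new Set(data.flatMap(s => s.values.map(v => v.x)))];
  return data.map((s, i) => ({
    ...s,
    values: allX.map(x => s.values.find(v => v.x === x) || { x, y: 0, series: i }),
  }));
}

const padded = padSeries([
  { key: 'ALL POS',  color: '#39a5cf', values: [{ x: '4/01/2012', y: 54, series: 0 }] },
  { key: 'MIX POS',  color: '#2227f4', values: [{ x: '4/01/2012', y: 34, series: 1 }] },
  { key: 'PURE POS', color: '#9fa9f7', values: [] },
]);
console.log(padded[2].values); // [ { x: '4/01/2012', y: 0, series: 2 } ]
```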
| |
doc_23534691
|
Started GET "/" for 46.38.178.9 at 2018-01-07 11:50:45 +0000
I, [2018-01-07T11:50:45.044625 #25767] INFO -- : Processing by HomeController#index as */*
I, [2018-01-07T11:50:45.058432 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.6ms)
I, [2018-01-07T11:50:45.059066 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.059719 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.060345 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.061015 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.6ms)
I, [2018-01-07T11:50:45.061626 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.062260 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.062873 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.063504 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.064115 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.064763 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.065389 #25767] INFO -- : Rendered shared/analytics/_impression.js.erb (0.5ms)
I, [2018-01-07T11:50:45.065468 #25767] INFO -- : Rendered shared/analytics/_impressions_multiple.js.erb (7.8ms)
I, [2018-01-07T11:50:45.073834 #25767] INFO -- : Rendered shared/_review_banners.html.erb (6.2ms)
I, [2018-01-07T11:50:45.081136 #25767] INFO -- : Rendered shared/_tour_card.html.erb (7.0ms)
I, [2018-01-07T11:50:45.087936 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.6ms)
I, [2018-01-07T11:50:45.094084 #25767] INFO -- : Rendered shared/_tour_card.html.erb (5.9ms)
I, [2018-01-07T11:50:45.100248 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.0ms)
I, [2018-01-07T11:50:45.107051 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.6ms)
I, [2018-01-07T11:50:45.113837 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.6ms)
I, [2018-01-07T11:50:45.120657 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.6ms)
I, [2018-01-07T11:50:45.128780 #25767] INFO -- : Rendered shared/_tour_card.html.erb (7.9ms)
I, [2018-01-07T11:50:45.135850 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.9ms)
I, [2018-01-07T11:50:45.141214 #25767] INFO -- : Rendered shared/_tour_card.html.erb (5.2ms)
I, [2018-01-07T11:50:45.148157 #25767] INFO -- : Rendered shared/_tour_card.html.erb (6.8ms)
I, [2018-01-07T11:50:45.153795 #25767] INFO -- : Rendered shared/_tour_card.html.erb (5.3ms)
I, [2018-01-07T11:50:45.153877 #25767] INFO -- : Rendered home/_best_sellers.html.erb (79.9ms)
I, [2018-01-07T11:50:45.154567 #25767] INFO -- : Rendered home/_categories.html.erb (0.6ms)
I, [2018-01-07T11:50:45.155882 #25767] INFO -- : Rendered shared/_testimonials.html.erb (0.3ms)
I, [2018-01-07T11:50:45.156036 #25767] INFO -- : Rendered shared/usp/_generic.html.erb (0.0ms)
I, [2018-01-07T11:50:45.156200 #25767] INFO -- : Rendered home/index.html.erb within layouts/application (110.0ms)
I, [2018-01-07T11:50:46.299424 #25767] INFO -- : Rendered shared/_tour_links.html.erb (12.0ms)
I, [2018-01-07T11:50:46.299538 #25767] INFO -- : Rendered shared/_header.html.erb (1140.6ms)
I, [2018-01-07T11:50:46.313482 #25767] INFO -- : Rendered shared/_tour_links.html.erb (13.5ms)
I, [2018-01-07T11:50:46.318732 #25767] INFO -- : Rendered shared/_footer.html.erb (19.0ms)
I, [2018-01-07T11:50:46.319224 #25767] INFO -- : Completed 200 OK in 1275ms (Views: 135.7ms | ActiveRecord: 1137.7ms | Solr: 0.0ms)
Is there any way for me to add the sql queries to the production log file as they are shown in the development console? i.e.
FooterLink Load (1.1ms) SELECT "navigation_links".* FROM "navigation_links" WHERE "navigation_links"."location" = $1 AND "navigation_links"."column" = $2 [["location", 0], ["column", 3]]
Or alternatively is there any way to 'view them live' like you do when making queries in development? All I can find after googling is how to change where the log is stored and the STDOUT but nothing on query logging.
Thanks in advance
A: Just change your log level. You can change it to debug mode. http://guides.rubyonrails.org/debugging_rails_applications.html#log-levels
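Concretely, that is one line in the environment config (a sketch of the standard Rails setting; :debug is the level development uses, which is why you see the SQL there):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # :debug logs every SQL statement; production defaults to :info
  config.log_level = :debug
end
```

To "view them live" you can then follow the file with tail -f log/production.log. Note that logging every query at :debug in production can be noisy and may leak sensitive values into the log.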
| |
doc_23534692
|
My web service, when it starts up, calls a static class to load my shared library via System.loadLibrary("mylib"). There are no issues with this. My library is only loaded once and there are no exceptions. I've checked that the path to my library is in the java.library.path property - as expected since loadLibrary worked.
The resultant SWIG files work with a test driver outside of Eclipse - that is we can access native functions in the library. This makes me believe that the function names created by SWIG and the native mapping are being done correctly. However, when using Eclipse to create and deploy the web service EAR to JBOSS we get an UnsatisfiedLinkError complaining that a C++ constructor from my library cannot be found.
SWIG line of my makefile
swig -c++ -java -package my.package -outdir java/my/package -Isrc -Iinc -o src/MODULE_wrap.cpp src/MODULE.i
MODULE.i
/* File : MODULE.i */
%module MODULE
%include "arrays_java.i"
%apply double[] {double *};
%{
#include "MyCPPHeader.hh"
%}
%include "MyCPPHeader.hh"
My creation of the shared objects:
g++ -fPIC -c -Idir1 -I. -Idir2 -Idir3 -Idir4 -Idir5 -O2 -DMACRO_DEF src/MyCPPSource.cpp -o obj/MyCPPSource.osh
g++ -fPIC -c -Idir1 -I. -Idir2 -Idir3 -Idir4 -Idir5 -O2 -DMACRO_DEF src/MODULE_wrap.cpp -o obj/MODULE_wrap.osh
My creation of the library
g++ -shared -L/usr/lib/x86_64-redhat-linux5E/lib64 obj/MyCPPSource.osh obj/MODULE_wrap.osh -o lib/libmylib.so
SWIG .JAR
If I do a jar -tf of the jar containing the SWIG generated JAVA files all of the necessary classes in the appropriate package directory structure are there. Eclipse/JBOSS don't complain about any of these sources - only when it attempts to find the call labeled as "native" in the JNI class.
The java class that loads the library. It lives in JBoss'_Home/server/default/lib within a JAR
public class LibLoader {
static
{
try
{
System.loadLibrary("mylib");
System.out.println("Loaded " + System.mapLibraryName("mylib"));
}
catch (java.lang.UnsatisfiedLinkError e)
{
System.out.println("Got unsat link error " + e.getMessage());
}
catch (Exception e)
{
System.out.println("Got exception " + e.getMessage());
}
}
public LibLoader() {}
}
How that LibLoader class is used from my web service - which is implemented as a java singleton:
public static WebService getInstance()
{
// loads library once outside the service
if (m_libLoader == null)
m_libLoader = new LibLoader();
// Create the singleton instance
if (m_instance == null)
m_instance = new WebService();
// Create an object as defined in my library and accessed with JNI
if (m_libraryObject == null)
{
// THIS IS WHERE I GET THE UnsatisfiedLinkError!!!!!
m_libraryObject = new MyObjectFromSharedLibrary();
}
return m_instance;
}
A: SOLVED
Instead of bundling the SWIG JNI wrapper JAR with my EAR, I put it in the same directory on the server as my LibLoader JAR and, lo and behold, it worked.
| |
doc_23534693
|
for url in urls:
r = requests.get(url, allow_redirects=False)
soup = BeautifulSoup(r.content, 'lxml')
words = soup.find_all("td", text=the_word)
print(words)
print(url)
I don't know much. Could anybody please show me how to search for the substrings too?
A: You can use a custom function to check if the word is present in the text.
html = '''
<td>the keyword is present in the text</td>
<td>the keyword</td>
<td></td>
<td>the word is not present in the text</td>'''
soup = BeautifulSoup(html, 'lxml')
the_word = 'keyword'
tags = soup.find_all('td', text=lambda t: t and the_word in t)
print(tags)
# [<td>the keyword is present in the text</td>, <td>the keyword</td>]
Usually only the_word in t would work. But, if there are any <td> tags that don't have any text, as shown in the example (<td></td>), using the_word in t would raise a TypeError: argument of type 'NoneType' is not iterable. That's why we first have to check if text is not None. Hence the function lambda t: t and the_word in t.
If you are not comfortable with lambdas, you can use a simple function which is equivalent to the one above:
def contains_word(t):
return t and 'keyword' in t
tags = soup.find_all('td', text=contains_word)
A: There is no way to do this directly. The only way I can think of is to put all of the text from the 'td' tags into a data structure such as a list or dictionary and test it there.
| |
doc_23534694
|
1. products fields
   1. product_id
   2. product_title
   3. product_desc
   4. product_img
2. keywords fields
   1. id
   2. keyword
keywords will be
1. hair
2. body
3. tv
4. mobile
product_title will be
1. Hair oil
2. Hair Straightner
3. Body oil
4. Body massage
5. LCD TV 32"
6. LED TV 40"
7. Air conditioner
8. Washing machine
9. Refrigrator
According to the keyword present in the title it has to show the listing; here it has to show products 1 to 6 and should not show 7 to 9. How can I do it?
A: IMHO, The best thing to do is add a table productsToKeywords that will allow you many to many relationship between the products and Keywords tables.
create table productsToKeywords
(
ptk_product_id int foreign key references products (product_id),
ptk_keyword_id int foreign key references keywords (id),
primary key(ptk_product_id, ptk_keyword_id)
)
Then your select would look something like this:
select product_id, product_title, product_desc, product_img
from products
inner join productsToKeywords on(product_id = ptk_product_id)
inner join Keywords on(ptk_keyword_id = id)
where keyword = 'hair'
A: You can try this:
select p.* from Product p
inner join Keyword k
on p.product_title like '%' + k.keyword + '%'
If it is possible to have empty values in the keyword or title fields, then you need to add a condition to ignore such records too. Otherwise they would match everything.
A: I suggest you to use EXISTS like this:
SELECT *
FROM products p
WHERE EXISTS( SELECT 1
FROM keywords k
WHERE p.product_title Like '%' + k.keyword + '%')
| |
doc_23534695
|
I am able to do this by reading a regular text file stored in res/raw using the following code:
InputStream is = context.getResources().openRawResource(R.raw.my_text_file);
But no clue how to do the same for an .ods file.
I searched through SOF & found a reference to jOpenDocument . But they talk about libraries that are not part of the android SDK & I don't know what to do with these.
Any help is appreciated!
A: If it's imperative that the file be in ODS format, which is similar to an XML format, you can parse it yourself. Check out the following link.
http://www.go4expert.com/forums/showthread.php?t=19110
Otherwise, may I suggest converting it to a CSV first? CSV means comma-separated values, so it uses an even simpler syntax where each row is separated by a newline and each column in a row by a comma. For that you can use this code to get each line:
http://www.java2s.com/Code/Java/Development-Class/SimpledemoofCSVmatchingusingRegularExpressions.htm
A: FYI, you can import SOME external JAR libraries into your android project.
A: JODF supports Android 2.2+ and Java 1.5+. It is a Java API for the Open Document Format.
| |
doc_23534696
|
I am getting this error:
go get gopkg.in/goracle.v2
# gopkg.in/goracle.v2
.go/src/gopkg.in/goracle.v2/orahlp.go:271: undefined: driver.Pinger
Can anyone please suggest anything that works with Go 1.7 without the client installed?
Regards,
Sheetal
| |
doc_23534697
|
When they click allow I want the multi-friend selector to open so they can invite friends to the site. I need the permissions dialog first so I can get the user_ids of the users the person invited.
Is there a way to open the multi-friend selector when the user clicks allow?
A: Hi, when the user authenticates your application, they log in to your site. You can capture the login event and send them to another URL where you can put the friend selector.
FB.Event.subscribe("auth.login", function(response) {
//send to some url(file) and put there friend selector
});
| |
doc_23534698
|
Parcelable encountered IOException writing serializable object (name = com.braden.android.fragments.ListItemFragment$6)
...
Caused by: java.io.NotSerializableException: com.braden.android.fragments.ListItemFragment
To do the callback I used a fairly standard callback interface pattern. The interface extends Serializable. Here's the code for my callback:
private void displayFilter() {
FilterCategoryDialogFragment filterCategoryDialogFragment = new FilterCategoryDialogFragment();
Bundle bundle = new Bundle();
mOnFilterClickListener = new OnFilterClickListener() {
@Override
public void onCategoryClickListener(String filterName) {
updateVenues(mFilter);
}
};
bundle.putSerializable("listenerFilter",
mOnFilterClickListener);
filterCategoryDialogFragment.setArguments(bundle);
filterCategoryDialogFragment.show(getFragmentManager(), DIALOG_CATEGORY_FILTER);
}
This seems to have something to do with using an anonymous inner class that implements Serializable, so I'm wondering:
1) Why is it that I'm only receiving this exception when I use SearchView and not when I perform an action to send back data via callback or simply click out of the dialog.
2) Is there a workaround here or is this just a bad pattern for me to use.
A: I found the answer to this question here: Callback to a Fragment from a DialogFragment
The key is the "setTargetFragment" method, which allows you to tell a fragment which fragment to send its result to. This allows you to avoid having to serialize an interface reference for the callback.
A: All fields of the class must be serializable, otherwise you get a NotSerializableException.
If you check the exception stack you will be able to find the object which was not serialized.
| |
doc_23534699
|
// Get rankings JSON file from thebluealliance.com
string TBArankings = @"https://www.thebluealliance.com/api/v2/district/ont/2017/rankings?X-TBA-App-Id=frc2706:ONT-ranking-system:v01";
var rankings = new WebClient().DownloadString(TBArankings);
string usableTeamNumber = "frc" + teamNumberString;
string team_key = "";
int rank = 0;
dynamic arr = JsonConvert.DeserializeObject(rankings);
foreach (dynamic obj in arr)
{
team_key = obj.team_key;
rank = obj.rank;
}
int index = Array.IndexOf(arr, (string)usableTeamNumber); // <-- This is where the exception is thrown.
Console.WriteLine(index);
// Wait 20 seconds
System.Threading.Thread.Sleep(20000);
Here's the json file I'm using.
I've tried multiple different solutions, none of which worked.
A: You could just keep the index in a variable.
string usableTeamNumber = $"frc{teamNumberString}";
string team_key = "";
int rank = 0;
int index = 0;
int count = 0;
dynamic arr = JsonConvert.DeserializeObject(rankings);
foreach (dynamic obj in arr)
{
team_key = obj.team_key;
rank = obj.rank;
    if (usableTeamNumber.Equals(team_key)) {
index = count;
}
count++;
}
Console.WriteLine(index);
A: Create a class that mimics your data structure, like such (only has 3 of the root fields):
public class EventPoints
{
public int point_total { get; set; }
public int rank { get; set; }
public string team_key { get; set; }
}
Then you can Deserialize the object into a list of those objects and you can use LINQ or other tools to query that list:
string teamNumberString = "frc2056";
string TBArankings = @"https://www.thebluealliance.com/api/v2/district/ont/2017/rankings?X-TBA-App-Id=frc2706:ONT-ranking-system:v01";
var rankings = new WebClient().DownloadString(TBArankings);
List<EventPoints> eps = JsonConvert.DeserializeObject<List<EventPoints>>(rankings);
EventPoints sp = eps.Where(x => x.team_key.Equals(teamNumberString)).FirstOrDefault();
Console.WriteLine(eps.IndexOf(sp));
Console.ReadLine();
|