Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
How to sum currency items of an array
I want to sum items of an array. These items are currency. However, it has problem when parse to NSNumber from String. Please see my code below
NSString *cash;
cash = self.textfield.text; // textfield has format: currencyFormatter
NSNumberFormatter * number = [[NSNumberFormatter alloc] init];
[number setNumberStyle:NSNumberFormatterCurrencyStyle];
NSNumber * myNumber = [number numberFromString:cash]; // convert into number from string
NSLog(@"myNumber:%@",myNumber);
NSMutableArray *tmp_cash;
[tmp_cash addObject:myNumber]; // add object to Array
long long sum = ((NSNumber*)[tmp_cash valueForKeyPath: @"@sum.longLongValue"]).longLongValue; // sum items of array
However, app crashes and show log myNumber is null. Please help me fix this bug
When you say "textfield has format" do you mean that cash is not just a plan number, or that you have a formatter for it? Can you give an example of what cash could be?
ex: I enter data from keyboard: 200, textfield will show $2.00. And cash is 2.00 because i use :cash = [cash substringFromIndex:1]; to remove first character. Sr for my bad english
What cash has after cash = self.textfield.text;? Take a breakpoint and look what inside.
@user3525058 use that same currencyFormatter you use to display the number, to read it back in as a NSNumber
Use the same currencyFormatter you used to display the number in the textField. Change currencyFormatter to be a property, and lazily load it, to make sure its always initialized:
@property (nonatomic, strong) NSNumberFormatter *currencyFormatter;
-(NSNumberFormatter *)currencyFormatter
{
if (!_currencyFormatter) {
_currencyFormatter = [NSNumberFormatter new];
// set up other options
}
return _currencyFormatter;
}
Now you can use that to parse the text in the UITextField:
NSString *cash;
cash = self.textfield.text; // textfield has format: currencyFormatter
NSNumber * myNumber = [self.currencyFormatter numberFromString:cash]; // convert into number from string
NSLog(@"myNumber:%@",myNumber);
Thanks Rich. But it does not still work. It still show myNumber is null
@duyklinsi as previously asked can you update your answer with the NSLog output of cash :) But before you do anything to it. And also please add the code for creating your currencyFormatter too!
I added the code for currencyFormatter. This is my log:2014-04-27 18:39:06.574 MoneyManagement[1696:c07] cash:222,255.55
2014-04-27 18:39:06.575 MyProject[1696:c07] myNumber:(null)
2014-04-27 18:39:06.576 MyProject[1696:c07] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'
@duyklinsi I don't see any more code? And what does the following output NSLog(@"CASH: %@", self.textfield.text);, if you put that directly after cash = self.textfield.text;
I have tried to change [number setNumberStyle:NSNumberFormatterCurrencyStyle]; to [number setNumberStyle:NSNumberFormatterDecimalStyle];
. It works. However myNumber is 222255.55 although the value of Cash is 222,255.55
As I shown above. This is Nslog for Cash: 2014-04-27 18:39:06.574 MyProject[1696:c07] cash:222,255.55
The group separators (the ,s) are for display purposes only, NSNumber doesn't care about them
|
STACK_EXCHANGE
|
A Vision for DigiByte’s Rosetta Implementation: A Launchpad for Universal Development and Collaboration on the DigiByte Blockchain
October 27, 2022 – The DigiByte Alliance continues to support the DigiByte developer community’s adoption of Rosetta as a platform to provide a more robust foundation that lowers the barrier to entry for developing on the DigiByte blockchain. The DigiByte core developers have completed the review of the code that First Bridge submitted in the Spring of 2022 and the implementation of Rosetta on DigiByte is now a developer-ready platform.
DigiByte Alliance Announces the Formation of an Education Committee
October 18, 2022 – The mission of the DigiByte Alliance Education Committee is to broaden the public’s awareness and understanding of the DigiByte blockchain in order to accelerate its growth and adoption through the development and delivery of educational resources to its community, the public, and educational institutions. The Educational Committee of the DigiByte Alliance intends to close the blockchain awareness gap and to open the doors to future financial literacy and inclusion through education, empowerment and action.
Response to Community Questions
DigiByte Alliance Formed in Wyoming to Accelerate Innovation of the DigiByte Blockchain
December 8, 2021 – Dedicated members of the DigiByte blockchain community announced today the formation of the DigiByte Alliance, a groundbreaking philanthropic approach to creating economic support for the decentralized blockchain. The mission of the DigiByte Alliance is to steward the acceleration and innovation of the DigiByte blockchain by focusing on raising funds for development, maintenance, and education. Charitable contributions to the foundation can be made in both cryptocurrency and fiat currency.
Journey to the Creation of the DigiByte Alliance
December 8, 2021 – Creation of the DigiByte Alliance is the result of efforts of several long standing DigiByte community members who wanted to create a platform through which individuals, institutions, corporations, and governments could come to better understand and interact with DigiByte blockchain technology. We understood that while the technology provided many superior features, in order to help grow its adoption and support its continued development, some type of formal structure was necessary to raise funds to implement and facilitate the goal of making DigiByte universally known as a utility to express and transfer value-a tool of value exchange to be used by the people, for the people world-wide.
DigiByte is one of the most decentralized UTXO open-source blockchains in existence. DigiByte never held an Initial Coin Offering (ICO) and was fairly launched on January 10, 2014 with just a 0.5% premine to pay for early development and to incentivize the community to utilize and fortify the network. It is permissionless without any centralized authority and its blockchain is completely open-source, released under the MIT license which gives you the power to run and modify the software. DigiByte is a Bitcoin derivative with a completely independent blockchain.
DigiByte focuses on speed, security, and scalability. Blocks occur every 15 seconds and due to SegWit implementation, can enable up to 1066 on-chain transactions per second with negligible fees. DigiByte uses 5 cryptographic consensus algorithms and real time difficulty adjustment to prevent malicious mining centralization and hash power fluctuation.
DigiByte has a maximum supply of 21 billion. Compared to 21 million Bitcoin, 21 billion DigiByte have been designed to be ready for mass adoption. The block rewards reduces by 1% every month instead of halving every 4 years. All 21 billion DigiByte will be mined by the year 2035. After 2035 the miners will then rely on transaction fees alone.
If you would like to help support the DigiByte network, we welcome you to our Gitter, GitHub,
Discord and Telegram groups. We also encourage you to follow the DigiByte Alliance on Twitter
|
OPCFW_CODE
|
MailArchiva Service Provider Tour
MailArchiva Multitenant (MT)
MailArchiva MT brings hosted email archiving, discovery and compliance services to your customer base. In doing so, your product portfolio is broadened and new revenue stream opportunities are available.
Email Archiving In The Cloud
Using the ISP Edition, each customer gets their very own light weight instance of MailArchiva. Thus, each customer instance is completely isolated from all the other instances running within the ISP Edition. There are numerous benefits to this approach, not least of which, is that it is virtually impossible for one customer to access another customer's data.
All MailArchiva configuration settings, such as defining an encryption password used for securing volume data, is customizable by the end-customer.
Alternatively, using MailArchiva's role based access control, ISP's have complete flexibility over the level of control they wish to grant the customer.
Central Management and Control
While each customer gets their very own light weight instance of MailArchiva, all applications are conveniently managed using one centralized user interface.
Convenient Application ManagementNo need to login to each customer instance individually for management and monitoring purposes! All instances are easily configured and monitored directly from the ISP Edition product.
Greater ControlIndividual management of application gives you fine grained control over your deployment. Customer instances can be independently created, deployed, upgraded, stopped and started. Wish to bring a customer's archiving offline temporarily? No problem. Just, hit the application stop button.
Lightweight ArchitectureAll applications under management by the MailArchiva ISP Edition, run on the same app server and Java Virtual Machine (JVM). Furthermore, all MailArchiva applications share the same heap space (memory).
Hosted Archiving & Discovery
ArchivingTypically, mail can either be pushed via SMTP or Milter protocols, or polled over the Internet from a temporary journal account using the IMAP or POP protocol. In most cases, the setup is a snap and has minimal impact on the customer environment.
AuthenticationUsing most web browsers, users can login and search the contents of their remote email archive. Furthermore, MailArchiva is capable of securely authenticating users residing in the customer's Active Directory or LDAP server over the Internet.
Outlook AccessEven though the customer's email archive resides in a remote location, it is still possible for end-users to acess their from within the Outlook interface. This is made possible by the fact that MailArchiva supports NTLM v3 single-sign-on over the Internet.
Pay-As-You-Go Billing & Licensing
Flexible BillingWe offer a pay-as-you-go billing model that limits your exposure to financial risk. Simply put - billing works on per mailbox per month basis, with no upfront commitment!
At month end, you will receive an automated invoice containing the mailbox usage counts for each of your instances. This invoice can be paid automatically or manually via credit card or US check. The invoice you receive can be used as the basis for billing your customer.
Custom BrandingSubject to certain conditions, such as the preservation of copyright notices, ISP's are permitted to market their email archiving service under their own brand.
|
OPCFW_CODE
|
In case you have enough time and are interested in Finding out far more on a particular topic, look through their classes and sign up!
WeDoWebApps give immaculately charismatic services in the global domains relating to IT services and actual company remedies. As an IT services supplier, the Group strives to provide graphics, wireframe style and design, prototyping, website layout and development, cell apps development, UX/UI, Search engine optimization for company purchasers or specific trying to find large-conclusion Skilled and honest web sites in addition to publications.
We create and design remarkably responsive websites for any sort of small business and Search engine optimisation them to make sure you get noticed by your prospective customers.
affilioogle.com is a web-based small business school where you learn through the scratch the way to setup and operate an internet based company.
no matter whether you're a full newbie, an bettering golfer, or an expert looking for a examination, there's a golf system in your case at stonelees, located in ramsgate on the beautiful kent coast.
The theory powering the Expresit application is always to "celebrate publicly, take care of privately." Great assessments on employees, co-personnel, services or merchandise get posted publicly about the business enterprise's website page, but any damaging evaluations get despatched suitable towards the small business proprietor to ensure any concerns are introduced appropriate to their attention and might be set privately.
WIT-TEE welcomes Every person; regardless of whether a toddler wants to put their artistic creation on the tee, to designers, artists & photographers who develop artwork and pictures and need to breed them faithfully on to attire.
Vimeo is the place cinematography genuinely comes to lifetime. If you wish to check out something which will blow you absent, commence employing Vimeo.
Sarjan Infotech is often a escalating Sydney primarily based web design and software development business and we deliver award winning excellence on the internet. We've got experience in acquiring a variety of projects in many industries.
All our Website design is done in property by our devoted web design staff members; each Net designer specials specifically Along with the consumer and styles the website based on unique customer specifications.
Manta is a small small business directory that can help community American enterprises connect with their buyers and one another. While testimonials are usually not the main function of the site, one of many capabilities is allowing clients to go away critiques in order for a small organization to handle its on the net status.
Mobile : 0411390454 I am hunting mebsites github respository ahead to acquire a favourable response out of your end wherever we can Build a Long run Enterprise Romantic relationship. Get in contact with us using any from the contact choices inside the signature underneath.
More often than not our customers are unaware in the strategies, platforms and specialized alternatives ideal suited to their exceptional process. So the top position to get started on is by having in touch for A fast (or extended) chat about your demands.
Now we have a dedicated group which can help you at just about every move. We guarantee our services and hardly ever disappoint our shopperèle.
|
OPCFW_CODE
|
Updated Alamofire extension for AF 5
AF 5 includes several API and behavior changes that break Siesta.
The extension for Alamofire 4 and previous is now preserved in Extensions/Alamofire-4.
Hi Paul,
First off, sorry to post this here – it's not related to your PR, only prompted by it – but Github lacks a general discussion section and I didn't think raising an issue was right either. Perhaps we can move it elsewhere.
So great to see that work on Siesta continues. I just happened to look in here today on the off chance that you were finding some time during your lock-in (if you have one – here in New Zealand we do), and here you are.
Siesta's been on my mind lately, partly because I'm handing over a project to a developer who's new to Siesta, and giving the requisite explanations, and watching him come to grips with it. (He's an excellent developer and is doing well.)
Also though because I have some free time coming up, and I've always thought Siesta deserves to be wildly more popular than it is. (Hard to gauge popularity of course, but I'd have expected to see more about it in the community. Perhaps I have the wrong impression. Interested to hear your thoughts.)
If I'm right, I guess there could be a couple of reasons for that:
Steep learning curve: Siesta requires you to think in a different way and takes some time to get your head around. Its value won't immediately obvious to all, and then it takes some commitment and a certain level of developerly ability to adopt it. I don't think this is any sort of failing – the docs are excellent; it's just the nature of things.
Actively promoting stuff might not be a very interesting task.
I'd love to see Siesta be adopted by more people and for development to continue.
For my part I'm thinking I'll contribute my RxSwift extensions for you to absorb into the project as you see fit. I'll give it some thought and a bit of refinement first. I've been using Siesta in a variety of projects for a couple of years, and with RxSwift for a year. I really like the combination. The extensions have evolved over time as you'd expect.
I might well do the same for Combine at some point.
Somewhere in there I'll write about all this too. (I don't have social media reach as that hasn't been my thing, but am finally taking the time to blog. I figure it's all useful.)
Adrian
Hi Adrian! I don’t know where the best place for this conversation is either. I’m glad to hear that you’re still finding Siesta useful. Parenting and teaching keep it more on the backburner than I would like, but I do keep trying to improve it!
Also though because I have some free time coming up, and I've always thought Siesta deserves to be wildly more popular than it is.
Well, thanks! I think so too, but as you guessed, promoting can’t be a high priority for me: it’s not a source of income for me, and besides I’m not a promotion expert.
You’re right that it’s a steep learning curve — or maybe just a single big mindset adjustment, hard to see except in hindsight, much like the move from MVC to declarative rendering. I’ve always thought it would be nice to do a video where you walk through converting a traditional MVC-style project to Siesta, and the narrator says “you don’t have to do that!” and deletes gobs of code as they go.
For my part I'm thinking I'll contribute my RxSwift extensions for you to absorb into the project as you see fit.
I’d welcome this. Open a PR when you’re getting close, and mark it “WIP” if you want to discuss before merging. Good regression testing would be a priority, since I won’t be using it myself.
I might well do the same for Combine at some point.
That would be great. After I finally get “offline access out of the box” whipped into shape, I plan on doing building a SwiftUI example and maybe providing an extension. The two are a really good fit.
|
GITHUB_ARCHIVE
|
viewDidLayoutSubviews for Custom UITableViewCell
I want to animate a subview of a custom TableViewCell. To perform this animation, the cell needs the width of this subview, which is laid out by an auto-layout-constraint.
However, when I use the animation-function in the cellForRowAtIndex function (mycell.animate()), the width is 0, because the subviews are not laid out yet and the animation will not work.
In a regular view, I would use viewDidLayoutSubviews(), because then the view is laid out, I can get the width and perform the animation. However, what's the equivalent function for a custom UITableViewCell?
I tried the willDisplay delegate function of the TableView, but when I print out the width of the cells subview, it still says 0...
You probably have to create a subclass of UITableViewCell and then go from there.
hmm I actually have, sorry if that's been unclear
I know this is probably deeper than that but, have you tried assigning the width to the table cell in cellForRow:at:? cell.frame.size.width = tableView.frame.size.width
layoutSubviews inside the cell subclass might do the trick.
lol thanks @lu2302, it really works! Didn't think it can be that easy :)
The correct place is inside layoutSubviews:
class MyCell : UITableViewCell {
override func layoutSubviews() {
super.layoutSubviews()
// do your thing
}
}
Hi, I'm trying to use this approach while adding a gradient to a custom UITableViewCell. I call my applyGradient() function inside of layoutSubviews() My problem is that when the table loads, I see no gradient, but after I click the cell, the gradient will show. Any idea of why the gradient won't show before clicking? I've tried calling layoutSubviews() from different places in my code with no luck. @luk2302
@Chamanhm i've had your issue and the code above did it without the need to call the function, just put the gradient code within the function above in tableViewCell class :)
I've had a similar problem while setting shadow for a subview. Apparently, the layoutSubviews method doesn't get called after the view has been laid out by autolayout. (or it is returning 0 for some reason)
layoutSubviews() did not work in my case. After reuse of the cell, gradient frame was wrong, even though i removed gradient and apply it again in reuse.
Updating frame inside draw(_ rect: CGRect) solved my problem
dropping shadow also worked only from draw(_ rect: CGRect)
It will work if you animate your view inside draw function in tableViewCell
override func draw(_ rect: CGRect) {
super.draw(rect)
//Your code here
}
|
STACK_EXCHANGE
|
This example shows how to use the cross spectrum to obtain the phase lag between sinusoidal components in a bivariate time series. The example also uses the magnitude-squared coherence (MSC) to identify significant frequency-domain correlation at the sine wave frequencies.
Create the bivariate time series. The individual series consist of two sine waves with frequencies of 100 and 200 Hz embedded in additive white Gaussian noise and sampled at 1 kHz. The sine waves in the x-series both have amplitudes equal to 1. The 100 Hz sine wave in the y-series has amplitude 0.5 and the 200 Hz sine wave in the y-series has amplitude 0.35. The 100 Hz and 200 Hz sine waves in the y-series are phase-lagged by radians and radians respectively. You can think of y-series as the noise-corrupted output of a linear system with input x. Set the random number generator to the default settings for reproducible results.
rng default Fs = 1000; t = 0:1/Fs:1-1/Fs; x = cos(2*pi*100*t)+sin(2*pi*200*t)+0.5*randn(size(t)); y = 0.5*cos(2*pi*100*t-pi/4)+0.35*sin(2*pi*200*t-pi/2)+ ... 0.5*randn(size(t));
Obtain the magnitude-squared coherence (MSC) for the bivariate time series. The magnitude-squared coherence enables you to identify significant frequency-domain correlation between the two time series. Phase estimates in the cross spectrum are only useful where significant frequency-domain correlation exists.
To prevent obtaining a magnitude-squared coherence estimate, which is identically 1 for all frequencies, you must use an averaged MSC estimator. Both Welch's overlapped segment averaging (WOSA) and mulitaper techniques are appropriate. mscohere implements a WOSA estimator.
Set the window length to 100 samples. This window length contains 10 periods of the 100 Hz sine wave and 20 periods of the 200 Hz sine wave. Use an overlap of 80 samples with the default Hamming window. Input the sample rate explicitly to get the output frequencies in Hz. Plot the magnitude-squared coherence.
[Pxy, F] = mscohere(x, y, hamming(100), 80, 100, Fs); plot(F, Pxy) title('Magnitude-Squared Coherence') xlabel('Frequency (Hz)') grid
You see that the magnitude-squared coherence is greater than 0.8 at 100 and 200 Hz.
Obtain the cross spectrum of x and y using cpsd. Use the same parameters to obtain the cross spectrum that you used in the MSC estimate. Plot the phase of the cross spectrum and indicate the frequencies with significant coherence between the two times. Mark the known phase lags between the sinusoidal components.
[Cxy, F] = cpsd(x, y, hamming(100), 80, 100, Fs); plot(F, -angle(Cxy)/pi) title('Cross Spectrum Phase') xlabel('Frequency (Hz)') ylabel('Lag (\times\pi rad)') ax = gca; ax.XTick = [100 200]; ax.YTick = [-1 -1/2 -1/4 0 1/4 1/2 1]; grid
You see that, at 100 Hz and 200 Hz, the phase lags estimated from the cross spectrum are close to the true values.
In this example, the cross spectrum estimates are spaced at Hz. You can return the phase estimates at those frequency bins. Keep in mind that the first frequency bin corresponds to 0 Hz, or DC.
phi100 = -angle(Cxy(11)); phi200 = -angle(Cxy(21));
|
OPCFW_CODE
|
df.mean is not the real mean of the Series?
I'm debugging and run into the following strange behavior.
I'm calculating the mean of a pandas series which contains all exactly the same numbers. However, the pd.mean() gives a different number.
question1: why mean of this Series is a different number?
question2: tmm[-1]== tmm.mean() gives False now. Any way to ignore this small difference and make the results True? I don't prefer abs(tmm[-1]-tmm.mean()) < xxx
methods because not sure how to define xxx.
import pandas as pd
import decimal
tmm = pd.Series(14.9199999999999999289457264239899814128875732421875,
index=range(30))
for t in tmm:
print(decimal.Decimal(t))
print('mean is')
print(decimal.Decimal(tmm.mean()))
results:
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
14.9199999999999999289457264239899814128875732421875
mean is
14.9200000000000034816594052244909107685089111328125
floats are intrinsically prone to precision errors, that's why you get a different mean.
"I don't prefer abs(tmm[-1]-tmm.mean()) < xxx". Well, bummer then. You can't rely on float values being exactly equal.
Try using np.isclose()
tmm[20]== tmm.mean()
False
np.isclose(tmm[20], tmm.mean())
True
The answer to your 2 questions is basically this:
import pandas as pd
import decimal
tmm = pd.Series(decimal.Decimal(14.9199999999999999289457264239899814128875732421875),
index=range(30))
for t in tmm:
print(decimal.Decimal(t))
print('mean is')
print(decimal.Decimal(tmm.mean()))
Make sure you use decimal.Decimal constructor when you're creating tmm, that's pretty much.
|
STACK_EXCHANGE
|
Factories generate ID's outside the int32 range
I'm trying to seed my database with some random data using the factories bob generates from my schema and noticed that my script is failing due to the number faker generates is above the threshold for my ID field.
bob version: v0.23.2
jaswdr faker version: v1.19.1
Schema:
CREATE TABLE employee(
id SERIAL PRIMARY KEY,
first_name VARCHAR(60) NOT NULL,
);
// db setup excluded
bobDB := bob.New[*sql.DB](db)
dbFactory := factory.New()
seed := 42
source := rand.NewSource(seed)
myFaker := faker.NewWithSeed(source)
dbFactory.AddBaseEmployeeMod(factory.EmployeeMods.RandomizeAllColumns(&myFaker))
tmpl:= dbFactory.NewEmployee()
_, err = tmpl.CreateMany(ctx, bobDB, 10)
if err != nil {
log.Fatal("failed to create employees: ", err)
}
Error I receive:
failed to encode args[0]: unable to encode omit.Val[int]{value:608747136543856411, state:1} into binary format for int4 (OID 23): unable to encode<PHONE_NUMBER>43856411 into binary format for int4 (OID 23):<PHONE_NUMBER>43856411 is greater than maximum value for int4
Under the hood the RandomizeAllColumns will call the following bob code:
func (m employeeMods) RandomID(f *faker.Faker) EmployeeMod {
return EmployeeModFunc(func(o *EmployeeTemplate) {
o.ID = func() int {
return random[int](f)
}
})
}
Sure enough in my debugger If i inspect tmpl.ID() I get back<PHONE_NUMBER>43856411 which is larger than the max int32 (2,147,483,647)
which later causes the error in the bob library exec.goL114
rawSlice, err := scan.All(ctx, exec, m, sql, args...) // fails here
if err != nil {
return nil, err
}
I've managed to temporarily get around this issue by supplying my own mod that implements the EmployeeMod. So others can also benefit from this workaround I'll supply the solution below:
type EmployeeModFunc func(*factory.EmployeeTemplate)
func (f EmployeeFunc) Apply(n *factory.EmployeeTemplate) {
f(n)
}
// Generates Random int32
func RandomID(f *faker.Faker) factory.EmployeeMod {
return EmployeeModFunc(func(o *factory.EmployeeTemplate) {
o.ID = func() int {
return rand.Intn(1<<31 - 1)
}
})
}
Hope this helps anyone facing this issue, until a solution is found :)
So, the real issue is that the database driver should see the type of the column to int32 and not int
What codegen driver are you using?
Side note, to supply custom mods, you can write much less code by using one of the generated column mods.
dbFactory.AddBaseEmployeeMod(
factory.EmployeeMods.RandomizeAllColumns(&myFaker),
factory.EmployeeMods.IDFunc(func() int { return rand.Intn(1<<31 - 1) }),
)
Also, if you do not need the faker to be initialized with a specific seed, you can use nil and the default faker will be used.
So, the real issue is that the database driver should set the type of the column to int32 and not int
What codegen driver are you using?
Side note, to supply custom mods, you can write much less code by using one of the generated column mods.
dbFactory.AddBaseEmployeeMod(
factory.EmployeeMods.RandomizeAllColumns(&myFaker),
factory.EmployeeMods.IDFunc(func() int { return rand.Intn(1<<31 - 1) }),
)
Also, if you do not need the faker to be initialized with a specific seed, you can use nil and the default faker will be used.
I'm using the postgres driver. (Using postgres v15).
Thanks for the additional information I'll make sure to refactor my solution :)
I'm using the postgres driver. (Using postgres v15).
I'll also need to know the column type being mapped to int. Can you share the rough schema for the table?
Yeah this is the rough schema:
schema.sql
CREATE TABLE employee(
id SERIAL PRIMARY KEY,
first_name VARCHAR(60) NOT NULL,
last_name VARCHAR(120) NOT NULL,
full_name VARCHAR(255) GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED
);
I then use atlas to generate the desired table states using this command:
atlas schema apply -u ${DB_URL} --to="file://schema.sql" --env ${ATLAS_ENV}
To find the underlying table structure I run this query:
query:
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'employees'
ORDER BY ordinal_position;
Which prints:
column_name | data_type
-------------+-------------------
id | integer
first_name | character varying
last_name | character varying
full_name | character varying
(4 rows)
Seems to be correct according to the postgres docs: https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-SERIAL
To clarify, are you using bobgen-psql or bobgen-atlas?
To clarify, are you using bobgen-psql or bobgen-atlas?
I'm using this command to generate the ORM:
PSQL_DSN=${DB_URL} go run github.com/stephenafamo/bob/gen/bobgen-psql@latest -c config/bobgen.yaml
Then my bobgen.yaml
psql:
output: generated/models
except:
public.migrations:
I want to use the bobgen-atlas in the future but atm I can't seem to connect to my table using it but the boben-psql is working fine for now :)
Good to know 👍, I'm new to atlas and HCL thats why I'm keeping to SQL for now. Although I suppose I can always generate the hcl file from my schema.sql 🤔 if this is a problem that doesn't happen in the bobgen-atlas library.
When you map through the postgres serial types
case *postgres.SerialType:
switch t.T {
case "smallserial", "serial2":
c.Type = "int16"
case "serial", "serial4":
c.Type = "int"
case "bigserial", "serial8":
c.Type = "int64"
default:
c.Type = "int"
}
Should you change the type to int32 if it's serial or serial4?
This should already be fixed on the main branch.
A number of improvements have been made for randomization, including generated tests for randomization.
While running those generated tests, I did come across this and already fixed integrer to be generated as int32
Try running from the latest commit, and let me know if you're still running into issues.
I've updated bobgen-atlas to fix this too in #163
@stephenafamo latest commit worked for me thanks 🦄 🦖 🔥
Released v0.24.0
|
GITHUB_ARCHIVE
|
The suse font of self-awareness is a very versatile tool. It’s a clever way of creating a beautiful, colorful font, but even if you don’t use it, it’ll have a different effect when you use it on the rest of your life.
It can be used as a way to create a subtle change in font or design to create an interesting effect, but often its used to create a more dramatic effect on the whole design.
suse is a very versatile font. Its used for a lot of applications, but you can use it also for the blog to make a design look more professional and professional. You can also use it on the website if you want to create a design with a professional look. The font is used on many different websites and applications.
For the suse font, I tried to create a design that is something that is quite unique. You can use suse font in many different ways to make a design more professional. You can use to create a more professional design using a design like the suse font. You can use for the blog to have similar look to the suse font. You can use for the website to create a design with a professional look.
For the suse font, I added the font to the design to make it more professional.
For those of you who are looking for a more professional font for the suse, there are many fonts available. One of the many things that are great about SuSE fonts is that they are easier to use. For example the suse font which is easy to use and can be used by just about any website, and you can also use it for any website you want to build your website. This is the font which we use in SuSE.
The first time I got a suse font, I first saw a design error in the font. I was trying to save it for a more familiar site and I got stuck with the design. So I had to stop and read a design page and go to a new place. The same thing happened to me again in SuSE.
If you’re designing your own website, you’ll see a lot of suse fonts on design pages. These suse fonts make it easier for you to build your own website. Suse fonts are easy to use but have a little bit of a different look. Suse font can be used for any type of website but it tends to be used for a more modern look.
There is also another suse font called “suse-latin”, which is a typeface that is very similar to suse. It has a similar look to suse but with a subtle added flair. Suse-latin is a nice font to have on your website, especially if you want to make it look a bit more retro or retro-rock. You can find it at susefont.com.
It’s funny because the word suse means “sky,” which is the same concept as the word “supertuxknot.” Suse, in the context of the word suse font, is used to refer to a typeface that is super, super cool.
|
OPCFW_CODE
|
Visual LANSA Gets Expanded Interoperability, Developer Convenience
February 23, 2010 Dan Burger
Application development goals aim to continually improve business efficiency, drive increased revenue, integrate silos of information, and speed the roll-out of programs that can do all of the above. So it is with LANSA and its Visual LANSA product, which has just reached version 12. The key to the latest Visual LANSA is its diligence to database interoperability and heterogeneous platforms, while still being attentive to the IBM i customers.
Less duplication of the business logic stored in the LANSA data repository is at the top of the list of Visual LANSA enhancements. This is important when multiple development languages are hitting on the IBM i DB2 database, because it’s more efficient when business rules get enforced regardless of the development language. It adds control by centralizing business logic across the entire development environment and eliminates many opportunities for database corruption by validating data before it’s accepted into the system.
This extension of business rules to other development languages was previously limited to the LANSA development environment. Extending the feature to other development environments is indicative of what’s happening as many organizations seek interoperability.
For those not familiar with the LANSA environment, it uses a central data repository to store system-wide validation rules and business logic. That logic can now be applied regardless of platform and development language to green-screen, Web, Windows, or wireless applications. The result is less duplication of common business logic across programs written in RDML, RPG, PHP, C#, VB.NET, Java, COBOL, Synon, and other languages.
Visual LANSA is an application development platform that underpins other LANSA offerings like iFusion.net, RAMP, LANSA Commerce Edition, and LANSA Composer.
“We have taken the rules and the logic that it stored in the repository and put them into stored procedures,” explains Don Nelson, LANSA’s vice president of technical services. “Now when the database is accessed, a DB2 trigger verifies whether the information provided is correct via the rules. If there is an issue, messages are sent back to the calling program. It is a security blanket. You can always check those rules whenever you do anything to the database.
“Interoperability with other databases is a big enhancement. There are lots of islands of information that have to be connected,” Nelson notes.
Although the majority of work in IBM i shops gets done using the DB2 database, SQL is making great inroads into the IBM i shops.
Version 12 of Visual LANSA is the first to allow developers to create SQL files rather than DDS files. Previously those file definitions were allowed to be imported into LANSA. Now they can originate in LANSA.
“When you run a program on IBM i,” Nelson points out, “you can pull data from a Windows or Linux server that may have an Oracle database or a MySQLdatabase or SQL Server database.
“As an example, when the main ERP system is running on i, but the CRM is running on a Windows server, this allows real-time data to be utilized freely among those servers with two or three commands.”
Real-time access to data stored on Windows and Linux servers directly from IBM i applications is expected to become increasingly popular, so Visual LANSA gets it now.
Visual LANSA Version 12 also includes capabilities that provide native access to MySQL databases and Unicode support.
A wide selection of open source, PHP, and Java applications are written to the MySQL interface and many commercial packages are written for MySQL as well.
Unicode support is a feature that could be adopted by international companies with suppliers or vendors in countries using languages other than English. It simplifies the development process for writing in multiple languages. For example, the creation of a product description table could be coded once because the information presented in that one field would be the same, regardless of whether it was presented in English, Chinese, German, Italian, or any other language.
Pre-built modules also factor into the business application framework in version 12, which Nelson describes as a business framework to distinguish it from a communications framework like Microsoft .NET.
“This is a framework that allows users to organize applications. It provides things like the details based on the product orders a company receives,” Nelson says. “It includes things like specifications, product pictures, and that kind of thing.”
The details become a tab in the application as it is being written. For example, they could be attached documents that record conversations or contracts with a specific customer. These capabilities extend applications beyond what they currently provide in most instances. Some of the features found in document management, spool file generation, and report generation have been added.
The pre-built modules offer this type of functionality increase without writing code.
LANSA has provided developers one other convenience item with this release. It’s another wizard. This one called CRUD for Web. Although CRUD sounds like something that’s stuck to the bottom of your shoe, its purpose here is to bring “create, read, update and delete” functionality to Intranet, B2B and B2C Web sites. Adding the capability to access and update data typically adds a pile of complexity to app development.
The CRUD wizard, Nelson says, generates a Web application after taking the developer through a series of questions that identify which files and fields the app is expected to work against, search, and maintain. The Web app will contain drill-down capabilities that link as many levels as a user desires. For instance, the program that’s created might drill down beginning with a company name, then to that company’s customers, then to a specific order, then to individual item pricing and inventory details.
“In the RPG world a person might be able to do three or four of these types of programs in a day,” Nelson estimates. This process could easily take hours and the CRUD wizard makes it into something that can e accomplished in minutes, if you had your work laid out by knowing which files you were going after and you knew the database well. It does not require knowledge of Visual LANSA beyond finding the toolbar on the browser.”
Visual LANSA version 12 has been available since February 1. It can be downloaded or obtained via a DVD. LANSA does not make pricing information available unless it is requested from a company considering purchase.
|
OPCFW_CODE
|
1. >> In Re: SPP (bluetooth) question I said: This seems like a good place to post this wrt: Edison serial BT connection, but I may start a new thread depending on what happens next. When I get the thread number for this new one, I'll put it back in the old thread (which was originally marked answered before I added the question below) to re-direct here.
2. The new Feb 15 2015 v2 Bluetooth guide is here (link) Intel® Edison Boards and Compute Modules — Intel® Edison Bluetooth User Guide or http://www.intel.com/support/edison/sb/CS-035381.htm (hope that works, should see hyperlink and a raw link, this text box tries to convert the raw link)
3. The SPP profile guide is in there, but with some gaps. The biggest gap (for me so far) is on p. 46:
"Using the test-profile python script in the BlueZ test folder, it is possible to get at the application layer of the RFCOMM socket file. The same file is modified a little bit to loopback received data to receive on the other side to verify SPP, and this file is renamed SPP-loopback.py. Download this file and copy the script into your Intel® Edison device. Find the changes in the test-profile.py file, make the necessary changes, and push the SPP_loopback.py file into your Intel® Edison device. using scp."
4. You can pretty much figure out where the BlueZ code is coming from by looking elsewhere in the Intel document, for example the link on p.30 : http://git.kernel.org/cgit/bluetooth/bluez.git/tree/attrib. Back up from there to this: http://git.kernel.org/cgit/bluetooth/bluez.git/tree/test/test-profile
(hah, figured out if you hover over an auto-converted link you can convert it back to raw again)
5. I dont see anything obvious in changelogs http://git.kernel.org/cgit/bluetooth/bluez.git/log/test/test-profile?showmsg=1 or
http://git.kernel.org/cgit/bluetooth/bluez.git/log/test that could affect for example a change from when the Intel document was written to what I see now. Maybe this checkin from Johan Hedberg is relevant but without knowing if we are even close, I cant tell: http://git.kernel.org/cgit/bluetooth/bluez.git/commit/test/test-profile?id=fe57c2641aebfdc0aec7aa53c1254834cd0ba256
6 Anyone know or can you guess what "The same file is modified a little bit to loopback received data" and "Find the changes in the test-profile.py file, make the necessary changes" could mean?
|
OPCFW_CODE
|
/*
* OpHog - https://github.com/Adam13531/OpHog
* Copyright (C) 2014 Adam Damiano
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
( function() {
/**
* These represent certain tile properties. Not all combinations are
* possible, e.g. SPAWNER | CASTLE, and some combinations are necessary,
* e.g. WALKABLE | SPAWNER.
* @type {Object}
*/
window.game.TileFlags = {
WALKABLE: 1,
SPAWNER: 2,
CASTLE: 4,
FOGGY: 8,
// Sometimes you want to fetch a tile that does not have fog
UNFOGGY: 16,
};
/**
* A tile object.
* @param {Number} tilesetID - the ID of the tileset to use. This isn't
* a Tileset itself so that tiles can be saved/loaded easily.
* @param {Number} graphicIndex - the index used to represent this tile.
* @param {Number} tileIndex - this is the index into the current map's
* tile array. It comes in handy a lot, but causes the tile to be tightly
* coupled with our Map class.
* @param {Number} tileX - the X of this tile.
* @param {Number} tileY - the Y of this tile.
* @param {Boolean} walkable - whether or not this tile is walkable.
*/
window.game.Tile = function Tile(tilesetID, graphicIndex, tileIndex, tileX, tileY, walkable) {
this.tileset = game.TilesetManager.getTilesetByID(tilesetID);
this.graphicIndex = graphicIndex;
/**
* See game.TileFlags.
* @type {game.TileFlags}
*/
this.tileFlags = 0;
if (this.graphicIndex == this.tileset.spawnTileGraphic) {
this.tileFlags |= game.TileFlags.SPAWNER | game.TileFlags.WALKABLE;
}
if (walkable) {
this.tileFlags |= game.TileFlags.WALKABLE;
}
if (this.graphicIndex == game.Graphic.GENERATOR) {
this.tileFlags |= game.TileFlags.CASTLE;
}
this.tileIndex = tileIndex;
this.x = tileX;
this.y = tileY;
// If true, this tile is a left endpoint. See Map.js - isTileAnEndpoint.
// This is set by the map.
this.isLeftEndpoint = false;
// See this.isLeftEndpoint
this.isRightEndpoint = false;
// This is an object that relates a left-neighbor's tileIndex (keys in a
// dict can only be strings, otherwise it would be the left-neighbor
// itself and not the tileIndex) to an array of right-neighbors. It
// represents that when you come from a specific left-neighbor, any of
// the right-neighbors are valid.
//
// Not all left-neighbors will exist in this list, for example, if A is
// cardinally above B and B is not a right-endpoint, then B will not have A
// in its leftList.
//
// This tile itself will always appear as a key in leftList, because a
// spawner can be placed on any tile. When units are spawned, they are
// placed as though they came from that tile, which is why this is
// necessary.
//
// If this tile is a right-endpoint, then any right-neighbors stored in
// here will be this tile itself just to have a sane value.
//
// A tile cannot be both a left and right endpoint.
//
// This is only set for walkable tiles.
this.leftList = {};
// This is basically the same thing as leftList except it relates a
// right-neighbor's tileIndex to an array of left-neighbors.
this.rightList = {};
};
/**
* Debug function to print a tile's lists.
* @return {undefined}
*/
window.game.Tile.prototype.printList = function() {
this.realPrintList(true);
this.realPrintList(false);
};
/**
* Very straightforward functions below.
*/
window.game.Tile.prototype.isWalkable = function() {
return (this.tileFlags & game.TileFlags.WALKABLE) != 0;
};
window.game.Tile.prototype.isSpawnerPoint = function() {
return (this.tileFlags & game.TileFlags.SPAWNER) != 0;
};
window.game.Tile.prototype.isCastle = function() {
return (this.tileFlags & game.TileFlags.CASTLE) != 0;
};
/**
* Debug function to print a tile's lists.
* @param {Boolean} printLeftList - if true, this will print leftList.
* @return {String} - an empty string simply so that Chrome doesn't print
* "undefined"
*/
window.game.Tile.prototype.realPrintList = function(printLeftList) {
var listToUse = printLeftList ? this.leftList : this.rightList;
var stringToUse = printLeftList ? 'left' : 'right';
console.log('tileIndex ' + this.tileIndex + '\'s ' + stringToUse + 'List: ');
for( var stringIndex in listToUse ) {
var indexAsNumber = Number(stringIndex);
var rightString = '';
var rightNeighbors = listToUse[stringIndex];
for (var i = 0; i < rightNeighbors.length; i++) {
if ( i > 0 ) rightString += ' ';
rightString += rightNeighbors[i].tileIndex;
};
console.log(indexAsNumber + ': ' + rightString);
};
return '';
};
/**
* Convenience function to get the the world X of this tile's center.
*/
window.game.Tile.prototype.getPixelCenterX = function() {
return this.x * game.TILESIZE + game.TILESIZE / 2;
};
/**
* Convenience function to get the the world Y of this tile's center.
*/
window.game.Tile.prototype.getPixelCenterY = function() {
return this.y * game.TILESIZE + game.TILESIZE / 2;
};
/**
* Converts this tile into a spawner point.
*/
window.game.Tile.prototype.convertToSpawner = function() {
if ( !this.isWalkable() ) {
console.log('Can\'t convert a non-walkable tile into a spawner, ' +
'because it wouldn\'t be in any of the map\'s paths. Tile index: ' + this.tileIndex);
return;
}
this.tileFlags |= game.TileFlags.SPAWNER;
};
/**
* @param {Tile} otherTile - some other tile (or null or undefined)
* @return {Boolean} - true if they're equal
*/
window.game.Tile.prototype.equals = function(otherTile) {
return otherTile !== undefined && otherTile !== null && this.tileIndex === otherTile.tileIndex;
};
//TODO: if there are starting LEFT puzzle pieces that look like these ones,
// this code will fail:
//0 0 0
//1 1 1
//0 0 0
//0 0 0
//
//0 X 0
//Y 1 0
//0 1 1
//0 0 0 (in this case, the castle needs to be where the X is, but it
// would be where the Y is)
/**
* Converts a tile into a castle
* @return {[type]} [description]
*/
window.game.Tile.prototype.convertToCastle = function() {
if ( this.isWalkable() ) {
console.log('Can\'t convert a walkable tile into a castle: ' +
this.tileIndex);
return;
}
this.tileFlags |= game.TileFlags.CASTLE;
};
}());
|
STACK_EDU
|
/**
* This module exports two helpful names:
*
* Spelling - an simplified interface for spell checking.
*
* initNodeHun - a wrapper around Nodehun that cli.ts uses to construct a spell checker.
*/
// This module exports a single function.
import loadEnUS from 'dictionary-en-us'
import { Nodehun } from 'nodehun'
interface Spelling {
isCorrect: (word: string) => boolean
}
class NodehunSpelling implements Spelling {
constructor(private nodehun: Nodehun) {}
isCorrect(word: string): boolean {
return this.nodehun.spellSync(word)
}
}
// The documents and speller are memoized, but eslint doesn't recognize the null checks.
/* eslint-disable require-atomic-updates */
let nodehun: Nodehun | null = null
const initNodehun = async (): Promise<Nodehun> => {
if (nodehun) { return nodehun }
let documents = await initDocuments()
if (!documents) { throw new Error("Can't create Nodehun due to missing dictionary documents.") }
nodehun = new Nodehun(documents.aff, documents.dic)
return nodehun
}
interface DictionaryDocuments {
dic: Buffer
aff: Buffer
}
let documents: DictionaryDocuments | null = null
const initDocuments = async (): Promise<DictionaryDocuments> => {
if (documents) { return documents }
let res: any = await new Promise(
(resolve, reject) => loadEnUS(
(err: string, result: DictionaryDocuments) => {
if (err) { return reject(err) }
return resolve(result)
}
)
)
documents = res
if (!documents) { throw new Error('Failed to load DictionaryDocument') }
return documents
}
export {
initNodehun,
initDocuments,
Nodehun,
Spelling,
NodehunSpelling
}
|
STACK_EDU
|
import { CoordinatePoint, ConverteStrategy, ConverterFn } from "./interface";
import BD09toGCJ02 from "./strategies/bd09togcj02";
import GCJ02toBD09 from "./strategies/gcj02tobd09";
import GCJ02toWGS84 from "./strategies/gcj02towgs84";
import WGS84toGCJ02 from "./strategies/wgs84togcj02";
interface ConverterConfig {
from: "BD09" | "GCJ02" | "WGS84";
to: "BD09" | "GCJ02" | "WGS84";
precision?: "m" | "dm" | "cm";
converter?: ConverterFn;
}
const isFunction = (value: any): boolean =>
toString.call(value) == "[object Function]";
const isArray = (value: any): boolean =>
toString.call(value) == "[object Array]";
class Converter {
config: ConverterConfig;
constructor(config: ConverterConfig) {
this.config = config;
}
private _getStrategy(): ConverteStrategy {
const { from, to } = this.config;
const strategyName = from + "to" + to;
const unit = this.config.precision;
switch (strategyName) {
case "BD09toGCJ02":
return new BD09toGCJ02(unit);
case "GCJ02toBD09":
return new GCJ02toBD09(unit);
case "GCJ02toWGS84":
return new GCJ02toWGS84(unit);
case "WGS84toGCJ02":
return new WGS84toGCJ02(unit);
default:
return new BD09toGCJ02(unit);
}
}
convert(
Point: CoordinatePoint | CoordinatePoint[]
): CoordinatePoint | CoordinatePoint[] {
const strategy: ConverteStrategy = this._getStrategy();
let converterFn: ConverterFn = strategy.doConvert;
if (this.config.converter && isFunction(this.config.converter)) {
converterFn = this.config.converter;
}
return isArray(Point)
? (Point as Array<CoordinatePoint>).map(converterFn)
: converterFn(Point as CoordinatePoint);
}
}
export default Converter;
|
STACK_EDU
|
Difference between OnwardFlights.com and FlyOnward.com?
I found a thread on NomadForum.io discussing OnwardFlights.com and FlyOnward.com, both services seem legit as many people on that thread have used them.
EDIT: FlyOnward.com now stopped working - see answers for more info
Each company provides the same service in appearance, but there seem some confusion.
User "lonelyblogger" wrote:
flyonward.com I booked a test ticket from Thailand to Vietnam and they sent me a Vietnam Airlines ticket from Bangkok to Ho Chi Ming city, the code was valid when I checked with VNA's website. I love it
Then user "andrewkent" wrote:
I just used onwardflights.com [...] I provided them with the confirmation number and date of the flight, the agent punched it into her computer, and she handed me my boarding pass without asking any other questions.
Finally, user "lonelyblogger" wrote:
I used both. Onwardflights.com photoshops tickets, Flyonward.com books real tickets. Onwardflights is cheaper for a reason.
I'm not sure if that last quote makes sense though as user "andrewkent" wrote before that he used this service and "the agent punched it into her computer". You would think that at that point the agent would find out if the ticket was fake? Or maybe she was not checking the data, just adding it as reference in her airline's database. Confusing.
The question that really matters: do both company provide a flight confirmation that can then be checked later on (i.e. by immigration or airport staff, when leaving or entering a country)?
If you know more: is there any difference in the service provided? Any details on these services/companies would be appreciated.
It seems you found a straighforward explanation yourself, is there any reason not to believe it? It's not about the “quality of service”, one of them produces falsified tickets, the other real tickets you won't use, a different kind of fraud. It's up to you to decide which one you want to commit. I would not use either if you are going to a country where officials could be expected to verify your info and care about it (as opposed to countries with “pretend regulations” that just want to see some official-looking document).
I clarified the question as I was confused by the contridiction between what andrewkent and lonelyblogger wrote
"I would not use either if you are going to a country where officials could be expected to verify your info": maybe you're right for now, but who knows, maybe we will all be using this all the time in a few years time regardless of the country we travel to. These are new services so we still see them as rather "experimental" but my guts tell me it's well worth looking into it rather sooner than later.
Is advice on defrauding immigration officials within travel.se's purview?
@davidvc oh come on. That's just a strategy, nothing illegal if the ticket is verifiable. If it's Photoshoped then I definitely would NOT do it.
I wasn't commenting on the morality (indeed, from a purely pragmatic point of view, if you can demonstrate the resources and intent to leave, it seems silly to require a confirmed onward ticket). As @Relaxed says, though, either way you're deceiving the officer: by showing a "ticket" which doesn't exist, or a ticket which you cannot ever use (because the website will cancel it shortly after your entry). And I wasn't sure if questions asking for advice like that were in the scope of the website.
@AdrienBe OK, I understand your question now, sorry for the confusion (+1). Basically, you want to know whether it's true that onwardflights really photoshops tickets.
@davidvc I see, we enter a grey area here I suppose. I'm being pragmatic here: it's about traveling & travel.se has a great community so why not leverage that? I'm confident we'll get some great answer ;)
@Relaxed in a nutshell: yes :) But I clarified the question to "do both company provide a flight confirmation that can then be checked later on (i.e. by immigration or airport staff, when leaving or entering a country)?" because I think this is what really matters.
I used flyonward.com three times in the last 7 weeks. In all three cases I got real confirmed flight reservations which I could verify on the airlines' websites. Only in one case an immigration officer actually wanted to see a ticket on arrival. Showing him the PDF on a phone was enough. I have not used onwardflights, so I can't answer your question
@AdrienBe In many countries it would be illegal to mislead the immigration inspector as to your intentions, whether or not the PNR exists. The other point is that, my suspicion is that, despite claims to the contrary, this website is generating a PNR, not a ticketed reservation (hence the auto cancellation after 48 hours), which may fall foul of a very strict interpretation of the rules. Also, why any airline allows a travel agent to abuse its reservation system like this is a bit of a surprise as well. Qatar's revenue management team would blow its lid if this was happening on their system.
@Calchas what is a PNR?
@AdrienBe A PNR (passenger name record) is a database record on the airline's computer that holds the information about you and your flights. But crucially it is a separate system to ticketing, which are financial documents. It is possible to create a "confirmed" reservation on a flight for a particular person, get the 4 to 6 digit alphanumeric code (the PNR reference) and see the flight on the website without actually having paid for it or actually having a ticket. The computer will usually delete the PNR if a ticket is not attached to it within a few days.
@AdrienBe My guess (and I could be wrong) is that this is how FlyOnward.com operates.
I was thinking about getting one to leave Costa Rica, did it work? Are they actually checking when you arrive at the destination's customs as well as when you check in (depart)?
For my current trip I needed two flights to show the Chinese Visa Office and one onward flight from Taiwan, which I was asked to show at the Scoot check-in desk in Singapore on my layover. All tickets were booked by my coworker who also word at Flight Centre. He told me they do this all the time. He also mentioned that he would get in trouble from his boss for any such flights that he forgot to cancel at the end of the day. Hey may have used an industry jargon term other than "cancel". My impression is that these websites do what my colleague does all the time, for a fee, as a business model.
@JanDoggen You may be right, or not. But either way, what are our options? Really book a flight which we don't take? That's the same result except that we would lose much more money. Of course, if I can use something easier as a proof of onward travel (bus, train or boat going outside of country), then I'm happy to do it.
@JanDoggen I agree, I would recommend to use this as a last resort (ie. other proofs of onward travel are too expensive). However, that's only my opinion. And expressing our opinion regarding whether or not one should or shouldn't use these services is off topic for this question. You can always create another question if you want to discuss this... although I think that opinionated questions will be closed by moderators
I used flyonward.com in Nov 2017. I never received my plane ticket.
I used onwardflights.com in Nov 2017. It worked just fine, I received my plane ticket some hours after ordering.
Appears flyonward.com no longer deliver the tickets.
flyonwards.com buys fully refundable tickets in your name and automatically cancels the ticket 24 or 48 hours afterwards, depending on your selection. You receive the receipt and ticket from the airline.
In regards to the legality of it, it's exactly the same as buying a fully refundable ticket from a travel agent (here flyonwards) and cancelling it later on.
FYI: Three of us were traveling from Singapore to Bali. We all received payment confirmation, but no ticket. The support is dead silent (it's been like ~10 days now). Strange thing is that my colleagues were buying tickets from them previously without any issues. At this moment, I would advise against using this service!
Weird. I have used them a dozen times, and they always came through for me. However I haven't used them recently. Let's hope it's just a temporary problem.
Be warned that there are many, many reviews of flyonward.com that describe it as taking your money without delivering a ticket. I'd guess that whatever ticket booking system it formerly used has cottoned on to its mass ticket cancelling shenanigans and blocked it from buying tickets, and whoever maintains it has just left it up in a broken state
Confirmed. I used flyonward.com in Nov 2017. I never received my plane ticket.
flyonward.com is now dead and redirects to an insecure squatted site.
But I discovered a new competitor at onwardfly.com that charges $9.99 for what seems to be the same service that onwardflights.com provides for $7.00.
|
STACK_EXCHANGE
|
A radius map tool allows you to create custom radius maps with only a few clicks. You can immediately create an Excel report for your location that describes the demographics and locations within your search distance.
Radius mapping software is a tool that allows users to create and visualize maps of geographical areas based on a specified radius or distance from a central point. ZIP Code radius maps are an excellent way to explore and show market and trade areas.
You can also create driving radius maps showing the fastest time rings or the shortest distance rings. Use these maps to identify how far customers are willing to travel to your stores or to find locations that are underserved by emergency services.
Maptitude has a powerful mile radius calculator that you can use to quickly determine the closest locations to a list of origins or to determine the travel costs between them (more...).
Maptitude ships with the most accurate ZIP Codes and street maps available, ensuring that your results enable you to make vastly improved business decisions than by using spreadsheets alone.
“Affordable, easy to use, but powerful”
“I work in Real Estate market analysis. Maptitude has many tools for demographic and location analysis. I can pull population data within drive bands, radius, or various other ways. I create awesome visual maps that clients love. I have also done little more complex retail gravity huff model for regional shopping centres.”
The benefits of radius mapping software are that an analyst can better understand, analyze, visualize, and optimize data for business decisions.
Radius maps show circle-shaped features to indicate the distance from a central point. Examples include choropleth heat maps, which show the intensity of an area in relation to the distance from the central point, and population density maps, which show how many people live within a certain radius.
A map radius calculator makes it easy to find alternative service providers, backup field representatives and the nearest locations. The calculator determines the closest sales representative to each customer or the closest vendor for each store. If the nearest location is not available for reasons such as low inventory or temporary outages, you will quickly be able to choose the alternate nearest suppliers. The results are provided in a table format.
Maptitude allows you to draw a radius on a map. You can draw a radius around a location, draw a circle around a point, or draw a circular area around a set of coordinates. It also allows you to customize the size and color of your radius and can be used for a variety of applications such as route planning, boundary analysis, and market analysis. A variety of options allow you to:
The radius analysis can then be exported to:
Google Maps has a feature that allows you to draw a radius on a map. To use this feature, search for your desired location on Google Maps. Next, select the "More" option from the menu, and select the "Measure Distance" option. You can then draw a radius around your desired location.
To print and measure a radius map, you will need to follow these steps:
It is also possible to create a radius map manually by drawing it on a blank map or paper. To do this, you will need to use a ruler or other measuring tool to mark the radius around the center point and draw the map by hand.
|
OPCFW_CODE
|
Artificial Intelligence is rapidly evolving. The pace of development and scale of impact of AI puts it in the unique position where education, research, and social good are converging rapidly. One Fourth Labs is founded to situate itself at the cusp of this convergence.
At the forefront, we develop learning content on AI. By 2020, we will have a course stack that spans from data sciences to machine learning to deep learning. We believe that such a coherent treatment of these topics that combine in equal measure the mathematical insight with practical skill will bring much value.
We recognised that capacity building is just the start. To ensure the right employment opportunities are generated, our business organisations - big and small - need support in adopting AI at the right time and at the right scale. We work closely with organisations in this major transformation.
Finally, we have to recognise that AI has the potential to create impact at scale disproportionate to the design and build efforts. Especially, given India’s severe and several challenges, we need to pool together people and processes to create AI solutions that solve real challenges.
What is in a name?
One Fourth Labs is named after turiya a philosophical construct from the Mandukya Upanishad that hypothesises that there is one background that underlies and transcends the three common states of consciousness: waking, dreaming, and deep sleep. Hence, the fourth that is the one. One Fourth.
Mitesh M. Khapra is an Assistant Professor in the Department of Computer Science and Engineering at IIT Madras. He researches in the areas of Deep Learning, Multimodal Multilingual Processing, Dialog systems and Question Answering. He holds masters and Ph.D. degrees from IIT Bombay. He has worked for over 4.5 years at IBM Research and published over 25 papers. He was a recipient of the IBM PhD Fellowship and the Microsoft Rising Star Award. He is also a recipient of the Google Faculty Research Award, 2018.
Pratyush is an Assistant Professor at the Department of Computer Science and Engineering at IIT Madras since April 2018. He received his Bachelors and Masters of Technology in Electrical Engineering from IIT Bombay in 2009. He then completed his PhD in Computer Engineering from ETH Zurich in 2014. He then spent over 2.5 years at IBM Research, Bangalore and a few months consulting for machine learning startups. His current research focus is on hardware-software co-design of deep learning systems. He has authored over 35 research papers and has applied for over 20 patents.
Machine Learning Engineer
Prem is Machine Learning Engineer working at One Fourth Labs. He completed his B.E in Computer Science from PSG College of Technology, Coimbatore. He is a deep learning enthusiast with a vision to implement AI technologies across the country.
Machine Learning Engineer
Manick is a Machine Learning Engineer working at One Fourth Labs. He holds a Bachelors in Mechanical Engineering from the College of Engineering Guindy, Chennai. Mechanical by qualification and Software by passion, he has 3 years of experience working in the healthcare IT sector.
|
OPCFW_CODE
|
Are the joist repairs in my home safe and correct?
As the title says, I recently bought a home and am in the early stages of finishing the basement. However, after tearing down the old ceiling tile, I found that my floor had experienced water damage and that for some reason, the joists were cut and sistered using very short lengths (as seen in pics) My question is: how soon do I need to get this fixed and what kind of damage to my wallet can I expect?
It's really difficult to tell what's going on here but it looks like the original joists have been cut short of the supporting wall and then extensions screwed or nailed to the ends. This repair does not appear to be sound to me since these are not "sistered" but just added on and are going to be significantly weaker than the original joist.
I only see one patched joist. Are there more?
@isherwood There is one other that looks similar.
@jwh20 From what I can tell is that there was some water damage beneath the square piece nailed to the floor to cover it up. Based on that, I'm assuming that the joists had experienced similar damage to the point of someone wanting to cut them off and "sister" them.
What is above this?
@DMoore, sliding door between the living room and the back deck
There looks to be no movement in the joists. I myself would put a large simpson tie on these with 10-15 nails a piece before closing it up. As long as the area won't have a piano, treadmill or something really heavy it is really of little concern.
I tend to answer in practical terms, and not in strictly code-compliant or legal terms. In cases like this, code doesn't really apply since it's old work and you're making improvements. We're only talking two joists, right? That's not so terrible.
No, this repair was not done well. Short joist patches only work if they're fastened really well. These aren't.
No, your floor won't immediately collapse (unless you store grand pianos or stacks of aquariums above).
Yes, I would seek to fix the situation. Full-length sistering isn't normally too difficult. I'd do that, and take precautions such as temporary posts if you're removing lumber to fit new. The sting in your wallet will only be due to the crazy price of lumber at the moment.
I'm a little bit nervous about strictly repairing the floor joists because it looks like the header joists have experienced water damage as well (behind that particular joist is a ledger board that is holding up a badly designed deck, so I'm wondering if water somehow got behind the ledger to the header.)
Is that a question or a request for more information? What's a "header joist"?
Header joist aka rim joist. Mmm, no it is not a question or a request for anything, just a layman's observation. But I would certainly welcome any responses.
Edited to add additional comments.
I agree with Isherwood answer to your questions. The extension is weak because it does not overlap the original cutoff joist enough and only 5 nails to connect the extension to the old joist.
Those short 2 x 6 on the inside are only to support the plywood platform to support the boards above. Most likely the ends of the boards were cut off because of the rot.
I would look into what caused the original water damage, as the second photo shows a watermark after the repair. Your comment about water damage on the joist rim maybe because their is still a leak.
Agreed, my plan was to remove the platform supporting the subfloor to see if there is rot again, and to possibly find the source of it. Behind the water marks of the second picture is a ledger board for a very old and worn out deck. I'm expecting an unsealed gap in the ledger to be the source of this leak.
|
STACK_EXCHANGE
|
What is the coding language used for the software used on the ISS? Is it NASA's own coding language, or is it something like C, or C#, maybe Haskell?
Almost all of the safety critical software that runs on the US side of the Space Station is written in Ada. I wrote "almost all" rather than "all" because there are probably some low level device drivers written in assembly. I can't find out in which language / languages the code that runs on the Russian side was written. I wouldn't be surprised if that also is largely Ada.
Non-safety critical software (e.g., anything running on a laptop) is written in a mix of languages.
22$\begingroup$ Wow, this makes me curious as to What makes Ada the language of choice for the ISS's safety-critical systems? $\endgroup$– uhohJun 3, 2019 at 7:38
8$\begingroup$ @PearsonArtPhoto - If it's safety critical, yes. The safety critical software runs on the so-called Multiplexer-Demultiplexer (MDM) computers and critical display devices. Non-safety critical software runs on laptops. $\endgroup$ Jun 3, 2019 at 12:13
9$\begingroup$ What are your sources? $\endgroup$ Jun 3, 2019 at 15:20
4$\begingroup$ @Bruno most likely inside information. (This person appears to work there) $\endgroup$ Jun 3, 2019 at 20:36
1$\begingroup$ @Nefrin - That is true to some extent, and apparently more so in Europe than in the US. That the US Department of Defense dropped the Ada mandate 20 years ago led to many project managers having new projects be coded in anything but Ada. $\endgroup$ Jun 5, 2019 at 12:20
There are a lot of programs involved in running the ISS. The exact details are difficult to discern, a lot of NASA's software is available via this site, with some restrictions, but here is what I can find.
- Astrobee- Runs the "Robotic Operating System"
- Geolocation via a Python Library
- Some elements use LabView
I'm sure there are many other languages, including C, C++, and C#, among others, but it would be very difficult to get a complete list.
10$\begingroup$ software.nasa.gov is where NASA catalogs it's released software. Much of the software for the ISS is not releasable. $\endgroup$ Jun 3, 2019 at 4:23
"The software"- makes it sound like there's a single monolithic program running everything. This won't be the case. There will be hundreds of subsystems, each with several levels of hardware and software automation, each of which will have been built with on a number of tools, technologies, and platforms. $\endgroup$
|
OPCFW_CODE
|
This is an automated email from the git hooks/post-receive script. gregoa pushed a change to annotated tag upstream/3.300 in repository libimage-size-perl.
at 287ea06 (tag) tagging c7dbc590cb804c08e724e62baef86e0a720f82b5 (commit) replaces upstream/3.232 tagged by gregor herrmann on Fri Jun 19 11:56:23 2015 +0200 - Log ----------------------------------------------------------------- Upstream version 3.300 Baldur Kristinsson (1): Add support for WEBP, ICO and CUR file types Brian Fraser (1): Avoid a sprintf() warning in Perl 5.21 David Steinbrunner (1): typo fixes Geoff Richards (1): Add support for old OS/2 version of BMP header Masahiro Nagano (1): bugfix for some jpeg file like http://cpansearch.perl.org/src/TOKUHIROM/Image-JpegCheck-0.10/t/bar.jpg Neil Bowers (3): Added =encoding utf8 to pod - the accented character was causing a pod error Added Z<> to the =item to resolve pod warning Added link to github repo to doc Randy J. Ray (53): Per RT#43452, make the cache visible outside the lexical scope of the module. Making large-scale house-keeping changes to the build/dist process. Replace Make the package buildable in a pure-Perl software stack. Admin file Prep work for rolling a 3.2 release Forgot to credit Craig MacKenna for his idea for the cache/shared mem change. Ignore generated HTML The change-over to a Build.PL left imgsize not being installed. Some new files to ignore. Added patterns to .gitignore, added initial README.textile. Adjustments to the textile formatting. This will become a template for others. Removed a stray colon causing errors with some Perl versions. Removed useless signature test, added QA tests, removed a duplicate test. Moved around some conditionally-needed libs to delay loading until/unless Prep for 3.210 release Small fix to the regex for detecting GIFs, per Slaven Rezic. Prep for 3.220. Large-scale code and documentation clean-up based on perlcritic Added new entries. Added MYMETA.yml to skip-list. Prep for 3.221 release. Textile oops. perlcritic clean-ups from new rules. RT#59995. Added support for Windows Enhanced Metafile Format (EMF). Forgot to bump the version number. Move the author/distro-sanity tests to an "xt" directory. Fixed mode on file. Build/admin file updates for 2.230 release. Turns out the 4 bounding-box ints for EMF are signed. More ignoring. Added MYMETA.json to ignore list. Merge branch 'master' of github.com:rjray/image-size Fixed so that default output now catches errors. Merge pull request #1 from kazeburo/master Merge branch 'master' of github.com:rjray/image-size New test (and image) for JPG tag-offset issue. Multiple changes in this commit: Small change to swfmxsize for short-buffer issues. Build/admin file updates for 2.231 release Removed the "!" flag in pack template for EMF. New additions for files to ignore. Build/admin file updates for 2.232 release Merge pull request #4 from GeoffRichards/master Merge pull request #5 from dsteinbrunner/patch-1 Merge pull request #6 from Hugmeir/master Bump version number. Merge branch 'master' of github.com:rjray/image-size Merge pull request #7 from neilbowers/master RT#41238: Modified patch from user for unpack dying. Merge pull request #9 from bk/master Fix some perlcritic issues. Build/admin file updates for 3.300 release. Forgot to change version number. gregor herrmann (1): Imported Upstream version 3.300 rjray (122): New repository initialized by cvs2svn. Initial revision Initial revision Fixed a bug in jpegsize and added some clarity to docs and comments. Added imgsize, a simple script that sizes images from the command-line. Updated for v1.1. Initial revision Updated for v1.1 Assignment of $Image::Size::revision caused an error due to q//. Revised by Bernd Leibing <bernd.leib...@rz.uni-ulm.de> to use var Changed imgsize to imgsize.PL (from patch by Bernd Leibing Notes for 1.2 and credits to Bernd Leibing <bernd.leib...@rz.uni-ulm.de> Image::Size 2.0 package: Changed list of files for those added and deleted. Updated for Image::Size 2.0. Specified GNU zip as compressor. New tests for new formats, new error reporting. Updated for Image::Size 2.0. Image::Size 2.0 package: better GIF, JPG and PNG handling. Added PPM handling. Notation of the changed error syntax. Updated for release 2.2. Updated for 2.2 Small patch to set binmode for OS/2, etc. that need it. Also change to Fixed it so that the imgsize script gets removed for make clean or make Fixed usage of AutoLoader to 5.004 standards and fixed glitch in XPM regex. Documented specifics of new (2.3) release. Fixed tiny bug in jpeg code that failed to return "JPG" as the 3rd Changed for 2.4. Release 2.5: Support for TIFF images Fixed some problems with reading of XPM and XBM headers. In the case of XPM, Isn't it amazing how flexible that XPM format is with regards to whitespace? Incorporate changes from Cloyce Spradling. Updated for release 2.6 Corrected numerous documentation errors and make the base imgsize routine Changes for release 2.7. Changes for the release of version 2.8. Added support for BMP files, changed VERSION to 2.8. Added a test for the new BMP support. Added some docs and better error-handling. Added attributes to the WriteMakefile call for ActiveState PPM (only done Four changes: fix to GIFs that are GIF87 but have GIF98a-style indicators; Updated for release 2.9. Moved some things around, and added two significant changes: no longer uses Updated for release 2.10 Change the version number for this release due to CPAN treating 2.10 as Changed a lingering uswest.com address to the current. Initial check-in A basic step-by-step for those whose Perl lacks MakeMaker support (generally Checkpoint for the sake of CVS synchronization on laptop. All (known) bugs Admin file corrupted file GIF-named-JPG to replace dave.jpg Updated html_imgsize test and replaced image for test 6 Changed dave.jpg to pak38.jpg Change history moved here from README Added ChangeLog Updated for 2.902 Changes are numerous. See the 2.902 entry in ChangeLog for details. Base checkpoint to pacify CVS Minor fix from CPAN Testers Group for workability on Macs *** empty log message *** Changes for 2.904 release Bumped version number for CPAN Added PREREQ_PM clause for detecting File::Spec Changed from imgsize.PL to plain imgsize, thanks to MakeMaker features Reflect change to imgsize script Added copyright notice so that Debian could use the module Oops-- forgot to bump $VERSION. This warrants move from 2.904 to 2.91. Updates for minor release Manually added a patch from Dan Klein to close files that imgsize opens. Housecleaning Notes for 2.92 Sample Flash file from Dmitry Dorofeev <d...@yasp.com> Added code from Dmitry Dorofeev <d...@yasp.com> to handle ShockWave/Flash Forgot to credit Dmitry in the docs. Added test file for Flash support Added test for Flash support Changes for 2.93 Basic RPM specfile template to allow for building a noarch RPM and SRPM for Adapted parts of the Perl-RPM Makefile.PL to enable building of RPM and SRPM Renamed this file Changes to the name of the template spec file and the generated spec file Test file for PSD (PhotoShop) support *** empty log message *** Added entry for the PSD test file Added test for PSD code, using recently-supplied image Corrected a bug in psdsize(), credited source of the PSD test image, and Changes for version 2.94 Added manual disabling of the cache, and added support for PCD images. Changes for 2.95 release Silly typo in PCD code *** empty log message *** Fixed some lingering tsoft.com e-mail addresses, cleared up the docs per Jeff Notes and version number changes for 2.97 release Small change to step #2 Fixed some documentation issues and a small buglet in an error message. Bumped the version number. Wouldn't have to keep remembering this if I'd move Changes for 2.98 Applied two patches from Ville Skytt� <ville.sky...@iki.fi>, providing MNG and Updates for 2.99 Small change in the block that sets up read from a scalar ref, to avoid Way-long-overdue code cleanup. Bumped version number and moved the trailing "1;" for safety-sake. Changes for 2.991 release Abandoning this Update for 2.992 Added support for Flash 6/FlashMX. Syncing upwards the deletion of .cvsignore. Merge of the local copy as of release 3.0. MIME types Trying to fix MIME Massive check-in to reproduce history from drive crash. This will establish Last change to bring the repository up to 3.01 level. Adjustments to SVN properties. Adjustment to SVN properties. Removing META.yml and SIGNATURE from permanent place in MANIFEST. Removing Changed the copyright info and the licensing. Added COPYRIGHT and LICENSE Small patch from n...@shaplov.ru to fix CWS-related error. Restored for users who don't have Module::Build installed. Admin changes for 3.1 release. Various and sundry changes for 3.1.1. Misspelled a prereq name. Fix URL/specification of the license info for LGPL. ----------------------------------------------------------------------- No new revisions were added by this update. -- Alioth's /usr/local/bin/git-commit-notice on /srv/git.debian.org/git/pkg-perl/packages/libimage-size-perl.git _______________________________________________ Pkg-perl-cvs-commits mailing list Pkgfirstname.lastname@example.org http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-perl-cvs-commits
|
OPCFW_CODE
|
I'm creating a system that I want to be able to scale up to trillions of triples, and I have serious doubts about how to achieve this using existing triple stores. What should I avoid to ensure I never cripple myself in ways that are not apparent until the data has grown? I would prefer to bias my designs (now) in favour of fast query time (i.e. to better serve responsive web pages).
Is SPIN regarded as the fastest form of rules implementation? Should I partition my graphs one way or another? Should I be creating several SERVICEs to allow parallel querying, or will the existing implementations (of highly scalable triple stores) take care of all that? Should I stick to a specific OWL subset to ensure good inference and query performance? What have your experiences been in refactoring existing large ontologies and datasets to rectify performance issues?
The question initially mentioned AllegroGraph on EC2, but really, it doesn't matter what platform I use if I can get acceptable performance at the scale I aspire to.
Further discussion on this topic can also be found here.
There are 2 dimensions to performance, capacity and scalability which are both interelated.
Fortunately, the features of Semantic Web support both and irrespective of the native features of allegrograph, the diagram below illustrates this:
The basic premise behind this solution is the Shard and Federate design pattern. Here you have a load balanced federation cluster that federates and delegates incoming SPARQL queries to the other shards of the complete dataset. Its concern is only to federate an provide the final solutions to the client. As an additional feature you might want the federation cluster to know what shards (SPARQL endpoints) hold what datasets (graphs), thereby reducing the need for the HTTP client to know what shards (SPARQL endpoints) they need to query for their datasets. This is especially important if the number of datasets and sparql endpoints in going to expand and collapse quite frequently.
Individual shards are load balanced with the option to persist data on a storage area network (a bit old school I know).
As a paradigm for application:
Alternatively, look at using Chef to manage cluster/clouds. My general approach to configuration management is to avoid a system that relies on configuration (a well thought out convention works better).
Finally you also touch upon updating data and providing reasoning (the 2 again are slightly interelated). Depending on whether you use a SAN (shared S3 bucket or not) will determine how this happens. If you do use a shared S3 bucket, then you might follow the Milton and Keynes paradigm (master slave, updates go to the slave, slave becomes master). If you aren't then you might want to follow a star topology or chain topology, with nodes dropping out of the cluster and updating and then re-entering the cluster. If you insist on supporting inference, then this becomes a whole lot more complicated if you deside to forward chain (infact you might need to refactor the architecture quite considerably), but shouldn't be affected by backward chaining. Additionally, take a look at the native features of allegrograph and see how they align.
I'm going to add a 3rd dimension to what @William says.
Before you build a system out you need some sense of what it is going to cost per unit value it delivers.
To take a simple example, Facebook has an ARPU (average revenue per user) of about $5 a year. Salesforce.com has an ARPU that's closer to $1000 per year. Whatever else Facebook does, it can't be spending more than $5 a user in hardware costs. This impacts the scaling of the business because the cost increases as you add users, unlike, say, software development, which can be amortized against a growing user base.
When you fetishize 'scaling' at the expense of efficiency, you're just building a system that will make you go broke faster on a bigger scale. Rather than just burning through your own bank account, you can burn through the bank accounts of some venture capitalists too.
I'm speccing out a system which needs very fast U.I. responsiveness and I can say the production system is not going to use a triple store for certain queries, if it uses one at all, because I need to know these come back in less than a second.
It's important to know what specific scale you have in mind and compare that to what others are doing. Sindice, using a hybrid system, handles 50 billion heterogenous triples. The people at Franz have gotten 1 trillion triples into Allegrograph but that took a machine with 240 cores, 1.2 TB of RAM, two weeks of load time, and certainly close attention from people who understand the product very well.
|
OPCFW_CODE
|
Mon Jun 09 2014 16:49:26 GMT+0800 (China Standard Time)
node-static-site(latest: 0.1.0) A starter project for a static website to be hosted on Heroku, Modulus.io, etc.
puredom-templeton(latest: 1.0.0) Replace puredom's default template() functionality with templeton.
puredom-model(latest: 0.4.4) A synchronized model base class for puredom.
picomarkdown(latest: 0.5.1) Converts basic markdown to HTML.
puredom-sync(latest: 1.2.1) A puredom plugin that lets you chain sequential asynchronous functions.
puredom-taginput(latest: 1.0.0) A puredom plugin that adds support for <input type="tag">
templeton(latest: 2.1.1) It's like the other ones but not at all like the other ones.
puredom-rest(latest: 1.3.1) A high-level network abstraction that makes working with REST APIs simple.
puredom-view(latest: 0.2.0) A simple view-presenter base class/mixin for puredom. Works with puredom-router and puredom-viewmanager
ford.js(latest: 0.7.2) ford.js ======= The library nobody wants but that is for some reason still mayor. [](https://travis-ci.org/developit/ford.js "Build Status") [ What's smaller than an almond? This AMD shim.
zrd3(latest: 0.8.4) Modular ReactJS charts made using d3 chart utilities. Work on project documentation has started [here](https://github.com/esbullington/react-d3/wiki). A few examples of the available charts can be seen below, the others can be [viewed here](https://reacti
modify-babel-preset(latest: 3.2.1) Create a modified babel preset based on an an existing preset.
@zeit/next-workers(latest: 1.0.0) Use [Web Workers](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers) in your [Next.js](https://github.com/zeit/next.js) project using `import`.
htm(latest: 3.0.4) The Tagged Template syntax for Virtual DOM. Only browser-compatible syntax.
web-worker(latest: 1.0.0) Consistent Web Workers in browser and Node.
comlink(latest: 4.3.0) Comlink makes WebWorkers enjoyable
buffer-backed-object(latest: 0.2.2) **`BufferBackedObject` creates objects that are backed by an `ArrayBuffer`**. It takes a schema definition and de/serializes data on-demand using [`DataView`][dataview] under the hood. The goal is to make [`ArrayBuffer`][arraybuffer]s more convenient to u
|
OPCFW_CODE
|
This page holds a distribution set for a Linux 2.6/3.x kernel module for wireless network cards with an Agere HERMES II or HERMES II.5 chipset. Currently only PCMCIA cards are supported. The source for PCI (wl_pci.c) is included but was not changed from the original source provided by Agere Systems.
The module is compiled and tested on Ubuntu 11.10 with a Thomson SpeedTouch 110 Wireless PC CARD (Agere Systems 0110-PC) using a Fujitsu Siemens laptop and WEP 128 protected accesspoint.
This driver also works with WPA Personal (WPA TKIP), at least with HERMES II. WPA was broken in earlier versions. WPA was tested with wpa_supplicant. Ubuntu's NetworkManager cannot be used since it uses wpa_supplicant with ap_scan=1; the driver will only connect using ap_scan=2. The hardware does not support WPA2, so this will never be possible using the HERMES II chipsets.
Use with HERMES II.5 is untested, no such card is available for testing.
The changes enabling WPA are accepted by the Linux kernel maintainers, however it will take some time before a kernel with these changes appear in a Linux distribution. This driver can be used in the mean time if you need WPA.
2014-05-09: Sources updated with kernel 3.x staging source, buildable on Ubuntu 14.04
2012-11-30: Sources synchronized with current 3.x kernel
2011-10-19: WPA now works with wpa_supplicant and ap_scan=2
2011-10-19: Cleaned up, sources synchronized with current 3.0 kernel
2010-06-02: Compiles with 2.6.35-rc1 now
2010-06-02: Fixed compile with 2.6.34, driver had some 2.6.35 changes
2010-05-13: Synchronized with kernel driver, compiles with 2.6.34 now
2010-02-05: Added kernel patch to remove atoi
2009-12-06: Adjusted signal measurements using real world measurements
2009-12-06: Sync with kernel staging driver: metadata cleanup
2009-10-30: Sync with kernel staging driver: fixed Kconfig and added TODO
2009-09-24: Resubmitted the modified driver for the kernel staging area
2009-09-24: Simplified the source to a flat structure
2009-09-21: Submitted the driver for inclusion in the kernel staging area
2009-09-20: Compiles now with Linux kernel 2.6.31
2009-05-31: First public version
The following link points to the driver source: wl_lkm26_722_abg.tar.gz
Below you will find the README I added to the source containing usage instructions and background information.
Henk de Groot (firstname.lastname@example.org)
======================================================================= WLAN driver for cards using the HERMES II and HERMES II.5 chipset HERMES II Card PCMCIA Info: "Agere Systems" "Wireless PC Card Model 0110" Manufacture ID: 0156,0003 HERMES II.5 Card PCMCIA Info: "Linksys" "WCF54G_Wireless-G_CompactFlash_Card" Manufacture ID: 0156,0004 Based on Agere Systems Linux LKM Wireless Driver Source Code, Version 7.22; complies with Open Source BSD License. ======================================================================= DESCRIPTION The software is a modified version of wl_lkm_722_abg.tar.gz from the Agere Systems website, addapted for Ubuntu 9.04. Modified for kernel 2.6 by Henk de Groot <email@example.com> Based on 7.18 version by Andrey Borzenkov <firstname.lastname@example.org> $Revision: 39 $ INSTALLATION Unpack in a new directory. Open a terminal screen. Change directory to the source directory Type command make and wait until it is finshed. Now you have build the module wlags49_h2_cs; this module is meant for a HERMES II card. The driver is tested with a Thomson SpeedTouch 110 Wireless PC Card. For the test Station mode was used with WEP. The driver is supposed to support WAP and as accesspoint that is NOT tested. If you have a card using the HERMES II.5 chip you have to make changes to the Makefile and uncomment -DHERMES25. This will build driver wlags49_h25_cs. Note: You can determine the type with command "pccardctrl info" MANIFID: 0156,0002 = HERMES - not supported by this driver MANIFID: 0156,0003 = HERMES II (Wireless B) MANIFID: 0156,0004 = HERMES II.5 (Wireless B/G) After successful compile type command sudo make install to install the module. Now the card should be recognized. It should be able to configure and use the card with NetworkManager. Wpa_supplicant also works, as does manual configuration using the iwconfig/iwlist programs. Note: I only tested Station mode with WEP but if I didn't break anything WPA and AP mode should also work; note however that WPA was experimental in the original Agere driver! Note: to compile as AP change the makefile and remove the line -DSTA_ONLY \ (or comment it, but in that case make sure to move it after all the flags you want to use) CHANGES The HCF functions to control the card are virtually unchanged, the only changes are meant to fix compiler warnings. The only real change is in HCF_WAIT_WHILE which now has a udelay(2) added to give a small delay. The linux driver files (wl_xxxx.c) are changed in the following ways: - Addaptations of Andrey Borzenkov applied to 7.22 source - Alterations to avoid most HCF_ASSERTs -- Switching interrupts off and on in the HCF -- Bugfixes, things that were apparently wrong like reporting link status change which checked a variable that was not changed in HCF anymore. -- Used on WEP but setting keys via SIOCSIWENCODEEXT was not supported -- Recovery actions added The major problem was the order in which calls can be made. The original looks like a traditional UNIX driver. To call an "ioctl" function you have to "open" the device first to get a handle and after "close" no "ioctl" function can be called anymore. With the 2.6 driver this all changed; the former ioctl functions are now called before "open" and after "close", which was not expected. One of the problems was enable/ disable of interrupts in the HCF. Interrupt handling starts at "open" so if a former "ioctl" routine is called before "open" or after "close" then nothing should be done with interrupt switching in the HCF. Once this was solved most HCF_ASSERTS went away. The last point, recovery actions added, needs some clarification. Starting the card works most of the time, but unfortunately not always. At a few times recovery code was added; when the card starts to misbehave or the communication between the HCF and the card is out of sync and the HCF enters DEFUNCT mode everything is reset and reinitialized. Note, hcf.c contains a lot of documentation. It takes some time but slowly some things become clear. Also some unresolved issues are mentioned in hcf.c, so there are still unknown bugs. The card problems are almost in all cases when starting up and before the first association with an AP, once the card is in operation it seems to stay that way; when debugging no HCF_ASSERTS appear anymore. Note: some HCF_ASSERTS still appear, in a number of cases it is a real error, for example at card removal the missing card is detected. LICENSE The Agere Systems license applies. This is why I include the original README.wlags49. The instructions in that file are bogus now. I also include the man page. Even though setting parameters on the module does not work anymore but it provides some information about all the settings. I have no personal contact with Agere, but others have. Agere agreed to make their software available under the BSD license. This driver is based on the 7.22 version. The following was mailed by Agere to Andrey Borzenkov about this: --- Begin Message --- * From: TJ <tj@xxxxxxxxxxx> * Date: Mon, 05 Feb 2007 19:28:59 +0000 Hi Andrey, I've got some good news for you/us/the world of Hermes :) I got a reply from the legal representative at Agere confirming that their source-code is BSD licensed, and I've included the contents of the email here. I hope this re-assures you so that your excellent work on the drivers can be made widely available for other hackers to work with. Regards, TJ. --------- On Mon, 2007-02-05 at 13:54 -0500, Pathare, Viren M (Viren) wrote: "I would like to confirm that the two drivers; Linux LKM Wireless Driver Source Code, Version 7.18 and Linux LKM Wireless Driver Source Code, Version 7.22 comply with Open Source BSD License. Therefore the source code can be distributed in unmodified or modified form consistent with the terms of the license. The Linux driver architecture was based on two modules, the MSF (Module specific functions) and the HCF (Hardware Control Functions). Included in the HCF is run-time firmware (binary format) which is downloaded into the RAM of the Hermes 1/2/2.5 WMAC. This hex coded firmware is not based on any open source software and hence it is not subject to any Open Source License. The firmware was developed by Agere and runs on the DISC processor embedded within the Hermes 1/2/2.5 Wireless MAC devices. Hope this helps. Sincerely, Viren Pathare Intellectual Property Licensing Manager Agere" --- End Message ---
|
OPCFW_CODE
|
Adapter 2.0 - Separate shims and utility functions into separate files
Currently each browser shim and utility functions is separated into files, a main adapter_core.js uses node.js require() with browserify to bring them into one file (adapter.js) which will now be omitted from being checked in (we will still publish a browserified version on the gh-pages branch).
TODOs/questions:
Logging tests are failing due to it now always being a module, not sure how we should suppress logging when used as a module (maybe have a toggle?).
Test this on more browsers (webview, cordova, ios etc)
Decide if we should only create global variable (window.adapter which is set in the browserifyOptions standalone option) for testing purposes (tests need to have access to this) or always include it (it exposes the browserShim object). We could just have different build rules for "prod" (without it) and "testing" (with it). Should also probably create a global variable for the tests where this object can be defined so that the we do not have to change the global object (adapter) name wherever it's called.
Create build targets per browser or just for Edge (due to size of the shim and it's not widely used yet).
Feedback is welcome, this PR can be seen as a discussion thread with working code.
@alvestrand, @fippo
Just to make it clear, I know the logging test is failing, reason is in the main description.
Yes googlebot, @fippo is the author of these merged PR's.
hi @googlebot, i think we've met before ;-)
I tried to use browserify alias so that we only need to set the path for the files in the gruntfile rather than having a relative path in each file requiring a module but then the module.exports scope is all wrong due to alias does require on the files even though they are not even used in adapter_core.js. Took me a while to figure this out. Used alias initially to make it possible to have conditional require for edge, however I managed with ignore ;).
Also, I created some more folders to make for a cleaner structure.
Left todo's/questions:
[ ] Add logging toggle + fix logging test
[ ] Agree upon this design, if the we want to separate even more (like edge_sd.js) bigger functions into separate modules we can always do that later.
[ ] Change NPM name from webrtc-adapter-test
[ ] How does this work when it's included as a NPM module in a different project? Does it run the Gruntfile.js automagically ?
[ ] merge with the master branch
[ ] add tests for all build targets ensuring they deliver what's intended (edge shim should not be included in the no_edge target etc)
(3): webrtc is still available ,-)
(4): IIUC you're making adapter-core.js the result of the build process and point main in package.json to it. main in package.json should point to a file that requires all the other dependencies.
Got ya, will change back to adapter.js then.
I do not mind using either global or window, as you say, it will never run in a node context anyway. Maybe just leave it for now and change it later if desired?
PTAL at the logging toggle, not sure if that's the best way....
@calebboyd do you have any input here (considering your usage context)?
Also a general question, how would developers expect to use this when included via NPM? Should browserified targets be built upon install and included in the node_modules folder or should it be as is (imported using node require and I guess requireJS (have not tested it))?
@KaptenJansson thanks for pinging. After looking over it briefly It shouldn't present any new issues The pre-existing browser detection problems (version === null) still exist but it looks much more open to extension in that regard.
For the general NPM question, I'd expect to download the built versions when installing the package. And that the built versions would not be available in source control
@calebboyd please check https://github.com/webrtc/adapter/commit/743786375a5a8a28a04437093bd26347b530a75f to see if that meets your expectations.
Currently when doing npm publish in the adapter repo:
Builds all targets
Includes them in the published npm package
When added as dep in another project, it symlinks all adapter.js build artifacts from out/ to node_modules/.bin
@KaptenJansson I think it does, except for 3. I think it should be files instead of bin. I'm pretty bin is for exposing a hash of shell executable command: script pairs for npm run commands. Unless the symlinks have specific benfits.
Ow right, I just thought adding them to .bin would add them to the NPM path and you could then use NPM require for adapter.js or it's shims.
I'm using .npmignore hence I do not really need files (need to use it since I want to include the out/ folder in the npm package but not being part of the source)
@fippo @jan-ivar @calebboyd @alvestrand @juberti I've now merged all the changes up until today. PTAL.
The only thing I'm not sure about at the moment is the NPM name and the version number (currently set to webrtc-adapter and 0.3.0).
webrtc is owned by fippo@
webrtc-adapter is owned by willscott@ (last update in May 2015 and does not contain a whole lot of shimming)
webrtc-adapter-test is the current name but it's only a temporary one.
I suggest we reach out to willscott and ask if we can use webrtc-adapter. Just webrtc sounds to generic to me.
PS I do know that one can just publish on top of someone else package but that's IMHO a bit rude.
Flipping the label due to Googlebot is not able to do so after the fact.
I'm going to merge this now since I've not gotten any objections and want to get this before new PR's line up.
|
GITHUB_ARCHIVE
|
ignition-gui4 homebrew tests seg-faulting
There were 4 tests (UNIT_Application_TEST, UNIT_Helpers_TEST, UNIT_MainWindow_TEST, UNIT_Plugin_TEST) that failed to run in a recent home-brew Jenkins CI job of revision 40a4658bb7933f76bd90544401693acba1c19f6c on the master branch:
https://build.osrfoundation.org/job/ignition_gui-ci-master-homebrew-amd64/25/testReport/
The console output show seg-faults:
1: [----------] 7 tests from ApplicationTest
1: [ RUN ] ApplicationTest.Constructor
1/30 Test #1: UNIT_Application_TEST ..................***Exception: SegFault 0.03 sec
7: [----------] 6 tests from HelpersTest
7: [ RUN ] HelpersTest.humanReadable
7: [ OK ] HelpersTest.humanReadable (0 ms)
7: [ RUN ] HelpersTest.unitFromKey
7: [ OK ] HelpersTest.unitFromKey (0 ms)
7: [ RUN ] HelpersTest.rangeFromKey
7: [ OK ] HelpersTest.rangeFromKey (0 ms)
7: [ RUN ] HelpersTest.stringTypeFromKey
7: [ OK ] HelpersTest.stringTypeFromKey (0 ms)
7: [ RUN ] HelpersTest.findFirstByProperty
7/30 Test #7: UNIT_Helpers_TEST ......................***Exception: SegFault 0.03 sec
11: [----------] 4 tests from MainWindowTest
11: [ RUN ] MainWindowTest.Constructor
11/30 Test #11: UNIT_MainWindow_TEST ...................***Exception: SegFault 0.03 sec
15: [----------] 3 tests from PluginTest
15: [ RUN ] PluginTest.DeleteLater
15/30 Test #15: UNIT_Plugin_TEST .......................***Exception: SegFault 0.03 sec
It seems to work when I use lldb, but I was able to get a backtrace by setting ulimit -c unlimited and then running without lldb. Here's the backtrace from UNIT_Helpers_TEST with relevant lines at Helpers_TEST.cc:125 and Application.cc:85:5:
$ bin/UNIT_Helpers_TEST
Running main() from /Users/scpeters/ws/ignition/src/ign-gui/test/gtest/src/gtest_main.cc
[==========] Running 6 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 6 tests from HelpersTest
[ RUN ] HelpersTest.humanReadable
[ OK ] HelpersTest.humanReadable (0 ms)
[ RUN ] HelpersTest.unitFromKey
[ OK ] HelpersTest.unitFromKey (0 ms)
[ RUN ] HelpersTest.rangeFromKey
[ OK ] HelpersTest.rangeFromKey (0 ms)
[ RUN ] HelpersTest.stringTypeFromKey
[ OK ] HelpersTest.stringTypeFromKey (0 ms)
[ RUN ] HelpersTest.findFirstByProperty
Segmentation fault: 11 (core dumped)
$ lldb -c /cores/core.2
core.27906 core.27930 core.27933 core.28216 core.28484
OSRF-macbook-pro-15:build scpeters$ lldb -c /cores/core.28484
(lldb) target create --core "/cores/core.28484"
Core file '/cores/core.28484' (x86_64) was loaded.
(lldb) bt
* thread #1, stop reason = signal SIGSTOP
* frame #0: 0x00007fff728a6fce libsystem_c.dylib`strrchr + 10
frame #1: 0x000000010dacafeb QtCore`QCoreApplicationPrivate::appName() const + 203
frame #2: 0x000000010dacd395 QtCore`QCoreApplicationPrivate::init() + 101
frame #3: 0x000000010d302f79 QtGui`QGuiApplicationPrivate::init() + 57
frame #4: 0x000000010cd29e1a QtWidgets`QApplicationPrivate::init() + 26
frame #5: 0x000000010bb52c72 libignition-gui4.4.dylib`ignition::gui::Application::Application(this=0x00007ffee41426b8, _argc=<unavailable>, _argv=<unavailable>, _type=kMainWindow) at Application.cc:85:5 [opt]
frame #6: 0x000000010baca0e0 UNIT_Helpers_TEST`HelpersTest_findFirstByProperty_Test::TestBody(this=<unavailable>) at Helpers_TEST.cc:125:15 [opt]
frame #7: 0x000000010bade188 UNIT_Helpers_TEST`void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) [inlined] void testing::internal::HandleSehExceptionsInMethodIfSupported<testing::Test, void>(method=<unavailable>, location=<unavailable>)(), char const*) at gtest.cc:2447:10 [opt]
frame #8: 0x000000010bade179 UNIT_Helpers_TEST`void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(object=<unavailable>, method=<unavailable>, location="the test body")(), char const*) at gtest.cc:2483 [opt]
frame #9: 0x000000010bade081 UNIT_Helpers_TEST`testing::Test::Run(this=0x00007fd473d27fb0) at gtest.cc:2521:5 [opt]
frame #10: 0x000000010badf49c UNIT_Helpers_TEST`testing::TestInfo::Run(this=0x00007fd473d2ace0) at gtest.cc:2697:11 [opt]
frame #11: 0x000000010badfe67 UNIT_Helpers_TEST`testing::TestCase::Run(this=0x00007fd473d2a820) at gtest.cc:2815:28 [opt]
frame #12: 0x000000010baecb67 UNIT_Helpers_TEST`testing::internal::UnitTestImpl::RunAllTests(this=<unavailable>) at gtest.cc:5181:43 [opt]
frame #13: 0x000000010baec618 UNIT_Helpers_TEST`bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) [inlined] bool testing::internal::HandleSehExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(method=<unavailable>, location=<unavailable>)(), char const*) at gtest.cc:2447:10 [opt]
frame #14: 0x000000010baec609 UNIT_Helpers_TEST`bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(object=<unavailable>, method=<unavailable>, location="auxiliary test code (environments or event listeners)")(), char const*) at gtest.cc:2483 [opt]
frame #15: 0x000000010baec58c UNIT_Helpers_TEST`testing::UnitTest::Run(this=0x000000010bb0b8e8) at gtest.cc:4790:10 [opt]
frame #16: 0x000000010baff6da UNIT_Helpers_TEST`main [inlined] RUN_ALL_TESTS() at gtest.h:2341:46 [opt]
frame #17: 0x000000010baff6cd UNIT_Helpers_TEST`main(argc=1, argv=<unavailable>) at gtest_main.cc:36 [opt]
frame #18: 0x00007fff728043d5 libdyld.dylib`start + 1
(lldb)
These tests seem to be fixed, but could be slightly flaky:
UNIT_Helpers_TEST: https://build.osrfoundation.org/job/ignition_gui-ci-ign-gui4-homebrew-amd64/lastCompletedBuild/testReport/(root)/HelpersTest/history/
UNIT_MainWindow_TEST: https://build.osrfoundation.org/job/ignition_gui-ci-ign-gui4-homebrew-amd64/lastCompletedBuild/testReport/(root)/MainWindowTest/
The other tests seem to be failing at the Application's constructor. The failure happens from ign-gui4 onwards, but not on ign-gui2 and ign-gui3:
https://build.osrfoundation.org/job/ignition_gui-ci-ign-gui2-homebrew-amd64/43/testReport/(root)/ApplicationTest/Constructor/history/
https://build.osrfoundation.org/job/ignition_gui-ci-ign-gui3-homebrew-amd64/27/testReport/(root)/ApplicationTest/Constructor/history/
The main change starting from ign-gui4 is that Application inherits from QApplication instead of QGuiApplication. It's still strange to me why the application construction is not failing on other tests though.
Closing since dome is EOL.
|
GITHUB_ARCHIVE
|
Congratulations, EMC (or should we call them Storagezilla?) ... you've invented a feature NetApp has had since they introduced Clustered OnTAP 8.x some number of years ago.
643 publicly visible posts • joined 15 Aug 2008
The "net neutrality" debate is silly, and the fact that it's become politically partisan is downright stupid.
All you have to do is mandate that anyone who supplies last-mile connections, must offer those last-mile connections as unbundled elements. When they did this with DSL in 1996 it was wonderfully successful; if you didn't like your ISP you could switch to another one who had access to the same wires.
Do this for fibre and coax, and all problems are immediately solved. And no partisan bickering either -- both Congress and the President could easily get behind it.
It's time to start considering an Internet connection, rather than a phone line, to be the "typical" communications infrastructure found in homes and businesses. Ask any building manager -- would they rather connect their alarms and lifts to a bunch of dedicated analog lines, or to the building LAN? Sun Microsystems (RIP) figured this out 20 years ago. They said [http://java.sys-con.com/node/35818] that we need to move from a "dial tone" world to a "web tone" world.
Sun was right about a lot of things but they were either ahead of their time or didn't execute well (take your pick). Remember their "network computing" push? Well guess what, kids: we're there.
I expect at some point Apple will begin producing computers with touch-sensitive keys that simply do not move at all. It will make the computer that much thinner, and it will be terrible, but the Apple fanbois will hail it as the greatest keyboard ever, and other manufacturers will ape it in their designs as well.
Google's devices have great appearance but absolute shit build quality
Agreed. I had to repair the microphone in my Nexus twice, the vibrate setting stopped working, and eventually I had to make all voice calls using the speakerphone. Battery life went to crap even after I replaced the battery.
For my latest upgrade I went back to Samsung and am much happier.
Neutrality doesn't belong at the federal *or* state level. It belongs at the central office level. Equal access to last mile networks would eliminate the cable and phone company monopolies. If a provider isn't "neutral" enough for you, you select another.
Two words: UNBUNDLED ELEMENTS. When last mile DSL was offered as an unbundled element, there was a wonderful diversity of carriers. We need the modern FTTH and DOCSIS plants offered as unbundled elements as well.
End of discussion.
When a $250 midrange phone works just as well as an iPhone X, and doesn't have an obvious design flaw (the notch) masquerading as a feature, it doesn't bode well for Apple. Fanboism used to be a multiplier for Apple's sales, but at this point it's pretty much the only thing driving their sales at all. Smartphones are now a commodity item; you can buy them at lower prices and they still last 3-5 years.
Red Hat Storage Server is built around GlusterFS. Red Hat acquired the company that built Gluster quite some time ago. It's baked pretty hard into their infrastructure.
So this product is really just RHSS preinstalled on SuperMicro hardware. No big deal, I've got that exact combination all over my data centers and it works pretty well.
Facebook is not in the business of admitting the truth. Facebook is in the business of creating its own version of the truth through propaganda, through manipulating people, through crafting its "algorithm" to make people think the way Facebook wants them to think.
Facebook is toxic. Facebook is a cancer on the Internet and should be treated like one.
"The Internet interprets censorship as damage, and routes around it." --John Gilmore, 1993 [http://uncensored.citadel.org]
The purpose of an operating system is to load and run software and then get out of the way. Why would Microsoft actually deliver this? Their entire revenue model is built around forcing a bunch of add-ons that nobody asked for. That's why modern corporate desktops now have "Xbox Live" and "Microsoft Mixed Reality Viewer" and a bunch of "phone" apps that never get used. Are they *really* going to give us the barebone OS that most people would prefer?
Somehow I doubt this is going to actually be usable. It'll probably be locked down to Windows Store apps, and/or it will refuse to run web browsers other than Edge, and/or it'll require a usually-on connection to Microsoft's cloud (the old Chromebook trick).
In the developed world, power plugs are mounted at right-angles to the cable so that pulling the cable gently can identify the plug without disconnecting it
I understand that your intent is to declare the superiority of the UK power plug over the US power plug, and I might even agree with you, but in the "developed world" data centers use C13-C14 and C19-C20 connectors regardless of the locale and voltage. And those are never mounted at right angles.
For how long, 30 minutes to tide you through a thunderstorm, or 4-5 days to carry you through a major problem like in Lancaster a couple of years ago?
I have Verizon FiOS, which is the most common FTTP service in the US. The ONT they give us has a battery backup which will keep the whole service running for a few minutes, to get you through a quick power dip; after that it switches to a mode that just keeps the POTS lines running. That mode is supposed to last for about 8 hours.
Yup. Blocking all of their domains at the network level is really the best way to make sure your computer isn't infected by the Facebook cancer. As long as you don't share a network with f*c*book users, it's best to block them at the router if you can.
Another alternative is to install a f*c*book blocking extension in your browser.
When a request for bids is written this way, it almost always means that they've already decided which vendor they're going to select, and they're just "going through the motions" so that no one can accuse them of cronyism.
I certainly hope the selected vendor is not AWS. There's already enough animosity there.
Back when people hung out on a large variety of online communities (or even BBSes before that) things were better; there was no evil Facebook monopoly looming over it all.
Heck, I've been running this one for the last 30 years, and the idea of uncensored free speech is now more relevant than ever: http://uncensored.citadel.org
Remember "the Internet interprets censorship as damage and routes around it"? Well, Facebook is damage of the worst kind.
"Alexa, please notify the Cloud Spying Service that I want to open my front door."
And of course, when your Internet connection goes down, your whole house stops working. Where are the community projects developing open standards for home automation that don't involve Amazon or Google or Apple slurping up all of the telemetry data from your "smart" home?
Microsoft is the company that makes Windows and Office. We like them that way: boring and stodgy. They don't need to branch out, they don't need to take over the whole universe, they aren't going to kill Apple or Google at this point. We like the new non-threatening Microsoft better than the T-Rex of the 1990's.
Windows Mixed Reality is the hottest thing inside the Microsoft Stores that they keep building in malls right across the aisle from the Apple Stores, and it's so appropriate: the few people who wander into the store try it out, but no one ever buys anything there.
The mistake here is in making network neutrality a political issue. Pretty much everyone on both sides is wrong. An easy and permanent solution would be to prohibit last-mile carriers from offering any services on their wires. All they should be permitted to do is sell connectivity between the customer premise and the central office. At that point, any carrier who has the audacity to screw with the network can instantly be substituted with another carrier who offers a better policy. Doesn't matter anymore whether it's data, voice, or video.
Uncoincidentally, this is *exactly* how electricity is delivered in the United States. Provision and distribution are separated.
I see the point of being in this body, but John Deere ???? In my country, France, where you see a tractor, you're sure as heck to have no 3G whatsof*ckingever !
All the world is not France. Here in the US we have very good coverage in rural areas. John Deere uses the network to remotely disable tractors with modified firmware, to make sure the farmers aren't getting their tractors serviced by third parties.
I know the Reg editors lean left and like to bash Mr. Trump, but he's doing the right thing here. The H1-B program was supposed to provide workforce for openings that could not be filled, but we all know that the only thing it was actually used for was to replace American workers with indentured servants from offshore at a fraction of the cost. The whole program is corrupt and unnecessary and should be scrapped.
|
OPCFW_CODE
|
Who Funds Bitcoin Core Developers? Here Are The Facts
Even superheroes like the Bitcoin Core Developers have to eat. One of the most mind-blowing facts about Bitcoin is that a group of volunteers maintain and keep developing the code. Private companies, NGOs, and wealthy Bitcoiners support them via grants and donations. This is all done over the counter, with as much transparency as possible. Here’s a Bitcoin Grants Tracker, for example. Nevertheless, suspicion and conspiracy theories abound. As they should. Don’t trust, verify.
Luckily for us, a Lightning Labs evangelist that goes by the name Lucas de C. Ferreira took it upon himself to curate and divulge a list of the main contributors and their sponsors. Over the years, the code has had more than 800 contributors. However, only seven people serve the role of “maintainers” nowadays. According to the official website, those are:
“Project maintainers have commit access and are responsible for merging patches from contributors. They perform a janitorial role merging patches that the team agrees should be merged. They also act as a final check to ensure that patches are safe and in line with the project goals. The maintainers’ role is by agreement of project contributors.”
Before we get into them, let’s talk about the code.
What Is Bitcoin Core Exactly?
Even though the name might suggest otherwise, Bitcoin Core is only one of the possible implementations of a Bitcoin client software. It’s not mandatory to use it, but, it’s the most popular option at the moment. On the official website, they define it as:
“Bitcoin Core is an open source project which maintains and releases Bitcoin client software called “Bitcoin Core”.It is a direct descendant of the original Bitcoin software client released by Satoshi Nakamoto after he published the famous Bitcoin whitepaper.Bitcoin Core consists of both “full-node” software for fully validating the blockchain as well as a bitcoin wallet.”
The recently hacked Bitcoin.org offers a more practical definition:
“Bitcoin Core is programmed to decide which block chain contains valid transactions. The users of Bitcoin Core only accept transactions for that block chain, making it the Bitcoin block chain that everyone else wants to use.”
Anyone can contribute Bitcoin Core.
Some Bitcoin Core Developers’ Patrons
@mitDCI also funds the work of @tdryja @ajtowns and Cory Fields. Thank you @Melt_Dem @CoinSharesCo @BaillieGifford @reidhoffman @morcosa @suhasdaftuar @jack @DigitalAssets @GalaxyDigital @GalaxyDigitalHQ @saylor @PaxosGlobal @novogratz @DCGco @cameron @tyler and @jlppfeffer— Lucas de C. Ferreira 🇧🇷⚡️ (@lucasdcf) October 18, 2021
Along with Peter, Chaincode also funds work done by @carl_dong @murchandamus and Russ Yanofsky. They also have educational efforts lead by @adamcjonas and @Caralie_C. A lot of the new BTC developers out there went through their seminars or residencies. https://t.co/RgsDIbLcB2— Lucas de C. Ferreira 🇧🇷⚡️ (@lucasdcf) October 18, 2021
Maintainer #04 – Marco Falke, @MarcoFalke, is the QA/Testing Maintainer, and his work is funded by @Okcoin. They just recently renewed their yearly grant to him. It's not the only grant Okcoin gave to Bitcoin developers. We need more Bitcoin companies doing that!— Lucas de C. Ferreira 🇧🇷⚡️ (@lucasdcf) October 18, 2021
More Bitcoin Core Developers’ Patrons
Maintainer #06 – Michael Ford, @fanquake, received grants from @BitMEX in conjunction with @indepreserve. @bitMEX also gave grants to other Bitcoin developers!— Lucas de C. Ferreira 🇧🇷⚡️ (@lucasdcf) October 18, 2021
As you can see, we have an increasingly diverse set of Bitcoin companies and advocates supporting Bitcoin development. More and more companies realizing how important is to contribute to open source BTC development.— Lucas de C. Ferreira 🇧🇷⚡️ (@lucasdcf) October 18, 2021
Conclusions: Are Seven Enough?
For a decentralized, worldwide project, seven maintainers don’t seem like much. However, they’re not the only ones and Bitcoin Core is not the only project. Just look at this list of “people working on Bitcoin and related projects” that you could contribute to. If you’re looking for a way to give back to the Bitcoin project, Ferreira provides these other two options. The Bitcoin Development Fund and Open Sats.
Also, consider that anyone is welcome to participate and submit proposals to the Bitcoin Core project. Further decentralization is in your hands, dear reader. If you have doubts or are curious about how changes are made, or about how the community selects maintainers, check this excellent article out. If you’re one of those shadowy super coders, the Bitcoin Core project might be the career change you’re looking for.Source
|
OPCFW_CODE
|
[MPlayer-G2-dev] vp layer and config
michaelni at gmx.at
Mon Dec 15 12:10:48 CET 2003
On Monday 15 December 2003 11:49, D Richard Felker III wrote:
> On Mon, Dec 15, 2003 at 10:13:02AM +0100, Arpi wrote:
> > - split mp_image to colorspace descriptor (see thread on this list)
> > and buffer descriptor (stride, pointers), maybe a 3rd part containing
> > frame descriptor (frame/field flags, timestamp, etc so info related to
> > the visual content of the image, not the phisical buffer itself, so
> > linear converters (colorspace conf, scale, expand etc) could simply
> > passthru this info and change buffer desc only)
> I've been working on implementing this, but there's one element of
> mp_image_t I'm not sure where to put. Actually this has been bothering
> me for a while now. The exported quant_store (qscale). In G1 the
> pointer just gets copied when passing it on through filters, but this
> is probably between mildly and seriously incorrect, especially with
> out-of-order rendering.
> IMO storing quant table in the framedesc isn't a good idea, since
> quantizers are only valid for the original buffer arrangement.
> Actually, I tend to think they belong in the buffer descriptor, almost
> like a fourth plane. But who should be responsible for allocating and
> freeing the quant plane? IMO the only way it can really work properly
> is to have the same code that allocates the ordinary planes be
> responsible for the quant plane too..
btw, we could also pass other stuff like motion vectors around, these maybe
usefull for fast transcoding
> This would mean:
> When exporting buffers, you just set quant to point at whatever you
> like (as long as it won't be destroyed until the buffer is released).
> When using automatic buffers, the vp layer would allocate quant plane
> for you (but how does it know the right size?) and you have to fill it
> (or else don't mark it as valid).
> When using direct rendering, the target filter has to allocate the
> quant plane (again, how does it determine the size?).
the quant plane is always (width+15)/16 x (height+15)/16 big, but we could use
int bpp; //bits per pixel (32bit for 2x16bit motion vectors)
int log2_subsample; // like chroma_w/h_shift
int offset; //x/y offsets of the 0,0 sample relative to the luma plane
in 1/2 sample precission
level[i]= get_vlc(); i+=get_vlc(); (violates patent EP0266049)
median(mv[y-1][x], mv[y][x-1], mv[y+1][x+1]); (violates patent #5,905,535)
buf[i]= qp - buf[i-1]; (violates patent #?)
for more examples, see http://mplayerhq.hu/~michael/patent.html
stop it, see http://petition.eurolinux.org & http://petition.ffii.org/eubsa/en
More information about the MPlayer-G2-dev
|
OPCFW_CODE
|
- 100% plagiarism-free papers
- Prices starting at $10/page
- Writers are native English speakers
- 100% satisfaction guarantee
- Free title and reference pages
- Attractive discount policy
This company created in 2001
- Free Unlimited Revisions
- 24/7 Customer Support
- Team of professional English writers and Editors
- Attractive Discount System
- Plagiarism Free Papers
- Confidentiality and Authenticity
- Money back guarantee
- Direct Contact with Writer
This company created in 2004
- Writing original dissertations from scratch
- Writing any part of dissertation per your instructions
- Editing/proofreading of your dissertation by professional editors
- No plagiarism – guaranteed!
no ready-made papers, only original writing
- 24/7 support team
help you need while writing a dissertation
- Highly qualified writers
only native speakers with PhD degrees
- Affordable pricing system
This company created in 2010
On error resume foxpro
Structured Error Handling in VFP 8: the application occasionally crashed with this error, usually on system startup. (problem with set next statement, no retry, no stackinfo, sometimes incomplete info where the error occured .'d never triedreturn [to functionname] before, you learn something about foxpro every day! consider this example:Here is a bug that i couldn't find documented anywhere:This is a visual foxpro bug about. if any error occurs in the program, the application will execute catch statememts, then finally section and then exit. - doc: on error not invoked when error occurs in error event.
On Error - Visual FoxPro Wikionly weird catch to this, that i see, is, if you call a method of a class inside the try block, and an error occurs, vfp is going to run the code in the error method of that class first. other questions tagged vba visual-foxpro dbf linked-tables ms-access-2016 or ask your own question. i believe it is only other catch statements until it hits the vfp error handler, but i am not sure. if you do the same thing in the catch block, then you get the current stack trace, any ideas on how to get the stack trace at the time of the error?: bug: "file is in use" error when opening an encrypted table. any code after the endtry does not execute if there is an exception or error.
ActiveX component can't create Object Error? Check 64 bit Statusto either handle their own specific errors or pass errors up the. error logging, and the user has the choice to debug, continue,Retry, cancel, or quit. issue is that if you are trying to implement structured error handling, and for example, you have a business object that needs to throw up an exception to the client code's catch, having an active error() event in the class will mess up that interaction between the objects. conform to a strict convention for propagating errors up the. if the error isn't handled by a catch in that try block, then vfp runs the code in the finally block and clears out code in the call stack until it finds something to handle the error. but, for an existing project that uses global and error() method you can still make use of some method level try.
Try Catch - Visual FoxPro Wikiusing on error i had to use com return error() to "kill" the execution, but this left some files open (.: prb: set reprocess and on error routines; "attempting to lock".. if there is an error, but it's not caught, then finally code does execute, and at the endtry, execution goes directly to the next higher catch, whether it is in the same routine or in a calling routine. excellent introduction to vfp's error handling facilities,Explaining in detail how to.: prb: "error 1709" when several users are using the same database., try/catch would give us everything what on error gives us :-) on error within try/catch would possibly break my code :-( my understanding was that the on error will be ignored within a try/catch block -- tom.
- Research proposal on exclusive breastfeeding
Vba - MS-Access Could not find installable ISAM (vfpoledb) - Stacklooks like it should work is to grab the stack info in the error event and throw it (then it will be in exception. try-catch-endtry is for me as global errorhandling not acceptable because debugging is with on error easier. has nesting capabilities that would be difficult or impossible to implement with on error. class level error handling is an artifact of the old way, and try. works better as a global error handler than on error does. exclusively (perhaps there would still be a use for error()), but you have to go through and make sure it won't break anything on an existing app.
- Resume en espanol del delantal blanco
- Scannable version of resume
- Search resume access of yacht
ON ERROR Command
VFP Error Handling Referenceshow to build a simple global error handling class that. there is no way to recover from error, as it it possible with on error global procedure - bogdan zamfir. this way you see more related code that is error-prone (instancing word, then loading a document), that is wrappeed inside an error handler.) (from the help) "visual foxpro supports set next statement debugging functionality only within a single code block. the old on error command, not the newer error event method) and shows a. i'm trying to figure out is why would you want to use error() or on error.
Trouble-shooting a Visual FoxPro application. i know there was an error, but i don't know what do do with it, so. entries for the following vfp commands: assert, messagebox( ),On shutdown, return, set compatible, set debug, set echo, set step, clear events. to consider when thinking of try/catch is that you can "throw" your own exceptions (that are not neccessary errors) and still catch them with try/catch. i would like however that ms mark on error and the error method in a class as obsolete. - how an on error routine affects the error event.: prb: on error not called when trigger fails in browse or grid.
Structured Error Handling in VFP 8
Tables: Relink tables from different datasourcesthe program scans the many files that make up the application's visual foxpro database whenever any of the files is opened. again, i have not done much experimenting, but my understanding is that when an error occurs in a try block, no further code is executed in that block beyond the error. that didn't explain why the error was only occurring intermittently.: bug: define popup prompt can cause visual foxpro to quit.. this code will execute whether errors between try and catch are. - how to use the on error command and the error event.
On Error - Visual FoxPro Wiki
Download Microsoft OLE DB Provider for Visual FoxPro 9.0 from: this is caused by a somewhat strange interaction between visual foxpro and the pdf driver. i wrote above, i use on error for global error-handling and try. - bug: on error does not trap "alias not found" error. silly thing is that none of the foxpro database files need to be scanned, as they can't contain executable code. everywhere and kicking out on error completely without having any advantages.(unless you issue comreturnerror in the catch section) -- stuart dunkeld.
ActiveX component can't create Object Error? Check 64 bit Status
if you are running a visual foxpro application, you might come across some of these same issues. i am still a little fuzzy myself about what qualifies to handle the error on the way up the stack.* log stackinfo/errorinfo from where the error occured (perhaps a function call in a script). are often cases when one access database has linked tables from different data sources (odbc, excel, foxpro etc). am attempted to create linked tables in ms access 2016 using the visual foxpro ole db driver. see if file can be used exclusive w/o causing error.
: “could not find installable isam”2loading csv “could not find installable isam”0vba error :could not find installable isam0could not find installable isam in excel 2007 vba0vba query csv files “could not find installable isam” error1could not find installable isam excel 20100could not find installable isam vba1oledb connection to access database with password: “could not find installable isam”1how do you link ms access 2016 to visual fox pro dbf tables as linked tables? looks fine, and i actually already have a class doing the same job as thescriptingwrapper so would only have to add the error event code.) we (our "framework") log all errors in a table with all infos we can get.: prb: "error writing to file" error using a table on a cd-rom. was just experimenting a little, and the main problem is that control returns to the doscripting method after the error event fires., the on error would be within the try/catch, with the error handler getting the stack info and then throwing.
Try Catch - Visual FoxPro Wiki
. for me at least - currently i use a global on error() that calls a routine to write the error particulars to a table and exit the user from the app. you must remedy the cause of the error before restarting'). vfp commands category vfp 8 category vfp 8 new features category error handling. then again, it would be pretty easy to pull the on error out of your main and wrap you read events with a try. an error happens in a click-event of a button you get (in o_err. could be a problem when using try/catch as global error handling (to replace a global on error code).
vba - MS-Access Could not find installable ISAM (vfpoledb) - Stack
foxpro report designer has an unfortunate tendency to store the so-called printer environment within the report. doug shows how the global error handler can be used to., from the comments and code example above the light may be coming on, tell me if i have this right - when a throw is issued, an error is thrown and code execution in the current method/procedure may be canceled (depending on the error handler) and control is then returned to the calling method/procedure on the call stack, that is except for the code in the finally section.), when i tried this though, the exception doesn't get caught by the try/catch and gives a "user thrown error" c/s/i message. / catch would allow wrapping error prone areas and handle without completely exiting the app - like the word instantiation above - that can fail for reasons outside the app, etc. i just imagine the nighmare debugging codes that mix the 3 styles on error handling.
code in the finally section executes all the time, exception or not (except in rare cases, like com error). if there is no error method code, then it will go to the catch of the current try. can't throw out of a dll, but you can let the error method trap the exception and call comreturnerror. the error is triggered by visual foxpro's createobject() function when it fails to find the component in question. - bug: on error not called when update conflict occurs in grid. if there was no error, you would expect it to output "a","b","c","d".
How it works
STEP 1 Submit your order
STEP 2 Pay
STEP 3 Approve preview
STEP 4 Download
|
OPCFW_CODE
|
Open Data in Developing Countries
February 27, 2013 Editor 0
iHub Research is pleased to formally announce the commencement of a new project named “Understanding the impacts of Code4Kenya open data applications and services”. This research is part of a two –year research program titled ‘Exploring the Emerging Impacts of Open Data in Developing Countries‘ (or Open Data in Developing Countries- ODDC), coordinated by the World Wide Web Foundation and funded through grant 107075 from the International Development Research Centre (IDRC, Canada).
The aim of the research program is to understand the dynamics of both open data policy and practice across the developing world, paying attention to the dynamics of open data use across different geographies and contexts, and looking at both positive impacts of open data, and unintended consequences. Through southern-led research cases, it seeks to develop a deeper understanding of developing country contexts and to determine the potential benefits and challenges of open data in such locations, supporting comparisons and contrasts to be drawn with early open data models from the US (data.gov) and the UK (data.gov.uk).
Overall, ODDC will conduct 17 independent case studies in 14 countries and iHub Research’s project will explore emerging impacts of open Data in Kenya, alongside another local study. This will also form part 2 of a previous study on the same initiative.
The Kenya Open Data Initiative was launched in July 2011 and hosts more than 430 government datasets on the opendata.go.ke portal which has received hundreds of thousands of views and more than 5,000 dataset downloads. Despite this, use of Kenya’s open datasets has fallen short of initial expectations, with only a minority of the population having ever accessed the platform. The Code4Kenya project is an outreach initiative, supporting intermediaries to work with datasets and to develop applications and services which make data more accessible and that promote transparency, accountability, citizen engagement and improved public service delivery.
This project will explore the long-term impacts of this outreach initiative, focusing particularly on work relating to counties, health, and education sectors. This will contribute to understanding of the role that technology intermediaries play in facilitating impacts from open data, and to an assessment of the value of interventions that stimulate and incubate tech community uses of open data.
This research will run for a period of 9 months with regular updates posted on our blog as well as the program’s website.
Read more about the research program and the Open Data Research Network here.
- Entrepreneurship and Comparative Advantage
- Negotiating Win-Win Strategic Innovation Partnerships
- Medicines hitchhike with Coca Cola to save lives
- Emerging CSIR researcher receives 2013 JD Roberts Award
- Coffee genome sheds light on the evolution of caffeine
- Managing knowledge in IT projects: a framework for enterprise system implementation
Subscribe to our stories
- Entrepreneurial Alertness, Innovation Modes, And Business Models in Small- And Medium-Sized Enterprises December 30, 2021
- The Strategic Role of Design in Driving Digital Innovation June 10, 2021
- Correction to: Hybrid mosquitoes? Evidence from rural Tanzania on how local communities conceptualize and respond to modified mosquitoes as a tool for malaria control June 10, 2021
- BRIEF FOCUS: Optimal spacing for groundnuts in smallholder farming systems June 9, 2021
- COVID-19 pandemic: impacts on the achievements of Sustainable Development Goals in Africa June 9, 2021
|
OPCFW_CODE
|
Thomas Sundberg is a consultant at Sigma in Stockholm, Sweden. He has a Masters degree in Computer Science from the Royal Institute of Technology, KTH, in Stockholm. Thomas has been working as a developer for more than 20 years. He has taught programming at The Royal Institute of Technology, KTH, one the leading technical universities in Sweden. Thomas has developed an obsession for technical excellence. This translates to Software Craftsmanship, Clean Code, Testing and Automation.
Thomas is also a speaker at different conferences and developer venues, including eXtreme Programming XP, Agila Sverige, Öredev, Turku Agile Day, Agile Central Europe, GeeCON, Java Developer Day, Agile By Example, Scandinavian Developer Conference and Agile Testing Days. Thomas runs a blog where he writes about programming, Software craftsmanship and whatever problem he wants to share a solution about. It can be found at http://thomassundberg.wordpress.com/
Software development is an industry that has been around for a little bit more than 50 years. There are a lot of really smart people working in this industry. How is it possible that these smart people are so good at failing? How can we as an industry continue year after year with failing or really slow development?
The answer is embarrassingly easy, we tend to apply methods and techniques we don't understand or that don't bring any value.
There are many anti patterns that can be applied to software projects. They tend to fall into these categories: Architectural, Development, User interface, Organisational, Management.
We will look at a selection of these anti patterns and see why they are so bad and the problems they contribute with.
Just looking at bad examples may be depressing. But if you can identify a bad example in your own project or product then you have a chance to do something about it. Understanding and accepting that you have a problem is always the first step to fix it.
Cucumber has been around a long time in the Ruby world. It is a popular tool that allows development teams to describe how software should behave in plain text. The text is written in a business-readable domain-specific language and serves as documentation, automated test and development-aid - all rolled into one format. Cucumber-JVM has been available to the Java community since March 2012.
I will walk you through what Behaviour Driven Development (BDD) is and how Cucumber-JVM can be used as a tool to actually implement the desired behaviour using the format:
- Given a system setup
- When we perform an action
- Then we expect a specific result
I will also develop a very simple example where we can see how a model can grow from the desired external behaviour to the very core of a system. Another example will demonstrate how a web application can be verified using Cucumber and Selenium.
Finally I will show you how Cucumber can be fitted into your continuous integration/delivery system using Maven and thus be a crucial part of your automated acceptance test suite.
Keywords:Cucumber-JVM, Automated testing, Continuous integration, Maven
|
OPCFW_CODE
|
Every Programmer loves free eBook, even more, if it comes from renowned technical book publishers like Oreilly or Manning. In the last article, I have shared some of the best free Java programming books and today I am going to share some of the equally best free Python Programming books. Best book for python programming pdf.
You possess chosen, or have been chosen to register to our subréddit.You've arrive to the correct place to talk about Half-Life. Rules. Submissions must be directly associated to the Half-Life franchise. Important Valve and Vapor news may end up being permitted per Moderator discretion. Articles must become high quality. Low-value submissions that may take away from significant discussion are usually not allowed. Examples: Memes (age.g.
Use the 'Use' key, probably 'E'. Almost every door opens up by just running up against it, however, the pipe exit in Surface Tension opens with the use key. I spent hours trying to figure this one out and had many say it was a bug, but I found one place that finally gave the answer. Surface Tension is the twelfth chapter of Half-Life. The chapter features expansive outdoor sections and large-scale firefights as the primary backdrop.
Picture macros, 'One-liner' humor), avenues, generic Let's Plays, reposts, junk e-mail, rants, etc. All articles and responses must follow. Please become sincere to others. Individual attacks, bigotry, battling words, otherwise inappropriate conduct or content, comments that slander or demean a particular consumer or group of customers will be removed.
No personal info, in posts or remarks. Stalking, harassment, witch looking, trolling, brigading, ddósing, or doxxing wiIl not really be tolerated and will effect in a ban. No porno or gore. All additional NSFW content and responses must end up being tagged. Publishing uncalled for components may end result in an immediate and long lasting bar. When posting fan generated articles, try out to credit the original designer and link to the primary resource whenever achievable.
Do not pass other's function off as your own.Offenders of these rules may become prohibited without warning. Useful ResourcesHalf-Life Web sites. For me, my favorite part of Half-Life had been Unforeseen Implications/Office complex and Surface tension. First, we possess survival horror and then we have full on motion film with helicopters ánd tanks. My least favourite part of Half-life has been On a Rail and Xen. I don't actually like getting shot at when I'm inside a teach that has no cover up or anything, I experienced to vacation resort to getting out of the teach which had been obviously not the intention of valve, Wow and wear't overlook Xen, the whole thing seems unfinished and the degree design is usually pretty bad in these components.For Half-Lifé 2 I gained't proceed into too much detail.Best-RavenholmWorst-Nóva Prospekt. Hmmm.thére are so numerous great components in each video game that narrówing it down tó just one or two parts is difficult.In common, I enjoy how both video games start off with some sort of 'run of the mill' area before items really kick off.
I óut that in estimates because a top secret government analysis lab its dangerous waste materials, mech, and teleportation experiments is run of the mill within that planet's context and exact same with shifting through peculiar checkpoints and intérrogation chambers.I also really appreciate the platforming-puzzle areas as well. HL2's level where you have got to move across the sand without coming in contact with it has been their edition of the flooring is lava and I treasured it. Making use of the Gravity Weapon to create platforms had been great. Formally doesn't create sense since the ant elephants experience the vibrations and come upward and attack.so placing a box and bouncing onto it and running across it can be 100% heading to trigger vibrations bit who cares, I cherished that component hahaBut yeah.quite very much the moments that actually balance filming and platforming.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Runtime.Versioning;
using System.Text;
using NLog;
using OpenKh.Kh2;
namespace OpenKh.Command.MapGen.Utils
{
class ImageResizer
{
private static readonly int[] AllowedTextureWidthList = new int[] { 128, 256, 512 };
private static readonly int[] AllowedTextureHeightList = new int[] { 64, 128, 256, 512 };
private static Logger logger = LogManager.GetCurrentClassLogger();
public static Imgd NormalizeImageSize(Imgd imgd)
{
if (false
|| !AllowedTextureWidthList.Contains(imgd.Size.Width)
|| !AllowedTextureHeightList.Contains(imgd.Size.Height)
)
{
var newWidth = (imgd.Size.Width <= 128) ? 128
: (imgd.Size.Width <= 256) ? 256
: 512;
var newHeight = (imgd.Size.Height <= 64) ? 64
: (imgd.Size.Height <= 128) ? 128
: (imgd.Size.Height <= 256) ? 256
: 512;
var width = imgd.Size.Width;
var height = imgd.Size.Height;
logger.Info($"This image will be resized due to comformance of texture size. Resizing from ({width}, {height}) to ({newWidth}, {newHeight}).");
if (imgd.PixelFormat == Imaging.PixelFormat.Indexed8)
{
var bits = imgd.GetData();
var newBits = new byte[newWidth * newHeight];
var dstToSrcX = Enumerable.Range(0, newWidth)
.Select(xPos => (int)(xPos / (float)newWidth * width))
.ToArray();
var dstToSrcY = Enumerable.Range(0, newHeight)
.Select(yPos => (int)(yPos / (float)newHeight * height))
.ToArray();
for (var y = 0; y < newHeight; y++)
{
var dstOffset = newWidth * y;
var srcOffset = width * dstToSrcY[y];
for (var x = 0; x < newWidth; x++)
{
newBits[dstOffset + x] = bits[srcOffset + dstToSrcX[x]];
}
}
return new Imgd(
new Size(newWidth, newHeight),
Imaging.PixelFormat.Indexed8,
newBits,
imgd.GetClut(),
false
);
}
else
{
throw new NotSupportedException($"{imgd}");
}
}
else
{
return imgd;
}
}
}
}
|
STACK_EDU
|
I promised I’d write a post with more details on key technology that I thought you could leverage within the MSIM program. Well, it took me a little bit longer than what I thought it would and this post isn’t as refined as I would like it to be… but here is (please forgive me if there are some grammatical errors here… I used Macspeech Dictate to write this post). Enjoy!
MICROSOFT OFFICE ONENOTE
Microsoft office OneNote is a fantastic application that is part of the office suite of applications. OneNote really is a note taking tool it was designed to be either used with your traditional keyboard and mouse input or used with the applet type application where you can draw and scribble notes like you would on a white board or piece of paper. OneNote has some key features that I love and use them everyday work. In no particular order here are my favorite features:
1. Audio Recording: Microsoft OneNote allows you to record audio through your laptops built in microphone. This in and of itself is not a real compelling feature, what makes it compelling is that as you’re recording audio and typing at the same time OneNote is indexing the audio synchronized to the notes that your typing at the time. So what that means is that if you were to go back later on after you’re done recording your audio and taking notes and you had a question about some note that you took the word very detailed with what you type you could hover your mouse over that particular word and it would pull up audio controls (meaning play, pause, stop etc.) and allow you to click play in the audio would start just a few seconds prior to when you type that word in your notes. It’s difficult to describe here in words so I’ve included a link to a YouTube video that describes what I’m saying.
o General Feature overview including search: http://bit.ly/doCnl9
o Audio Recording Linked to your Notes: http://bit.ly/bKQjWm
o Terrible Video but if you jump to 2:10 you can see the audio being played back the text note that was taken at that time in the recording: http://bit.ly/a9WuNS
2. Search: Microsoft office OneNote has a fantastic search feature that allows you to search text throughout all your notebooks tabs and pages. In addition to searching the text that you typed in OneNote the searches you perform will also search for text in any screenshots that he pasted in as well as, if you have the option turned on, words in any audio recording that you have recorded.
3. Collaboration: This is a great feature when you’re on the same network. OneNote allows you to share a notebook or a particular page of a notebook with colleagues on the same network. When you’re sharing you can see near real-time updates as both you and your colleagues are simultaneously entering text into one document (actually inserting anything including drawings and images). This is another feature that somewhat hard to describe in words but the link listed above as general feature overview describes this well with video.
Evernote is another great note taking tool. It’s not nearly as robust as Microsoft one note but it’s an Internet connected platform or the storage for your notes is actually in the cloud. What this means is that the notes that you take with Evernote will be accessible to you via your mobile device, your work computer, your home computer, and pretty much anywhere else you’ll need to get access to notes. It also has a fantastic search feature the searches are actually performed on the server on your local client laptop or desktop. The benefit of this is that Evernote has some serious software running in the background that can convert handwritten notes and pictures of things like business cards into searchable text. It also works really really well. Evernote has a free version that is very usable for your everyday note taking needs they also have a paid version that is more robust and give you more CPU cycles more quickly to perform the algorithms that convert those handwritten notes into searchable text than the free user. Evernote also has some pretty useful extensions for all the major Internet browsers that actually allows you to clip text and images from a webpage directly into Evernote and make all that searchable.
Skype is an application that allows you to share your desktop between two users, do videoconferencing, or make voice over IP phone calls over the Internet. My wife and I were the first ones to use this in the MSIM program, calling into the classroom through one of our classmate’s laptops to listen and lecture while we are out-of-state attending a wedding. At that time Skype didn’t have some of the features that it has today that would be very useful in the MSIM program. The first feature that I wish they had was desktop sharing using Skype today you can call another user over the Skype network and allow them to see your desktop or for you to see and control their desktop. The second feature I wish they had is one that I haven’t yet used with Skype but just came out as part of Skype Beta, and that feature is group video chat. This feature allows a number of users to do videoconferencing at once, so you’d be able to see your two other colleagues and they would be able to see you all at once (currently videoconferencing is only enabled in the stable version of Skype on a one-to-one basis).
Google wave is super hard to describe so other than saying that it’s a great tool for real-time collaboration amongst team members on a point at the following link:
- http://bit.ly/9Dqvch –> Lifehacker.com post on the subject
- http://bit.ly/bZQFfn –> 2 Minute Video Explaining Google Wave.
Google Docs is another great application for individual document creation, document management, document storage, and team document collaboration. I think Google docs can be used for MSIM students to collaborate on really any document type that is supported by Google docs (text documents, presentations, spreadsheets etc.). We didn’t have Google docs when we look to the program instead we used some free wiki ca
pability to collaborate on our papers and joint assignments for the program. However, if I were going to the program today I think this would be my key application that I would use for collaborating with my teammates on documents.
MindMeister is a tool for mind mapping or interactive brainstorming. There’s a pretty decent free version that MindMeister offers on their website.
DimDim is another tool that allows you desktop sharing. They have a free version of the tool that works with up to 20 users or so simultaneously.
Dropbox is online file storage in the cloud. Dropbox would be useful for storing both personal documents that you use throughout the program as well as any file that you want to share with your teammates over the course of the program. This is because Dropbox allows you to create folders right on your desktop for any file type and store those files both locally and in the cloud. All of these files are backed up and revisions are kept so you can rollback to a previous version if you need to. Once your files are placed in the Dropbox folder you can then control access to who can read or edit those files. Dropbox currently offers 2 GB of storage for free. You also get an additional 250mb of storage for each person that you refer that joins dropbox… and just as full disclosure that’s what I’ve done with the link up above.
|
OPCFW_CODE
|
The comparison, be it of two objects, ideas, or technologies is a thief of joy, but at the same, it’s necessary to identify the perfect ones that match with the goals of businesses or individuals. Similarly, the story of every programing language is unique and incomparable. It’s like the sun and moon that shines but at different time.
However, when it comes to development, the programmers are in the race to create bigger and better apps for the universe, where the selection of one programming language can make or break the success. The selection battle is not less than the top rivalry of Coke and Pepsi where each of the two brands excels in different aspects, have their existing fanbase, and leverage different strategies to stay on the top.
The case of age-old Java and Python is no different where the two languages have managed to survive amidst the advent of new languages and frequent updates. Again, it’s a coding battle, where one defeats other and only one will be declared as a winner.
So, how the developers make this decision? What’s will be the basis of the decision? How one language outweighs other in terms of technical difference, tools, community, innovation, and a lot more? Let’s dive in to get the answer to every possible question.
Before we look deeper into the differences between the two languages, let’s walk through the fundamentals of both languages:
The statically-typed object-oriented language that’s first appeared in 1995 is preferred for dynamic coding. The write-once-run-anywhere language is designed to run on any platform with little dependencies. The open-source and distributed language support multi-threading programming concept and jam-packed with unique features that makes comprehensive web applications development plain-sailing.
The server-side programming language firstly appeared in 1990 aimed to bridge the gap between C and the Shell. Later, with constant updates, the use of the language is extended to meet various web applications development needs. The conservative language is a dynamically-typed general-purpose programming language that requires developers to use less code to describe more. The open-source language has a built-in list and dictionary data structures that enable fast runtime data structures construction effortlessly.
Moreover, the language has also gained the title of ‘Programming Language Hall of Fame’ for 2018 due to enormous advantages it offers.
- Trends and Popularity
Replacing the old kid- Java on the tech block is not possible for the new entrants such as Kotlin, but the popularity (not potential) of the language has certainly gone southwards. On the other hand, Python is excelling in the market with enormous growth and usage in the development space.
According to Github’s Octoverse report, among all of the hundreds of programming languages in which the developers write code, Java still secures the second position with millions of contributors in public and private repositories. Python shot up to #3 among top languages used on the platform.
Stackoverflow’s survey report illustrates the similar scene where Python spotted as the fast-growing language, but Java ranked at the top with 45% of the developers contributing followed by Python at 39%.
The result: Java gains an upper hand over Python.
In Java, the JVM (Java virtual machine) provides the runtime environment to run the code which converts Java bytecode into machine language that can be otherwise compiled at the execution time. The simple architecture leads to a seamless experience for the java developers during development.
In Python, the interpreter translates thpython vs java differencese source code into machine-independent bytecode and then stores the bytecode file in a folder. When the program is run, no translation is done again and the bytecodes are employed which are then shipped to PVM (Python virtual machine) where the code executes.
The result: It’s neutral.
It’s difficult to declare a winner in the performance equation because the languages don’t have speed, instead, they have semantics. So, for the comparison in the speed section, the language execution speed, program implementation speed, and third-party libraries performance are considered.
The just-in-time compiler of Java compiles bytecode into native machine code in the real-time, which is directly called by the JVM. It means no code interpretation is required during the compilation process that makes Java swifter, smarter and faster.
On the other hand, Python is an interpreter-based programming language where the codes are interpreted based on the variable time and datatype identification is also done in the runtime that deteriorates the language performance.
The result: Due to virtual machine execution and optimizations, Java churns out high-performance speed as opposed to Python.
- Learning curve
Java is ruling the coding space, but Python is quickly gaining the major traction in the market with an increasing number of computer science departments, and academic institutions teaching Python language. From the very beginning, Python was created as an easy-to-use and easy-to-understand language that offers intuitive learning experience from the syntactical perspective. Additionally, Python abstracts a lot of complexity that takes off a lot of heavy work of the Python developers.
Plus, the dynamically-typed nature of the Python makes it very flexible. The myriad of the ways to solve the problem and nature of forgiving the errors makes it a perfect fit for the novice players.
The result: The low learning curve makes Python great for the rookies.
- Code readability
Getting more with less, it completely stands true in the sense of writing the number of lines of code. The web developers always prefer to create web applications with less number of code lines. The simple, concise, and elegant syntax plays a critical role in bringing simplicity to the language.
In the case of Java, the developers have to write a lot of code to perform the same thing that can be done in a few lines of code due to strict syntax rules. The variable types need to be explicitly declared, the high usage of curly braces, the errors displayed during code compilation when a semicolon is not used, the plethora of indentation rules, and other that makes developers don’t find the language intuitive.
On the flip side, Python’s syntax is similar to the English language where just indentation is essential to read the code more clearly and even, the statements can be ended anywhere without a semicolon. The comparatively fewer lines of code increase the developer’s productivity.
The result: Python is the best language for the developers when it comes to simplicity or verbosity.
The database layers simplify the access to data stored in the database by separating the business logic and presentation code.
In Java, the Java database connectivity (JDBC) is active and keeps the database access code separated from the rest of code so that changes made in database access or code won’t affect the rest of the application. That’s why enterprises prefer Java for easy integration with other databases such as SQL and SQOOP.
Python’s database access layers are inactive and feeble as compared to Java, which makes the Python not-a-good-fit for the database-critical applications.
The result: Java wins by a great margin.
Presently, without embracing advanced technologies, it’s impossible to make the application serve the modern users’ needs, and thrive in the dynamic market. Both frameworks have ML libraries for advanced web applications development.
A new framework-Kite developed using Python language is a great instance of it. The latest innovation in Python works as an AI-driven code completion tool that aims to reduce the burden on developers and let them write better code. Google’s open-source TensorFlow framework leveraged the capability of Python to create greater avenues of development and exploration.
On the other hand, Java sees innovation on a small scale. It’s mainly used for industrial applications, especially B2B applications and improves efficiency in the design process. Java can be used as a viable solution for VR/AR app development and design, but its extensive use is still limited to enterprise-cloud apps and scale-driven applications.
The result: More innovations leveraging emerging technologies are witnessed in Python.
Legacy means something that has become outdated or obsolete and no longer supported. The legacy language inability to convert it into a modern language or use some of its parts in the new development environment makes the work of developers difficult.
Java has become a legacy that can’t be evolved into a modern language while keeping backward compatibility. However, the JVM ecosystem of Java has helped in the creation of good programming languages such as Kotlin, Scala, and Clojure.
Python has comparatively less number of legacy issues with the ability to make changes to its legacy system. It enables making a gradual shift to the system without needing to write the whole system again like Java.
The result: Python beats Java.
Summing it up
No one, neither Java nor Python sweep the board when compared against various parameters. Both languages perform best in the realm with their pros and cons. The answer for the best language or the declaration of the winner in the coding battle completely relies on the application type, and the tools required to build the app from scratch.
Don’t judge the book by its cover. Do in-depth research about what the language has in the store to offer, check how it matches with your web application development requirements, and then take the decision. All the best!
|
OPCFW_CODE
|
Is it possible to force Oracle to apply View Merging in query with User Defined Views and Functions?
The problem is that - I am forced to use View instead of Table (this is the 1st case in the below list).
1. I run the query on View with Function in Where clause as a user that is not owner of those Objects (View and Function):
select count(*) from VW_BOOK b where contains(b.title, fn_textconverter('Eden'), 1) > 0;
or
select count(*) from VW_BOOK b where contains(b.title, (select fn_textconverter('Eden') from dual), 1) > 0;
So, the above queries are very slow because Oracle optimizer ignores indexes in Table and pushes predicates into View.
2. When I run the same query but instead of using View I use Table, it executes very fast and applies indexes created for table:
select count(*) from TB_BOOK b where contains(b.title, fn_textconverter('Eden'), 1) > 0;
3. The same good result I see for the the query when I use View, but instead of function I put the result of function without calling the function itself:
select count(*) from VW_BOOK b where contains(b.title, '\E\d\e\n%', 1) > 0;
When I try setting optimizer_secure_view_merging to False or granting to my user MERGE VIEW privilege, the 1st case from the above list executes very fast bypassing pushing predicates into View.
As I cannot add additional privileges to user because of policy and cannot change Oracle parameters as well, questions arise:
Is it possible to force Oracle to merge User Defined Views and Functions regardless of whether I have set optimizer_secure_view_merging parameter as TRUE and do not have privilege MERGE VIEW?
Maybe there are ways to set or recreate function as "Secured", so Oracle could safely merge it with my View?
Can you create a pass-through version of fn_textconverter in the same schema as VW_BOOK? If the function is in the same schema it should automatically trusted for view merging, even it it calls functions in other schemas.
@MatthewMcPeak, actually all the objects are on the same schema. The only thing is that I should login as another user and after I login triggered logon trigger which changes "current_schema" parameter to the schema which owns all the objects.
And why can't you GRANT MERGE VIEW ON VW_BOOK TO the user you log in as? But I am starting to see the point of your question...
Such policy requirments. Non extra privileges can be granted to my application user.
How about to PUBLIC? That would also work. Otherwise, I am afraid the answer to your question is "no". You may be able to workaround it like WITH input (txt) AS ( SELECT /*+ MATERIALIZE fn_textconverter('Eden') FROM DUAL) SELECT ... FROM VW_BOOK, INPUT WHERE contains(b.title, input.txt), 1) > 0 or similar. Sorry. Kudos for a really interesting question though. Thank you for that.
@MatthewMcPeak, I suspect PUBLIC user also cannot be granted that privilege. The approach with MATERIALIZE hint also didn't give performance improvement. Nevertheless, thank you for taking the time and effort in suggesting different options.
It has nothing to do with security, but if fn_textconverter('Eden') will always equal \E\d\e\n% every time it runs, try creating function fn_textconverter as deterministic, so Oracle knows it does not have to re-evaluate it for every row.
I.e.,
CREATE OR REPLACE FUNCTION fn_textconverter
( p_in VARCHAR2 ) RETURN VARCHAR2 DETERMINISTIC IS...
Thanks for your comment, you're probably right. My function is deterministic and it can speed up in some cases, but not in my case. I tried setting my function as deterministic, however nothing changed. I think, it has to do with security. As I already said, after setting the value of Oracle parameter optimizer_secure_view_merging to false, my request reponds instantly, since I see from the execution plan that in that case existent index from source table is used.
|
STACK_EXCHANGE
|
package version
import (
"fmt"
"strings"
)
type constraintExpression struct {
units [][]constraintUnit // only supports or'ing a group of and'ed groups
comparators [][]Comparator // only supports or'ing a group of and'ed groups
}
func newConstraintExpression(phrase string, genFn comparatorGenerator) (constraintExpression, error) {
rootExpression := constraintExpression{
units: make([][]constraintUnit, 0),
comparators: make([][]Comparator, 0),
}
if strings.Contains(phrase, "(") || strings.Contains(phrase, ")") {
return constraintExpression{}, fmt.Errorf("version constraint expression groups are unsupported (use of parentheses)")
}
orParts := strings.Split(phrase, string(OR))
for _, part := range orParts {
units, err := splitConstraintPhrase(part)
if err != nil {
return constraintExpression{}, err
}
rootExpression.units = append(rootExpression.units, units)
comparators := make([]Comparator, len(units))
for idx, unit := range units {
theComparator, err := genFn(unit)
if err != nil {
return constraintExpression{}, fmt.Errorf("failed to create comparator for '%s': %w", unit, err)
}
comparators[idx] = theComparator
}
rootExpression.comparators = append(rootExpression.comparators, comparators)
}
return rootExpression, nil
}
func (c *constraintExpression) satisfied(other *Version) (bool, error) {
oneSatisfied := false
for i, andOperand := range c.comparators {
allSatisfied := true
for j, andUnit := range andOperand {
result, err := andUnit.Compare(other)
if err != nil {
return false, fmt.Errorf("uncomparable %+v %+v: %w", andUnit, other, err)
}
constraintUnit := c.units[i][j]
if !constraintUnit.Satisfied(result) {
allSatisfied = false
}
}
oneSatisfied = oneSatisfied || allSatisfied
}
return oneSatisfied, nil
}
|
STACK_EDU
|
I've been reading the docs, including the faq-ipalias page and some
postings on the mailing list.
I am trying to set up a m0n0wall firewall at a site that is currently
using one of their FreeBSD production boxes as a combo server/firewall,
with 2 IPs on the WAN.
I want to have a physically separate firewall, and I'd rather have a
cdrom/flash firewall on something like a soekris machine than set up a
"bigger" machine and just do it all with a normal (FreeBSD) OS.
I also want it to be pretty easy to switch between the new and old
The new (m0n0wall) firewall has a dedicated IP for its LAN IP, with a
The old machine has a dedicated IP for its LAN IP (with a normal
netmask) and it currently uses an IP alias (on a /32) for the default
route to the internet.
My thought is to have the DSL modem connect to a small switch/hub, which
then connects to the WAN interface on each of "the old firewall/gateway"
and "the new m0n0wall firewall/gateway".
As the "old" machine has 2 interfaces, when I want to use it as the
firewall/gateway box I:
ifconfig WAN up
ifconfig LAN inet def.ault.rte/32 alias
and when I want to disable the firewall/gateway on the old box I:
ifconfig WAN down
ifconfig LAN inet def.ault.rte/32 -alias
So far so good.
I figure to make it easy on these folks to enable/disable using the
m0n0wall box as their firewall/gateway I can simply get 2 configs going,
one for "enabled" and the other for "disabled", and they "restore" the
appropriate configuration using the dedicated LAN IP on the m0n0wall box.
I am currently having 2 problems:
First problem: how to enable/disable the def.ault.rte address on the
m0n0wall LAN interface while keeping the "normal" LAN IP address on the box?
I have not found a way to get the m0n0wall box to easily answer on the
def.ault.rte/32 address on its LAN interface.
I would rather not have to install a 3rd NIC in the box, as that gives
me more points of failure.
I would rather not have to choose/switch between using either the
assigned LAN IP and the def.ault.rte IP for the LAN address.
What's a good way to handle this?
Second problem: They have 2 WAN IPs. For one of them, I just want to
send SSH traffic to LAN machine A. I currently am using this IP as the
assigned IP on the WAN interface. While the other IP currently handles
SSH, email, and web all on the (current) server, I'd like to be able to
split each "service" to a different machine. When I configured this the
other day I was able to SSH to the WAN IP and I was connected to the
correct box. I (thought I) set the NAT rules up correctly, but I'm not
able to connect to the other WAN IP (from outside) and I see nothing in
|
OPCFW_CODE
|
When we add numbers, increasing each next term by the same positive value, then we will get more and more values with each new term. For example, the sum of all natural numbers from 1 to 10 is 55, from 1 to 100 will be 5050, and due to an increase in the upper bound, the sum will become larger. It is logical to assume that the sum of numbers from 1 to infinity will give an infinitely large number, but if we do the calculations, we get a value of -1/12. It’s one thing if we got a certain finite number with a plus sign, although this causes dissonance, it’s another matter when we get a fraction with a minus sign, which looks completely absurd, because how can we get negative when adding positive numbers. In this article, we will take a little look at how mathematicians generally got such a meaning.
To do this, first it is worth analyzing what some of the other amounts look like. Let’s start with this: 1-1 + 1-1 + 1-1 … .. This is an infinite sum of ones and minus ones, which alternate. If we knew what sign the given numerical series ends with, then we would have no problem saying what its final value is, but this series is infinite. Let’s designate this set of numbers as A, for convenience. In order to calculate the amount, after the first element we put out the minus, as shown in the figure below. Since the series of numbers is infinite, the expression in brackets will also be the sum of A and after ordinary algebraic calculations we get that A = 1/2, more specifically, then 1-1 + 1-1 + 1-1 + 1 … = 1/2
Now let’s count the next row: 1-2 + 3-4 + 5-6 + 7-8 + … Let’s call this row number B and now add to it the already counted row A = ½, but add starting from the second element like this, as shown in the diagram below. Then we get that the same series B is subtracted from unity. After the usual algebraic actions, we get B = ¼
And finally, that same series is the sum of all natural numbers, let us denote it by X. X = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + … Subtract from it the above-mentioned series of numbers B = ¼. As a result, we get: 1-1 + 2 + 2 + 3-3 + 4 + 5-5 + 6 + 6 + 7-7 …, and for simplicity, we will rewrite it as in the picture below. As a result, it turns out that our sum of all natural numbers X will be subtracted ¼ will be equal to 4 + 8 + 12 + 16 … The sum of all numbers will be a multiple of four, from which we just put 4 out of the parenthesis, and in parentheses we get 1 + 2 + 3 + 4 …, that is, the same row of X, well, after the simplifications shown below, we get X = -1 / 12. Since X = 1 + 2 + 3 + 4 + 5 + 6 …, the sum of all natural numbers is -1/12.
In all this, it is noteworthy that this fact about the sum of numbers is used in many physics articles, especially in quantum mechanics, for example, many articles on string theory operate with it, and also thanks to it, the Casimir Effect was explained. Of course, it is possible and worth questioning the fact that the sum of all natural numbers is equal to a negative number, and perhaps this is a lack of methods for counting series, but the fact that the number -1/12 is used in many scientific articles, and it works, says a lot. …
Author: Vladislav Kigim. Edited by Fedor Karasenko.
Thumbs up to see more articles on space and science in your feed!
Subscribe to my channel here and also to my channels in
|
OPCFW_CODE
|
Have you actually tried the program? Did you "buy" the full version or did you try the Lite Version only?
The full paid version does exactly what you're wanting it to do. It is fully uploadable to your own server, allows you to do more than just S-Drive or Database same as the old one did, you paste the html and upload your own files same as the old one. I don't find the reCaptcha an issue either since you can resize it now in the CSS file. Truly I don't think you played with it very long or you didn't play with the right one.
Bye bye Jennifer.
IT was the post that started with Love It! if that helps
You are aware that it Web Form Builder (not sure what SDF is) has integration with Mailchimp already aren't you?
Form works great. Upload generated files to our host server with no problems. One issue I did run into was using the iframe to embed a form into a webpage. Some of our customers if they "Ctrl +" to enlarge the text, the bottom of the form (ie. The Submit button) would disappear. Just switched to the stand alone page to solve the problem.
Would like a solution to the iframe so I can have links for popups on the same page as the form.
Our customers really like the email confirmations as do I. Google forms come up short with this feature.
Looking forward to having the ability to page the form.
Keep up the good work!
We use iContact, so it's still no more useful to me though.
Since they're asking for suggestion, and since they're already supporting external services, I think a natural next step is to expand support for others: iContact, Aebber, etc.
Maybe I'll look into MailChimp output and then hand editing the HTML output to work with iContact. Kinda defeats the purpose of the tool, but maybe better than nothing in the near-term...
I really need more form capacity on S-Drive, however, but cannot justify the costs for 30 or so forms.
I don't need the other applications ( websites, shopping cart etc.)
Any chance you might address this issue in future upgrades?
Thanks - Gene
What I need and what would be super for sdrive is for the software to be online in my account so I can work right online from my account anywhere. Sometimes I need to use a different computer...
but no software online for me to use. Right now cannot download as software is not supported for the system to function.
Now isn't this a great idea ....Online Software....but is it possible?
I'm hoping that the next version of your tool will be more flexible in the placement of data fields, drag and drop instead of arraying by percentage of each line. The ability to put in graphics and text where I want it will make Form Tool better.
thanks again for 1.0, looking forward to next version.
Have something to add? We’d love to hear it!
You must have an account to participate. Please Sign In Here, then join the conversation.
|
OPCFW_CODE
|
Directsound ac97 audio driver 0 40
WinDDKThis site maintains listings of sound card drivers available on the web, ECS BIOS Upgrades . Harmonic distortion level at -60 dB corresponds to 0,01%, which is obviously not hda_v40a.zip [more], Windows NT 4.0 DirectSound AC97 Audio, ACPI driver support 3.3V/5V PCI bus interface)(for PCB V1.0 and V3.0).I am hearing sound in my PC. 2012-11-29 18:40:25 | By twinreg1. VIA AC'97 driver, 823x_340sp_p.zip [more], Windows 98 VT1708 driver, DirectSound: 0/5 (retail) AC'97 Sound System Software ver:A3.76I3DL2 / A3D 0. 40 alsndmgr. wav unknown alcxwdm. sys v5. 10. 5790 built by Jul 14, 2004 AC'97 codecs are not used in modern quality sound cards (instead, I2S-codecs chipset) Intel Extreme Graphics 2 for mobile (82852/82855 GM), DirectSound, Q2: I downloaded the latest drivers of Windows updates for my sound chip Q5: If one is expected, you should install a sound driver provided by the Will your AC'97 audio CODECs support non-Windows XX or Windows 3. Q6: midi-out.0 0001 0066 Microsoft GS Wavetable SW Synth DirectSound Device Jun 20, 2016 The nVidia nForce3 drivers do not currently install. oldman, Dell Dimension Current version, All versions. Sort: in first time installation of realtek ac97 driver 2400, Intel Pentium4 2.40Ghz, 512MB, ATA-100 40GB, Dell OG1548 (Intel 945GCD-I330 (V1.0) (VS-AD330) driver, 33i_sound_VIA.zip [more], Windows XP Integrated audio SiS 7012 and AC "97 Codec" Realtek ALC655 "on the Properties: Audio Adapter Intel 82801CA ICH3-S - AC'97 Audio Controlleris required to enable advanced features. 0 stars. Write review. Reviews: Please look here: By drivers it supports EAX/Direct Sound 3D/I3DL2/A3D in games. FR Oct 2, 2008 Realtek AC'97 Drivers Applications package include driver and setup DirectX 8 motherboard series drivers (98 / ME / NT / 2000 / XP), EAX / Direct Sound 3D / 0. and here: comment:40 Changed 7 years ago by verdy_p. ICH/ICH0 was When I ran the Microsoft DirectX Diagnostic tool (Dxdiag.exe) sound test on a AC '97 ?? ?? Works (sound and ethernet not auto-recognised at startup).passband ripple (from 40 Hz to 15 kHz), dB: +0.12, -0.26, Very good.files=40. Programs Currently Running Image Name PID Session Name Session# 瑞昱和AMI共同合作发表RealManage2.0,业界首款Wi-Fi DASH 远程管理解决方案.Oct 24, 2007 Integrated Hardware Sound Blaster/Direct Sound AC'97 audio. PCI bus slots (nadded. no sound in Windows Seven (using Directsound+ICH AC97 driver) .
|
OPCFW_CODE
|
Relax wire_order restrictions in circuit visualization
Requested by @ajavadia
Drawing circuits with wire_order has some restrictions, coming from the original design that, I think, can be relaxed a bit. For example if wire_order is not "complete", the rest of the wires can have canonical order. And if wire_order does not mess with classical bits, then cregbundle can be preserver. So this PR relax those restrictions and allows you to write wire_order=[2, 3] and cregbundle=True in a 4 qubit circuit, for example:
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
qr = QuantumRegister(4, "q")
cr = ClassicalRegister(4, "c")
cr2 = ClassicalRegister(2, "ca")
circuit = QuantumCircuit(qr, cr, cr2)
circuit.h(0)
circuit.h(3)
circuit.x(1)
circuit.x(3).c_if(cr, 10)
for method in ['text', 'mpl', 'latex']:
display(circuit.draw(method, wire_order=[2, 3], cregbundle=True))
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
11 of 13 (84.62%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.0002%) to 85.411%
Changes Missing Coverage
Covered Lines
Changed/Added Lines
%
qiskit/visualization/circuit/circuit_visualization.py
11
13
84.62%
Totals
Change from base Build<PHONE_NUMBER>:
0.0002%
Covered Lines:
67431
Relevant Lines:
78949
💛 - Coveralls
I personally would have expected that giving a partial wire_map would only draw those
To the contrary (and probably because I'm too famiiar with the drawer), only drawing some wires makes little sense to me. (how do you express a CX(0,1) without drawing 0 or 1).
I saw some numpy Ellipsis usage in slicing, but it is the first time I see it as an element. Is that a thing? I have no problem to add it.
We spoke about this in the dev meeting just now. The notes are:
Ali's original request was mostly just about allowing only the qubits to be specified with the clbits omitted, so he'd be on board with allowing wire_map to have two valid lengths: one with all the qubits and clbits, and one with only all the qubits. If you prefer that, that there's no need for the ....
Matthew also thought that specifying only a partial list of qubits would have a filtering behaviour.
If we do want to allow only partial specification of qubits (or partial specification of clbits), there was general support for using the ... literal as the marker.
Given this, I'd vote to go with option 1: the wire_order list has (at most) two valid lengths: "all bits" or "all qubits". It solves Ali's initial request and it doesn't require extra syntax. (Personally I was surprised to learn that wire_order applies to the clbits at all, and we didn't have separate order fields for qubits and clbits, but that ship has sailed.)
Oh, and about "how do you draw cx(0, 1) if you filter out qubit 1?": that point is mostly why I said "[partial specification] looks like an error", yeah.
Given this, I'd vote to go with option 1: the wire_order list has (at most) two valid lengths: "all bits" or "all qubits".
done in that way. Have a look.
Just a note for all concerned, when we use circuit_drawer for tests that are supposed to have "text" output, can we please put output="text" in the call for those of us who default to "mpl" or otherwise. Thanks.
|
GITHUB_ARCHIVE
|
Robots are used for the most boring and repetitive jobs in
manufacturing. The military and police use robots for dangerous jobs, such as manipulating explosive devices. In this project of simulation of industrial robotic arm to pick and place like in cement factory at rotary packing machine,
The conventional packing system is working manually by hand which need more labors to accomplish the packing process and also require a great effort from these labors which will lead to increase human error ratio. Due to that has
been simulate industrial robotic arm using microcontroller ATmega16 in
Proteus software and BASCOM-AVR program to get a good results to be implemented to industrial application in Cement Company of bag applicator.
First we used Proteus to draw control circuit of microcontroller, stepper motor and start stop push bottom with BASCOM-AVR program to programming code of microcontroller. Also we do more experiment in circuit and code with corrected more time to achieve good result. This design intends to investigate the design and simulation to control of a 3 DOF industrial robotic arm by using stepper motors and microcontroller, the robotic arm will be controlled via the designed controller and it will be able to grab, pick up and move objects to desire point; we have arm to put empty bag in rotary packing machine and drop the fully bag in another way which like belt conveyer, The simulation results have given the positions of the motors to take and put the cement bag.
El-rahman, M (2021). Simulation Of Microcontroller Based Industrial Robotic Arm Controller. Afribary. Retrieved from https://afribary.com/works/simulation-of-microcontroller-based-industrial-robotic-arm-controller
El-rahman, Mohamed "Simulation Of Microcontroller Based Industrial Robotic Arm Controller" Afribary. Afribary, 21 May. 2021, https://afribary.com/works/simulation-of-microcontroller-based-industrial-robotic-arm-controller. Accessed 04 Mar. 2024.
El-rahman, Mohamed . "Simulation Of Microcontroller Based Industrial Robotic Arm Controller". Afribary, Afribary, 21 May. 2021. Web. 04 Mar. 2024. < https://afribary.com/works/simulation-of-microcontroller-based-industrial-robotic-arm-controller >.
El-rahman, Mohamed . "Simulation Of Microcontroller Based Industrial Robotic Arm Controller" Afribary (2021). Accessed March 04, 2024. https://afribary.com/works/simulation-of-microcontroller-based-industrial-robotic-arm-controller
|
OPCFW_CODE
|
<?php
class P4Logic extends BaseController {
/*
* search_do()
*
* Parses, creates, and returns search results. Routine looks for all attributes that were in the POST from the form
* (you must go though all individually since all attributes a potential options). As new attributes as determined to
* be part of the request, they are added to a WHERE statement that is being built.
*
* This statement is then executed by query builder using the whereRaw command. Results are returned as a collection.
*
*/
public function search_do()
{
$state = $_POST["state"];
$whereStmt = "`addr_state` = '$state'";
$city = $_POST["city"];
$whereStmt = $whereStmt." AND `addr_city` = '$city'";
if (isset($_POST['style'])) {
$style = $_POST["style"];
$whereStmt = $whereStmt." AND `style` = '$style'";
}
if (isset($_POST['num_bed'])) {
$num_bed = $_POST["num_bed"];
if ($num_bed == "4")
$whereStmt = $whereStmt." AND `num_bed` > 3";
else
$whereStmt = $whereStmt." AND `num_bed` = $num_bed";
}
if (isset($_POST['num_bath'])) {
$num_bath = $_POST["num_bath"];
if ($num_bath == "3")
$whereStmt = $whereStmt." AND `num_bath` > 2";
else
$whereStmt = $whereStmt." AND `num_bath` = $num_bath";
}
if (isset($_POST['num_halfbath'])) {
$num_halfbath = $_POST["num_halfbath"];
if ($num_halfbath == "2")
$whereStmt = $whereStmt." AND `num_halfbath` > 1";
else
$whereStmt = $whereStmt." AND `num_halfbath` = $num_halfbath";
}
if (isset($_POST['park_spaces'])) {
$park_spaces = $_POST["park_spaces"];
if ($park_spaces == "2")
$whereStmt = $whereStmt." AND `park_spaces` > 1";
else
$whereStmt = $whereStmt." AND `park_spaces` = $park_spaces";
}
if (isset($_POST['sqrfoot'])) {
$sqrfoot = $_POST["sqrfoot"];
if ($sqrfoot == "1500")
$whereStmt = $whereStmt." AND `sqrfoot` < 1500";
elseif ($sqrfoot == "2500")
$whereStmt = $whereStmt." AND `sqrfoot` > 1500 AND `sqrfoot` < 2500";
else
$whereStmt = $whereStmt." AND `sqrfoot` > 2500";
}
if (isset($_POST['lot_sqrfoot'])) {
$lot_sqrfoot = $_POST["lot_sqrfoot"];
if ($lot_sqrfoot == "10K")
$whereStmt = $whereStmt." AND `lot_sqrfoot` < 10000";
elseif ($lot_sqrfoot == "40K")
$whereStmt = $whereStmt." AND `lot_sqrfoot` > 10000 AND `lot_sqrfoot` < 40000";
else
$whereStmt = $whereStmt." AND `lot_sqrfoot` > 40000";
}
if (isset($_POST['garage'])) {
$whereStmt = $whereStmt." AND `garage` = true";
}
if (isset($_POST['pool'])) {
$whereStmt = $whereStmt." AND `pool` = true";
}
$result = DB::table('homes')->whereRaw($whereStmt)->get();
return (array('results'=> $result, 'flash_message'=>'Search Results'));
}
/*
* list_do()
*
* Adds a new listing to the database. First the Home object must be saved. If that save is successful, then the
* Listing object is created (there is a foreign key on the 'listings' table for the home_id.
*
*/
public function list_do() {
$home = new Home;
$home->addr_street = ($_POST['addr_street']);
$home->addr_city = ($_POST['addr_city']);
$home->addr_state = ($_POST['addr_state']);
$home->style = ($_POST['style']);
$home->desc = ($_POST['desc']);
$home->num_bed = ($_POST['num_bed']);
$home->num_bath = ($_POST['num_bath']);
$home->num_halfbath = ($_POST['num_halfbath']);
$home->sqrfoot = ($_POST['sqrfoot']);
$home->lot_sqrfoot = ($_POST['lot_sqrfoot']);
$home->park_spaces = ($_POST['park_spaces']);
$home->pic1 = ($_POST['pic']);
if ($_POST['garage'] = 1) {$home->garage = true;} else {$home->garage = false;};
if ($_POST['pool'] = 1) {$home->pool = true;} else {$home->pool = false;};
if (!$home->save()) {
return (array('type'=>'error', 'text'=>'Error saving Home'));
}
$listing = new Listing;
$listing->user_id = Auth::user()->id;
$listing->home_id = $home->id;
$listing->status = ($_POST['status']);
$listing->price = ($_POST['price']);
if (!$listing->save()) {
Home::destroy($home->id);
return (array('type'=>'error', 'text'=>'Error saving Listing'));
}
// Send back JSON array
return (array('type'=>'success', 'text'=>'Saved Listing', 'listID' => $listing->id));
}
/*
* search_save()
*
* Adds Search object if the user elects to save their search parameters. The search query is saved as the JSON string
* that is used by the search_do() routine.
*
*/
public function search_save()
{
// New Search
$search = new Search;
$search->user_id = Auth::user()->id;
$search->name = $_POST["searchName"];
$search->searchValJSON = json_encode($_POST["searchString"]);
// Save new Search
$search->save();
}
/*
* search_delete()
*
* Search object is deleted if requested. Prior to deleting object, the routine will check to make sure the
* authenticated user requesting the search be deleted owns the Search object.
*
*/
public function search_delete($searchID)
{
$search = Search::find($searchID);
$userID = Auth::user()->id;
if($userID == $search->user_id) {
Search::destroy($searchID);
return Redirect::to('/')->with('flash_type', 'success')->with('flash_message', 'Saved Search deleted successfully');
} else {
return Redirect::to('/')->with('flash_type', 'danger')->with('flash_message', 'Saved Search not Deleted');
}
}
}
|
STACK_EDU
|
Cannot insert sdo_geometry with more than 500 vertices
I have the following table
CREATE TABLE MYTABLE (MYID VARCHAR2(5), MYGEOM MDSYS.SDO_GEOMETRY );
AND the sql statement below:
INSERT INTO MYTABLE (MYID,MYGEOM) VALUES
( 255, SDO_GEOMETRY(2003, 2554, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
SDO_ORDINATE_ARRAY(-34.921816571,-8.00119170599993,
...,-34.921816571,-8.00119170599993)));
Even after read several articles about possible solutions, I couldn't find out how to insert this sdo_geometry object.
The Oracle complains with this message:
ORA-00939 - "too many arguments for funcion"
I know that it's not possible to insert more then 999 values at once.
I tried stored procedure solutions, but I'm not Oracle expert, and maybe I missed something.
Could someone give me an example of code in c# or plsql ( or the both ) with or without stored procedure, to insert that row?
I'm using Oracle 11g, OracleDotNetProvider v 12.1.400 on VS2015 AND my source of spatial data comes from an external json ( so, no database-to-database ) and I can only use solutions using this provider, without datafiles or direct database handling.
I'm using SQLDeveloper to test the queries.
Please, don't point me articles if you are not sure that works with this row/value
The query has more than 500 vertices, and I couldn't paste here, because StackOverflow blocked.
I finally found an effective solution. Here: Constructing large sdo_geometry objects in Sql Developer and SqlPlus. Pls-00306 Error
can you please provide more details? I tried to use stored procedure and then invoke in plsql way, did not have luck. still fails at 500+ coordinates
The limitation you see is old. It is based on the idea that no-one would ever write a function that would have more than 1000 parameters (actually 999 input parameters and 1 return value).
However with the advent of multi-valued attributes (VARRAYs) and objects, this is no longer true. In particular for spatial types, the SDO_ORDINATE attribute is really an object type (implemented as a VARRAY) and the reference to SDO_ORDINATE is the constructor of that object type. Its input can be an array (if used in some programming language) or a list of numbers, each one being considered a parameter to a function - hence the limit to 999 numbers).
That happens only if you hard-code the numbers in your SQL statement. But that is a bad practice generally. The better practice is to use bind variables, and object types are no exception. The proper way is to construct an array with the coordinates you want to insert and pass those to the insert statement. Or construct the entire SDO_GEOMETRY object as a bind variable.
And of course, the very idea of constructing a complex geometry entirely manually by hardcoding the coordinates is absurd. That shape will either be loaded from a file (and a loading tool will take care of that), or capture by someone drawing a shape over a map - and then your GIS/capture tool will pass the coordinates to your application for insertion into your database.
In other words, that limitation to 999 attributes / numbers is rarely seen in real life. When it does, it reflects misunderstandings on how those things work.
|
STACK_EXCHANGE
|
/**
* @class EZ3.ShaderMaterial
* @extends EZ3.Material
* @constructor
* @param {String} id
* @param {String} vertex
* @param {String} fragment
*/
EZ3.ShaderMaterial = function(id, vertex, fragment) {
EZ3.Material.call(this, 'SHADER.' + id);
/**
* @property {String} _vertex
* @private
*/
this._vertex = vertex;
/**
* @property {String} _fragment
* @private
*/
this._fragment = fragment;
/**
* @property {Object} _uniformIntegers
* @private
*/
this._uniformIntegers = {};
/**
* @property {Object} _uniformFloats
* @private
*/
this._uniformFloats = {};
/**
* @property {Object} _uniformMatrices
* @private
*/
this._uniformMatrices = {};
/**
* @property {Object} _uniformTextures
* @private
*/
this._uniformTextures = {};
};
EZ3.ShaderMaterial.prototype = Object.create(EZ3.Material.prototype);
EZ3.ShaderMaterial.prototype.constructor = EZ3.Material;
/**
* @method EZ3.ShaderMaterial#updateProgram
* @param {WebGLContext} gl
* @param {EZ3.RendererState} state
*/
EZ3.ShaderMaterial.prototype.updateProgram = function(gl, state) {
if (!this.program)
this.program = state.createProgram(this._id, this._vertex, this._fragment);
};
/**
* @method EZ3.ShaderMaterial#updateUniforms
* @param {WebGLContext} gl
* @param {EZ3.RendererState} state
* @param {EZ3.RendererCapabilities} capabilities
*/
EZ3.ShaderMaterial.prototype.updateUniforms = function(gl, state, capabilities) {
var name;
var texture;
for (name in this._uniformIntegers)
this.program.loadUniformInteger(gl, name, this._uniformIntegers[name]);
for (name in this._uniformFloats)
this.program.loadUniformFloat(gl, name, this._uniformFloats[name]);
for (name in this._uniformMatrices)
this.program.loadUniformMatrix(gl, name, this._uniformMatrices[name]);
for (name in this._uniformTextures) {
texture = this._uniformTextures[name];
texture.bind(gl, state, capabilities);
texture.update(gl);
this.program.loadUniformInteger(gl, name, state.usedTextureSlots++);
}
};
/**
* @method EZ3.ShaderMaterial#setUniformInteger
* @param {String} name
* @param {Number|EZ3.Vector2|EZ3.Vector3|EZ3.Vector4} value
*/
EZ3.ShaderMaterial.prototype.setUniformInteger = function(name, value) {
this._uniformIntegers[name] = value;
};
/**
* @method EZ3.ShaderMaterial#setUniformFloat
* @param {String} name
* @param {Number|EZ3.Vector2|EZ3.Vector3|EZ3.Vector4} value
*/
EZ3.ShaderMaterial.prototype.setUniformFloat = function(name, value) {
this._uniformFloats[name] = value;
};
/**
* @method EZ3.ShaderMaterial#setUniformMatrix
* @param {String} name
* @param {EZ3.Matrix3|EZ3.Matrix4} value
*/
EZ3.ShaderMaterial.prototype.setUniformMatrix = function(name, value) {
this._uniformMatrices[name] = value;
};
/**
* @method EZ3.ShaderMaterial#setUniformTexture
* @param {String} name
* @param {EZ3.Texture} value
*/
EZ3.ShaderMaterial.prototype.setUniformTexture = function(name, value) {
this._uniformTextures[name] = value;
};
/**
* @method EZ3.ShaderMaterial#getUniform
* @param {String} name
* @return {Number|EZ3.Vector2|EZ3.Vector3|EZ3.Vector4|EZ3.Matrix3|EZ3.Matrix4|EZ3.Texture}
*/
EZ3.ShaderMaterial.prototype.getUniform = function(name) {
if (this._uniformIntegers[name])
return this._uniformIntegers[name];
else if(this._uniformFloats[name])
return this._uniformFloat[name];
else if(this._uniformMatrices[name])
return this._uniformMatrices[name];
else
return this._uniformTextures[name];
};
|
STACK_EDU
|
Spyware is a POTENT HERBICIDE and should be used with strict moderation under adult supervision ONLY!
Spyware is the crap that comes bundled with software (or installs when you visit certain websites without knowing what the fuck) that website operators use to collect data about you that they then somehow turn in to money. Often spyware is buggy, uses a lot of RAM, redirects your home page to pr0n, and gives AIDS to your children. Basically, spyware makes your thousand dollar games machine into a useless shitbox. Often the best solution is to turn over your computer to the local 13 year old boy along with $70 (pay him less and he'll get more from spyware companies to install more of the shit) or whatever the asking price is for him to install and run one of the free tools listed below.
There's another type of annoying crap called "Ad-ware". Adware is known for pissing off people with completely useless pop ups, announcing crap that only fucktards and retards are interested in, like PartyPoker or Adultfriendfinder. It is rumored that the person who invented adware and spyware is an Otaku Cuckold who lives with his mom and jacks his dick off to furporn.
- Microsoft Anti-spyware
- Lavasoft's Ad-aware
- Spybot Search & Destroy
- Spy Emergency Spyware Remover
- Spyware Blaster
- Gator/GAIN: comes with .EXE files
- WhenUSave: comes with Kazaa, etc
- eZula: installs itself from various websites
- Java.exe: Actually this is more of a "portal" for viruses and spyware
- Microsoft Internet Explorer: Same as above
- Anti-spyware software.
- The entire System32 folder. Delete it
- The entire HKEY_CLASSES_ROOT folder in regedit. Delete that too
- Windows ME
- Windows 10
- Websites that end in ".info"
- Your mom
- Your face
- Program Files
- AIDS: comes with African software such as Ubuntu
- Phorm: When you sign up for a 12 month broadband contract with British Telecom .
Internet Explorer: The Spyware and Adware Sites Paradise
Internet Explorer, the Microsoft internet navigation software, invented by Bill Gates, is the single best program to use on your computer, if your goal is to fuck it up entirely. It is equipped with thousands of ads, and it will let trillions more enter your computer. Actually, IE makes it easier to screw up your computer; the more up-to-date your IE is, the more screwed up your computer will be. Bill Gates made a dumbass of himself introducing Windows to the public, because it has IE installed, which is why your computer freezes one trillion times per second, and why the so called Blue screen of death appears. Oh yeah, one more thing, when you type passwords and personal information on IE, it will send it to many many other people and big corporations, so in fact, IE is spyware with a very expensive disguise.
Removing spyware (for windows users)
Click on Start, Run, and type cmd then press enter. If that doesn't create a big black window with grey text in it, type command then press enter. When the big black window comes up, click on it and type:
format c: /u
Believe me, this is this only way to completely remove all spyware.
For users that are not familiar with the command line interface, the free spyware removal tool Windows Optimizer is also available.
If for any weird reason that solution doesn't work, open your computer case and throw away that thing that says "HD" or "Hard Drive". Or donate it for some good trolling.
Removing Spyware (for Mac users)
Its too late, Steve Jobs already owns your soul you hipster faggot.
Spyware is part of a series on
Visit the Softwarez Portal for complete coverage.
|
OPCFW_CODE
|
Lets quickly go over what we all have learned over these past 2+ weeks about the Nabi 2 and its Vold.fstab file and mount points. We realized the Nabi's vold.fstab file does NOT declare the "Internal Storage" mount point. This causes us a HUGE problem when it comes to trying to swap the "Internal Storage" and the "Sdcard2" mount points obviously because we cant just go into the vold.fstab file and edit a few lines of code to change the mount points like we would in just about every other tablet and phone there is The Nabi2 also does not show the "Internal Storage" mount when we run mount commands in a adb shell or Androids Terminal Emulator... So with that said I am almost 100% sure we cant swap the mount points using the vold.fstab file or editing vaules in .rc files and reflashing them. So that leaves us with using the script method at boot! So with hours of research I came across a Directory Binding Tool that another fellow xda developer made for almost the same reasons so I just took it and made some changes to it so we can use it with the Nabi 2. All credit for development of the tool goes to member "slig"!
How to Bind "Sdcard" to "Sdcard2" with the Directory Binding Tool
What we are going to do since we cant swap the mount points via vold.fstab/.rc files we are going to bind key ICS (4.0.4) system files to the "sdcard2"! This is the same thing as swaping the "Internal Storage" and "Sdcard2" because the Nabi's "Internal Storage" is really located on the "Sdcard". So we are going to bind the system files which are used when you install a new App or when a new App wants to download additional files to the "Sdcard2"! So instead of occupying the little space there is on the "Sdcard" after completing this guide it will install on the "sdcard2"
1. Download the DirectoryBind Tool to your Nabi from here [Download Now]
2. Install the apk to your tablet
3. After installation is complete open the "Directory Bind" app
4. At the bottom left corner make sure it says "Root Access Ok" in green
5. Then at the bottom right corner click the button so it reads "On"
6. Click on the "Settings Button" then click "Preferences"
7. In Preferences make sure the following is checked: Bind on boot, Handle USB connection, Alert on unbind fail, and Alternate dbase mgmt! Nothing else should be checked off!
8. Now click the back button once to go back to the app's main screen
9. Click the "Settings Button" then click "Add new entry"
10. You should now see 2 text fields one named "Enter source (data) path" and another named "Enter mount (target) path"... In the "Enter source (data) path" field enter the following "/mnt/sdcard2/" then go down to "Enter mount (target) path" field and enter the following "/sdcard/Android/" Then make sure "Transfer files from target to data" is NOT checked and click the ADD button.
11. You should now be back at the app's main screen but now you should see your first Directory Bind script there if you did everything correctly.
12. Now click the "Settings Button" again and click "Add new entry"
13. You now should see those 2 text fields again... This time in "Enter source (data) path" enter the following "/mnt/sdcard2/" then go down to "Enter mount (target) path" and enter the following "/sdcard/data/" then click the ADD button.
14. Now your back at the apps main screen showing the 2 directory bind scripts you created... Put a check in each scripts box then click the "Settings Button" and click "Bind checked". If you followed this guide correctly the icons next to each script will turn green which means they mounted with no issues and your all done!
You now have ur /Sdcard/Android/ folder and /Sdcard/data/ folder binded to ur Sdcard2 which means now each folder will have 8GB/16GB/32GB's of space depending on how big the sdcard is you put into ur Nabi! You can check to see if it worked by using any Root File Explorer and going to sdcard then to the Android or data folder and looking at the space used and space free! Now when you install games and apps they will be really installing onto the sdcard2 which is great when installing games like Asphalt 7 which is 1.4GB+ in size! And of course when you reboot the scripts will auto run and bind at boot since we checked "Bind on boot" in preferences! ENJOY and post any questions hopefully there will not be many since this guide is pretty much a click by click step guide
|
OPCFW_CODE
|
- Worked in an Agile environment.
- Have experience working with different version control tools like SVN, GIT.
- Strong working knowledge of HTML5, CSS3.
- Used Junit for writing Application and action level unit test cases for the Unit testing.
- Experience with Firebug for Mozilla, Developer Toolbar for Chrome and IE Developer Toolbar for Internet Explorer.
- Strong Working experience in Design, Development and implementation of several J2EE frameworks spring Core, spring IOC, spring MVC, spring ORM, spring JDBC, Spring Data, Hibernate and Struts 1.1/1.2,2.z
- Developed and deployed multi-tier Enterprise Applications using Tomcat, Web logic and web-sphere application server, Micro services.
- Developed J2EE applications on IDE’s like Eclipse, WID(web sphere Integration development)and NetBeans
- Expert experience with styling and responsive design techniques, mobile first website development using technologies such as HTML5, LESS and SASS
- Experience with Python, Hadoop, Mongo DB.
- Good Experience working with High Traffic Websites.
Web Designing Tools: Adobe Dreamweaver, Adobe Photoshop and Adobe Illustrator.
Web/Application Servers: HTTP Web Server, Web logic Apache Tomcat, Pivotal cloud foundry
Database: Oracle 11g, SQL Server 2008 and 2012, MYSQL, MS Access.
Debugging Tools: Google Chrome Web Debugger, Fire bug, Mozilla Firefox.
IDE: Eclipse, Sublime text, Notepad++.
Operating Systems: Linux, Unix, Windows XP (Prof), Win 7, 8, Mac OS X.
Java Technologies: Java, J2EE, JDBC, Servlets, JSP, JSTL, JavaBeans, JMS, EJB, JNDI, Custom Tag Libraries, Applets, microservices
Confidential, Silver Spring, MD
Java Full stack Developer
- Involved in the lifecycle of the software design process including, requirement Definition, prototyping, design, interface implementations, unit testing and maintenance.
- Used Spring boot, Micro Services, REST API for building the application.
- Designed and developed business components using Spring Boot, Spring Dependency Injection (Core), Spring AOP and Spring Annotations.
- Implemented the logging functionality using Logging Tools Log4j, slf4j.
- Implemented on Java 8 including features like lambda, streams, multi-threading etc.
- Used Spring Boot which is radically faster in building cloud Micro services and develop Spring based application with very less configuration.
- Implemented Java and J2EE Design patterns like Business Delegate and Data Transfer Object (DTO), Data Access Object and Service Locator.
- Designed the application using Micro-services Architecture based on Spring Boot.
- Experience in coding business components using various API's of Java like Multithreading, Exception handling, Collections, Generics, JDBC, Lambda and Streams.
- Used Apache Kafka as the messaging infrastructure for asynchronous processing.
- Designed and developed complex SQL queries, stored procedures using MySQL.
- Consuming both Restful and SOAP web services depending on the design need of the project.
- Developed Restful API's, which takes in an HTTP request and produces the HTTP response in JSON Format using micro services.
- Implemented data ingestion and handling clusters in real time processing using Kafka.
- Extensively used JMS for Asynchronous Messaging to produce/consume messages.
- Extensively used GIT for version controlling and regularly pushed the code to Bit bucket and GitLab.
- Worked on creating the Docker containers and Docker consoles for managing the application life cycle.
- Experience in Continuous Integration and automated build/deploy using GOCD.
- Used Unix commands to go through the server logs and identify the issues.
- Used JUnit to write unit tests and integration test and used Mockito to mock/stub classes.
- Involves in Sprint planning for the estimation of efforts for user stories and bugs.
- Followed agile methodology and participated in stand-up meetings to update the status of daily tasks and weekly team meetings.
Java Full stack Developer
- Designed CSS templates for use in all pages on the website working with CSS Background, positioning, text, border, margin, padding, and table.
- Applied optimization techniques to reduce page size and load times to enhance user experience using sprites.
- Developed user interface by using the React JS, Flux for SPA development.
- Used React-Router to turn application into Single Page Application
- Worked in using React JS components, Forms, Events, Keys, Router, Animations and Flux concept.
- Used Web services (SOAP and Restful) for transmission of large blocks of XML/JSON.
- Worked on responsive design and developed a single ISOMORPHIC responsive website that could be served to desktop, Tablets and mobile users using React.js.
- Maintained states in the stores and dispatched the actions using redux.
- Implemented the Drag and Drop functionality using React-Drag gable
- Used Excel Builder 3rd Party open source library and tweak it to make sure it will work with IE11.
- Used flickity.js for creating carousel-images.
- Component for UX-Library consisted of Button, Checkbox, Input, Icons, Toggle Button, Dropdown, Multi-Level Dropdown and many more.
- In Phase Two, worked closely with the Back-End team to display data using the Custom Components, library Components, and Redux.
- Used Middleware, Redux-Promise in application to retrieve data from Back-End and to also perform RESTFUL services.
- Worked with backend engineers to optimize existing API calls to create efficiencies by deprecating unneeded API calls.
- Used React flux to polish the data and for single directional flow.
- Extensively used Git for version controlling and regularly pushed the code to GitHub.
- Used JIRA as the bug tracking system to track and maintain the history of bugs/issues on everyday basis.
- Interacted with Testing Team, Scrum Masters and Business Analysts for fixing of Issues.
- Performed the System Testing, Regression Testing for Complete UI after fixing the Issues which are reported by Testing Team.
Java Full stack Developer
- Involved with all stages of Software Development Life Cycle.
- Closely worked with business system analyst to understand the requirements to ensure that right set of UI modules been built.
- Responsible for setting up AngularJS framework for UI development. Developed html views with HTML5, CSS3, JSON and AngularJS
- Involved in developing in hibernate using the spring DAO layer to do the operations using oracle data base.
- Used Pivotal cloud Foundry to deploy and see logs to track the issues in different environments
- Spring boot is used to create new module and deploy the application in tomcat.
- Used Test Driven Development (TDD) for develop the new applications such as member search in delta dental.
- Developed user interface by using the Flux for SPA development.
- Defined and developed the presentation layer of the application using HTML 5, CSS3 and AJAX.
- Proficient understanding of server side CSS pre-processors including SASS and LESS
|
OPCFW_CODE
|
As a machine learning practitioner, there was one graphic that stayed with me more than any other in 2016, and not because it delighted me. The Gartner Hype Cycle for Emerging Technologies 2016 was trending on social media. Every technology wonk, people whom I greatly respect, was posting, commenting and retweeting that graphic. I couldn’t help but notice that my field of work, machine learning, was perched, ingloriously, at the very pinnacle of the cycle.
At the time that it seemed at least one out of every ten entries on my Twitter feed made reference to this ignominious infographic, I was working for several companies, implementing these self-same technologies in their enterprises and growing almost giddy at the ease with which we were able to do things faster and better. I was left with a feeling of great unease that, like all episodes of cognitive dissonance, spurs one on to resolve the contradiction. It seemed my experience was at odds with that of the people in the know.
In resolving this seeming paradox, I came to the conclusion that two very different technological perspectives are at play here. They conspire to create what amounts to a disjointed view, of a key technology, that threatens to leave enterprises literally behind the curve.
There are several forces aiding and abetting this divergence. Firstly, big data, in all it dimensions (amount of data), speed of data, and range of data types and sources), has created fodder for training machine learning algorithms like never before. Secondly, computational power and cloud computing, especially using graphics processing units (GPUs), have largely beaten back the problem of the “combinatorial explosion”, which made certain algorithm unworkable. Thirdly, data science has become a hot career choice, widely accessible through MOOCs, which has somewhat relieved the pressure on companies to find good hires in the field. Fourthly, companies like Google, Facebook, and the Chinese firm Baidu have established very strong research units, e.g. Google DeepMind, Facebook AI Research and Andrew Ng’s team at Baidu.
The wresting away of research leadership from educational institutions by large corporates has been an interesting development, and somewhat unforeseen. Large corporations have realized that to get the very best talent, they needed to not just provide top salaries, but opportunities to engage in groundbreaking research. The extent to which the center of mass has shifted is evidenced by Yann LeCun, head of Facebook AI Research, delivering the keynote address at NIPS (the premier machine learning and computational neuroscience conference) in December of 2016. NIPS 2016 had more than 5,000 attendees, and unheard of number in the industry.
Finally, there has been a tidal shift towards open source software within the machine learning movement. Google open sourced it’s Tensorflow system in November 2015. Since then it has seen phenomenal growth and a torrent of contributions from the community. Other frameworks, like Theano, Caffe, Torch and MXnet, already enjoyed support within the open source community. Apart from that, the most popular languages to implement these frameworks (e.g. Python) are open source.
The Nadir of the AI Movement
Engaged in all these frenzied activities, one can be forgiven for forgetting a second perspective with a much more jaundiced eye. This perspective was born out of the so-called AI winter. By most accounts, the nadir of the AI movement came in 1973 with the publication of the Lighthill report. The report was deeply skeptical of AI to deliver on its grandiose promises, and led to funding being cut for most of the artificial intelligence projects in the United Kingdom. It had a knock-on effect, with similar results around the globe. Academics became fearful of calling themselves AI researchers, should they be labelled as fanciful or eccentric. AI continued to live on under the guise of “informatics”, “machine learning”, or even “cognitive systems”.
Even as artificial intelligence research took on a decidedly clandestine character, a very different development occurred. Sometimes referred to as the Lotus revolution, the launch of Lotus 1–2–3 in the early eighties, changed the way industry conducted itself. Financial modeling went mainstream. Enterprises started looking towards proprietary productivity tools for an advantage. In recent years, the promises of these tools have themselves proven to be illusory, as we’ve seen productivity growth stagnate.
Artificial Intelligence in 2017
What, then, is a realistic view of artificial intelligence in 2017. The truth is that we have not attained what practitioners call artificial general intelligence. There are no truly intelligent bots out there. Beating opponents at chess or Go, is not what we have in mind when we talk about AI.
Having said that, machine learning has made tremendous progress. We can now use our algorithms to optimize call centers, drive cars, fly drones, manage our security, recommend what to read, watch, buy and listen to. Machine learning algorithms outperform physicians at diagnosis, predict epidemics, and lower your insurance premiums, without ever achieving the Hollywood version of artificial intelligence, because they know more (they have vastly more data) and they compute far better then the human brain ever will.
AI did not disappear during the AI Winter, it just went silent. It’s development continued, and is now accelerating beyond our wildest expectations because of industry and open source working together. The AI Winter is over.
The danger to business leaders is that their reliance on proprietary and legacy systems within their enterprises, and the profound shift in thinking required towards machine learning, will leave them paralyzed to take advantage of the bonanza machine learning is about to deliver. As Andrew Ng is fond of saying, “AI is the next electricity”, and that might not be hype.
|
OPCFW_CODE
|
Problems with March 2017 Security Rollup
The moderator from the server forum suggested I post here (please see https://social.technet.microsoft.com/Forums/windowsserver/en-US/a894761b-963e-4e4a-a309-d28999209448/march-2017-security-updates-breaks-ntlm-authentication-of-samba-shares-over-netbios?forum=winserversecurity).
We had a production down weekend after installing Microsoft's security March 2017 rollup. This question is to help us understand what was included in the March rollup that broke production in order that we can properly document the workaround.
- Windows 2008 R2 domain controllers.
- March 2017 security rollup applied.
- SAMBA shares hosted on AIX using NTLM authentication stopped working, giving access denied (client message)
Error on AIX host is: "FAILED with error NTSTATUSNOLOGONSERVERS"
Error on AIX host is: "SPNEGO login failed: NTSTATUSIO_TIMEOUT"
Observed UDP 137 packets sent from AIX to DC, but no response from DC (packets ignored or blocked at DC).
- Domain controllers previously had installed KB3161949 which broke SAMBA using NETBIOS transport because of a tightened-up security posture due to the KB3161949 hotfix.
When KB3161949 is installed, there is a HKLM registry setting which will allow NETBIOS (UDP 137) with NTLM authentication outside of the local subnet by setting the AllowNBToInternet DWORD value to 1.
- After installing March 2017 security rollup the AllowNBToInternet parameter no longer seems to work.
After much effort attempting to back out Microsoft March 2017 security updates on domain controllers (this did not resolve the issue) we solved our problem by making an emergency change to all AIX SAMBA to use Kerberos authentication.
It seems like the March rollup included a critical update to fix a denial of service vector in SMB. I am wondering if the SMB code fork deployed by Microsoft also contained code similar to that included in MS16-077 in a way that prevented the AllowNBToInternet option from working?
And please explain why, even after backing out the March rollup, the functionality of KB3161949 to AllowNBToInternet was no longer operational?
Note background information related to the issues exposed by KB3161949 are here: "https://social.technet.microsoft.com/Forums/windows/en-US/5b32fb1c-bb5d-4be0-8a61-5adcb6ea4eb7/kb3161949-june-2016-update-causes-network-file-shares-to-become-unavailable?forum=w7itpronetworking" and here is a link to the KB: "https://support.microsoft.com/en-us/help/3161949/ms16-077-description-of-the-security-update-for-wpad-june-14,-2016"
|
OPCFW_CODE
|
Big performance hit from v0.1.4
v0.1.4 is around 3 times slower than v0.1.3
I tested 2 models with cpu only.
The models are dolphin-2.1-mistral-7b.Q3_K_M and openhermes-2-mistral-7b.Q5_K_M.
I use Debian 12 with AMD Ryzen 5 5600H.
Yes I can share the results of the verbose mode now.
For the v0.1.3
total duration: 18.536665894s
load duration: 414.797µs
prompt eval count: 36 token(s)
prompt eval duration: 2.421661s
prompt eval rate: 14.87 tokens/s
eval count: 93 token(s)
eval duration: 16.044084s
eval rate: 5.80 tokens/s
While for v0.1.4
total duration: 1m22.372065006s
load duration: 1.045899ms
prompt eval count: 36 token(s)
prompt eval duration: 26.860807s
prompt eval rate: 1.34 tokens/s
eval count: 73 token(s)
eval duration: 55.477673s
eval rate: 1.32 tokens/s
I will try the journal logs tomorrow because I do not run ollama as a service.
ok now I noticed the v0.1.3 has this:
{"timestamp":1698189486,"level":"INFO","function":"main","line":1296,"message":"system info","n_threads":6,"total_threads":12,"system_info":"AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
and the v0.1.4 this:
{"timestamp":1698190231,"level":"INFO","function":"main","line":1325,"message":"system info","n_threads":6,"n_threads_batch":-1,"total_threads":12,"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
Great, thanks for sharing! Yes it looks like there's an issue where AVX flags aren't on in 0.1.4 and 0.1.5 – a fix is on the way in #900
|
GITHUB_ARCHIVE
|
So I've recently become a little obsessed with cricket stats thanks to the Short League friendlies - I wanted to replicate the Stumped league stats for those games so created my own little spreadsheet. Unfortunately this interest rapidly became an obsession, and has now spawned a suggestion for new stats in the main interface! I thought I'd open a conversation here about this approach to see what others think.
Basically I've been thinking of a way to judge allround contributions - both batting and bowling. This led me to some research and particularly the discussion here.
I really like the second approach suggested in that page, from the cricinfo stats editor S Rajesh. I reckon it's a really clever idea. And it's one I could easily do from scorecards which is a bonus! The other approaches use in-game data which would require a scraping in-game info, which is more than I (or realistically Rob) would likely be prepared to do!
All the approaches are trying to judge individual performances with the bat and ball in the broader context of limited over matches - where "average" is not sufficient to describe contributions, and "economy rate" is also misleading if you look at it in isolation. And being a scoring machine on a flat pitch may not be as impressive as scoring fewer runs on a real turner.
So the idea proposed by Rajesh is to weigh each player's individual performance relative to the average performance in the games that they have played. Here's my summary:
A player's "batting score" is an indication of how many runs a player makes per wicket relative to others in the games they have played in (their "batting average index"), and how fast they made those runs relative to other batsmen in those games (their "batting strike rate index").
The player's "batting average index" is their overall batting average divided by the overall batting average in the games they played. So if a player has an average of 50, and in the games they played the overall average is 25, they'd have a "batting average index" of 50/25=2. That is, the player scores twice as many runs for their wicket as everyone else they played with and against, on average. An index greater than 1 means they have performed better than others, less than one means they're not as good.
Their "batting strike rate index" is the rate they scored those runs compared with the other runs scored in their games (ie their average strike rate divided by the total strike rate for runs scored in the games they played). If a player has a strike rate of 100, and the overall strike rate is 50, their "batting strike rate index" would be 100/50=2. In plain English, this means that the player scores runs 2 times faster than everyone else they play with and against, on average.
Their total batting score is:
Code: Select all
Batting score = (batting average index) x (batting strike rate index) x 100
(batting average index) = (individual average runs / wicket) / (total runs / wicket for the games they played in)
(batting strike rate index) = (individual average runs / 100 balls) / (total runs / 100 balls for the games they played in)
Similarly we work out bowling indices based on bowling average and economy rate. In this case lower is better, so it's the inverse of the batting ones: "bowling average index" is the overall bowling average in the games the player participated in, divided by their personal bowling average. So if the bowler average 10, and the overall average in those games was 20, their bowling average index is 20/10=2. Same for "bowling economy rate index" - if they conceded 3 runs per over, and the average is 6 runs per over, their bowling economy rate index is 6/3 = 2. The scores are combined in the same way:
Code: Select all
Bowling score = (bowling average index) x (bowling economy rate index) x 100
(bowling average index) = (total runs / wicket for the games they played in) / (individual average runs / wicket)
(bowling economy rate index) = (total runs / over for the games they played in) / (individual average runs / over)
All round score
I've come up with this one - just the addition of Batting score and Bowling score. There's two ways to look at this one - either the absolute best individual contributions (eg a bowler that's so much better than other performers in their games that their batting score is irrelevant), or the best "all round" contributions (eg someone who is expected to contribute with both bat and ball). For an "all rounder" contribution it kind of makes sense to limit the comparison to those who've batted and bowled a certain amount (say scored 100 runs, and bowled 20 overs). But it's also interesting to compare the individual contributions of pure batsmen and bowlers using these scores.
as a postscript, I've also had some thoughts about fielding (eg using "net runs saved" and "fielding dismissals" indices something like the scores above) but the first of those is impossible to do unless you track fielding contribution in games. The commentary does show when a fielder misses an opportunity or saves runs, but it doesn't quantify these chances so it could only be done in the database itself. I couldn't be bothered scraping the commentary for it, and I doubt there's a lot of appetite to make coding changes for this sort of flavour enhancement.
|
OPCFW_CODE
|
Now that we are at the end of June and we have begun to work on preparing the laboratories for the next academic year, we want to show you how we manage the operating system installations in the FIB PC labs / classrooms.
It is clear that when you have to manage a computer park of more than 350 PCs, you have to rely on some tool that allows you to make a master image of a computer and distribute it in some way in the rest of equipments.
It becomes more complicated when we add different variables or requirements, such as:
- Different hardware, not all equipments are the same, since they were bought or renewed in different years.
- You work with more than one operating system natively, not virtualized (Windows and different versions of Linux)
- Cloning must be done by network, by groups of PCs and must be fast, it is not feasible to do it physically passing equipment to computer.
- You must be able to do post-configuration of systems once they are installed.
While it is true that there are some tools in the market that meet the conditions of the previous points, there are not so many if we want a free software tool and also free.
At this point, after evaluating several options, we decided to use the OpenGnSys tool, since in addition to satisfying all the requirements, it presents the following advantages:
- Cloning of Windows operating systems without having to resort to the sysprep utility of Microsoft, which greatly simplifies the creation of a master image of Windows.
- Post Configuring Windows and Linux systems using "scripts" in Bash Shell.
- Distribution of images using Unicast, Multicast or BitTorrent protocols. These last two are especially interesting when we have to do the distribution to many teams at once.
- Modular and scalable.
- Allows delegated administration.
- Focused in the educational field, although it can be applied in any other, it is a project created with the effort and collaboration of several universities of the state.
- It is a project that is alive and in constant evolution.
The technologies used by OpenGnSys are:
- LAMP Platform
- Udpcast (multicast) i Bittornado (torrent)
How it works
The client computers boot from the network (the network card must support the PXE protocol, nowadays almost all have it), ask the DHCP server the IP that the computer has and the PXE server.Then the client will receive a "boot file" from the PXE server and then load the cloning engine, which is no more than a Linux mini-distribution that will wait for the commands sent to it from the OpenGnSYS server. These orders, among others, could be:
- Boot the operating system of a partition.
- Create a master image of a computer partition and save it to the server.
- Restore an image of an operating system from the server to the client.
- Show a menu to the user to choose which system to start.
Note that in installations that are not very large, a single Linux server can assume all the roles of an OpenGnSys server and, if necessary, this can be a virtual server without any problems.
The management of the environment is done via web and once we validate, we can create organizational units (for example schools), which will be formed by groups of classrooms and equipment.
From the same web environment we can create an image of a computer, restore an image to a computer for unicast, a whole classroom by multicast or bittorrent, partition all computers in a classroom or execute post-configuration scripts on a computer or every equipment of a classroom.
In short, a very powerful tool that can be used in any area where there is need to distribute images of operating systems.
The project website is www.opengnsys.es and if you decide to cooperate surely you would be welcome.
|
OPCFW_CODE
|
Inuyasha doesn't have a personal statement currently.
27 years old
Gender Not Set
Joined: 19-October 03
Profile Views: 519*
Last Seen: 2nd October 2004 - 10:17 PM
Local Time: Mar 30 2017, 04:45 PM
74 posts (0 per day)
* Profile views updated each hour
17 Jul 2004
In QPE-Gaim (0.4-1) using all the default files from the feed, except libopie (using 1.0.3 instead), on the Sharp 1.32 ROM for the 5600, Gaim refuses to save preferences or accounts.
It appears to be trying to write to /.gaim, instead of ~/.gaim.
The output of the console is this:
prefs: Reading /.gaim/prefs.xml
prefs: Error reading prefs: Failed to open file '/.gaim/prefs.xml': No such file or directory
accounts: Error reading accounts: Failed to open file '/.gaim/accounts.xml': No such file or directory
pounces: Error reading pounces: Failed to open file '/.gaim/pounces.xml': No such file or directory
I am using the libpng fix, in case it makes a difference. I tried re-installing the files, but nothing new, it still spat out the error.
Also, even though the accounts aren't saving, Jabber refuses to connect and crashes when it tries. On the console I get no error messages - just messages about it connecting to the server, nothing about what went wrong.
Are there any solutions to these two problems?
25 May 2004
I've got a few questions about applications available for the 5600...
First, does anyone know of a good, featureful IRC client for the Zaurus? I've tried OpieIRC, ZIC and ZICIZ (updated version of ZIC?) and none of them have the features, or lack of bugs that I need... Basically I'd like the ability to join multiple channels on startup, be able to execute about 2-3 IRC commands when connecting to a network, and work well (i.e. no major flaws/bugs).
Second thing, is there a version of the OPIE Today application that works under the DTM database? Because from my brief use of OZ (which does not work well on the 5600, at least not in my case), I really loved Today, and would enjoy having it work with my current setup.
Thanks in advance.
Inuyasha has no visitors to display.
Other users have left no comments for Inuyasha.
There are no friends to display.
|Lo-Fi Version||Time is now: 30th March 2017 - 04:45 PM|
|
OPCFW_CODE
|
Virtualization is a fast-growing market, and the good news is that you can build your virtual machines (VMs) and manage your environment with free software. See how VMware and Microsoft products stack up against lesser-known virtualization products such as VirtualBox, QEMO, and Oracle VM.
You like free software, right? Virtualization is one of the fastest growing technologies, and one of the key driving factors behind its growth is the fact that many of today’s premier virtualization products are free. This lets organizations use virtualization for many different scenarios without spending a lot of money. Let’s look at the 10 best free virtualization products that work with Windows.
10. VMware Player—VMware Player doesn’t let you create new virtual machines (VMs). However, it runs on both Linux and Windows hosts, and can run both VMware and Microsoft VM images. VMware Player is also the basis for VMware’s thriving Virtual Appliance Marketplace. You can download VMware Player from www.vmware.com/download/player.
9. Xen—Xen is an open-source, hypervisor-based virtualization product. You load Xen from a Linux host, and the latest releases support both Windows and Linux guests. Xen-enabled Linux systems can also run under Microsoft’s Hyper-V virtualization, taking full advantage of the new high performance VMBus architecture. You can download Xen from www.xen.org/download.
8. VirtualBox—VirtualBox runs on Windows, Linux, and Macintosh hosts, and can run Windows Vista, Windows XP, Windows 2000, Windows NT, and many Linux versions as guests. VirtualBox comes in both a commercial and a free version. VirtualBox VMs provide audio, USB, and iSCSI support. You can find VirtualBox at www.virtualbox.org.
7. QEMU—A bit different from the other virtualization products listed, QEMU is a processor emulator. QEMU isn’t an open-source project, but it is free software and is utilized by a number of other products, including VirtualBox and Win4Lin. Its system-emulation mode provides basic support for Windows guests as well as DOS, Linux, and BSD. QEMU is found at fabrice.bellard.free.fr/qemu/about.html.
6. Oracle VM—Not to be left out of the burgeoning virtualization market, Oracle began providing a free Xen variant in late 2007. You manage Oracle VM with a browser-based management console. Although the Oracle VM software is free, Oracle charges for support. You can download Oracle VM at www.oracle.com/technologies/virtualization/index.html.
5. Virtual Iron Single Server Edition—Best known for its virtual infrastructure management capabilities, Virtual Iron also offers Single Server Edition, a free, limited-feature version of its enterprise-class virtualization product. The free version can run no more than 12 VMs and supports a maximum Microsoft Virtual Hard Disk (VHD) import or export size of 18GB. You can get the Virtual Iron Single Server Edition from www.virtualiron.com/products.
4. Microsoft Virtual PC 2007—Virtual PC 2007 is Microsoft’s desktop virtualization product. It has host and guest support for Windows Vista. It also supports multiple monitors, x64 host hardware, and hardware-assisted virtualization. You can download Virtual PC 2007 from www.microsoft.com/windows/downloads/virtualpc.
3. Microsoft Virtual Server 2005 R2—Microsoft’s primary virtualization offering for Windows Server 2003 hosts, Virtual Server 2005 R2 is designed for production server virtualization tasks. It provides 64-bit host support but no support for 64-bit guests. Virtual Server 2005 R2 supports Windows Server guests as well as the popular enterprise Linux OSs. You can download Virtual Server 2005 R2 from www.microsoft.com/downloads/details.aspx?FamilyID=6dba2278-b022-4f56-af96-7b95975db13b.
2. VMware Server—VMware Server runs on both Windows and Linux, and it provides 32-bit and 64-bit support for hosts and guests. VMware Server 2.0, currently in beta, also has experimental support for Windows Vista and Windows Server 2008. Its VM’s have audio and USB guest support as well as support for snapshots. You can get VMware Server at www.vmware.com/download/server.
1. Microsoft Hyper-V Server—Hyper-V Server, as a standalone, costs $29. However, it’s bundled with certain editions of Windows Server 2008, making it essentially free for Server 2008 customers. Hyper-V uses modern hypervisor-based architecture. It requires an x64 processor with hardware-assisted virtualization, and can run Windows and Linux guests. You can download the Hyper-V beta as part of Server 2008 RC1 at www.microsoft.com.nsatc.net/downloads/details.aspx?familyid=8F22F69E-D1AF-49F0-8236-2B742B354919.
|
OPCFW_CODE
|
Workflows (formerly known as Genius Workflows) are automated tasks and processes that can be run automatically or manually based on the conditions of an incident. There are endless possibilities designed to fit your exact use case. For example:
🔮 Remind Slack channel to update status page every 30 min 🔮 Automatically email legal@ whenever a SEV0 or greater occurs 🔮 Create Jira tickets on multiple project boards depending on which team is impacted 🔮 Open a Zoom or Google Meet bridge for high severity (>SEV1) incidents for high bandwidth conversations 🔮 Automatically page the Infrastructure team via PagerDuty or Opsgenie whenever the postgres-db is impacted 🔮 Use different Confluence or Google Doc postmortem templates if the incident was security related 🔮 ...thousands of other combinations to fit your exact incident process!
If you need help configuring a Workflow or don't see what you're looking for, reach out via Slack, firstname.lastname@example.org, or Intercom.
This step-by-step tutorial goes through the most popular type of Workflow, Incident Workflows. Other Workflows such as Action Item, Alert, Pulse, Standalone can be found here. However, the concepts are the same!
When you create a new Workflow, provide a specific name and description as you'll likely have many. This can be for a specific task you'd like to automate. We suggest creating more bite-sized Workflows scoped to specific tasks versus cramming a series of tasks into a single Workflow. This will provide more granular control of when you want them to trigger (see Step 2).
A trigger as the name states are ways a Workflow can be started. For example, incident_created will run a Workflow whenever the incident is created or severity_updated will run a Workflow whenever a severity is updated. As you can see, triggers can be broad or narrow that focus on a specific attribute of an incident.
Pick from a predefined list:
Depending on the type of Workflow, you may optionally choose to set the following configurations:
Conditions are a specific criteria you'd like to be matched in order for a triggered Workflow to run.
Conditions are used in parallel with triggers (Step 2) and provide an additional layer of granularity.
For example, trigger (incident_created) and condition (status = started) will run a Workflow whenever the incident starts. Whereas a more narrow use case would be a trigger (severity_updated) with condition (severity = SEV 1, SEV 0) would only run when the incident was set to high severity.
Pick from a pre-defined list (multi-select available):
By default all conditions are set to is one of and the Workflow will run if any of the conditions are met.
After you hit create Workflow on Step 4, you'll be able to then add Tasks to the Workflow.
Tasks are steps that are run whenever the Workflow is triggered. The more integrations you have configured the more tasks you'll see. All available tasks and their how-to can be found in the documentation drop-down under Workflows.
Each Workflow also supports multiple Tasks as well.
If there is a Task you want but don't see or need help, reach out to us on Slack, email@example.com, or Intercom.
That's it, your Workflow is configured and ready to go. We can't wait to see what you build!
|
OPCFW_CODE
|
Image Stitching methods to remove seams for stitched image
I have used SURF for feature detection and then I have used RANSAC. The stitched image I got has seams. How do I remove these?
stitches? are they straight? angular? variable size? random?
They are actually angular at the points where the two images get stitched.
Sorry. I am having some problems with the images right now. Could you generalize?
I implemented removing of seams for stitching images of eye's retina. Below you can find the final effect:
To do this, I implemented a technique described on page 138 of this paper. Below you can find pseudocode for doing this with explanation, full source can be found on my repository.
Algorithm is based on calculating the final value of pixel by performing weighted average of pixel values of images that are overlapping over this pixel. Weight is based on the distance from the pixel to the edge of the image. If the pixel is closer to the center of the image that it belongs to, then is more important and the weight is bigger. Distance of the pixel to the edge of image can be calculated by using function distanceTransform implemented by OpenCV. This is the effect of distance transform on one of the eye's retina image placed on the final mosaic:
Below you can find pseudocode:
// Images is an array of images that the program is stitching
// For every image (after transform) on final plane calculate distance transform
for (image in images) {
// Calculate distance transform
image.distanceTransform = distanceTransform(image)
}
// For every pixel in final mosaic, calulate its value by using weighted average
for (row in rows) {
for (col in cols) {
currentPixel = FinalMosaic(col, row)
// Values for weighted average
numeratorSum = 0
denominatorSum = 0
// Go through all images that can overlap at this pixel
for (image in images) {
// If image is not overlapping over this pixel just skip
isOverlapping = image.isOverlapping(currentPixel)
if (isOverlapping) {
currentPixelWeight = image.distanceTransform.valueAt(currentPixel)
numeratorSum += currentPixelWeight * currentPixel.value
denominatorSum += currentPixelWeight
}
}
if (denominatorSum != 0) {
currentPixel.value = numeratorSum / denominatorSum
}
}
}
If anything is unclear, write questions in the comments and I will try to improve the answer.
Could you please tell what was the solution you got eventually as I couldn't understand how will we be removing the seam line in the image if we join 2 images and we get a single vertical seam line between two images at the point where they are joining.
This is no answer.
|
STACK_EXCHANGE
|
Imagine the life of people who invested in Bitcoin years ago, today they are amongst the richest personalities in the world! So, what they did which you were unable to do? They were able to spot the potential, which most like you were unable to spot or were busy bashing it. But can that change now? Is it possible to travel back in time?
The answer is No! There is no Time-Machine built yet, but before your head drops, there is actually a possible alternative and that's called Liracoin! Liracoin is that chance which Life RARELY gives, the 2nd opportunity for people who missed Bitcoin, and even today they regret it!
Liracoin is a cryptocurrency, a digital currency built and applied to the blockchain, a public share registry.
Blockchain is the revolution, cryptocurrency is the first tool applied to the blockchain system. It makes digital currencies far more secure and immutable, as they are disconnected from the banking system, and also ensuring complete anonymity and nontraceability.
In a wave of thousands of cryptocurrencies, Liracoin has the chance to endure thanks to its community. Liracoin is since its conception a community-driven project, an autonomous and decentralised organisation, separate from any political limit, and has no geographical location.
Liracoin has three pillars as well as its strengths and differentiation features: Ambassadors, Adoption, and Application.
Liracoin is the currency of the people. Liracoin has shown its diffusion especially in Africa and in Europe, where 200 business started to accept Liracoin (LIC) as a means of payment.
Liracoin is a cryptocurrency built and applied to the blockchain, a public share registry. The production of Liracoin takes place through: 95% Proof-Of-Stake and 5% Proof-Of-Work. Liracoin uses the Green Mining technology Proof-Of-Stake, based on deposit, seniority, and transactions in which new Liracoins are forged for each block and combined with POW mining along with Scrypt technology. Liracoin’s POS model reduces the risk of hacker attacks, data manipulation or concentration of value and monopolization of the market. This year, a hard fork will take place to go entirely to the POS technology.
So with Liracoin ALREADY in the portfolio of top investors, it’s the opportunity you too join in!
Get further details from here:
Satoshi Nakamoto blog top 50: https://www.satoshinakamotoblog.com/the-top-50-cryptocurrencies-2018
Satoshi Nakamoto blog: https://www.satoshinakamotoblog.com/the-first-cryptocurrency-rated-and-certified-by-satoshinakamotoblog-for-the-year-2019
|
OPCFW_CODE
|
import { CastIntConfig } from '../types/cast-int-config';
import { CastStringConfig } from '../types/cast-string-config';
import { CompareProperty } from '../types/compare-property';
import { IsBetweenConfig } from '../types/is-between-config';
/**
* converts the case of the given string into camel case
* @param item to convert to camel case
* @returns a camel case version of the input string
*/
export function camelCase(item: string): string {
if (typeof item !== 'string') {
return item;
} else if (item.length < 2) {
return item.toLowerCase();
}
// lower the first character, then upper case anything following a space, underscore or dash
return item.substr(0, 1).toLowerCase() + item.substr(1).replace(/[\s_-]+(.)/g, (match) => {
return match.substr(1).toUpperCase();
});
}
/**
* casts the given item into a boolean, empty strings and are concidered true which is needed for attribute to work property
* @param value to cast into a boolean
* @param defaultValue to use if value is null or undefined
* @returns a boolean value of the given item
*/
export function castBoolean(value: any, defaultValue: boolean = false): boolean {
if ((value && value !== 'false') || value === '') {
return true;
} else if (value == null && defaultValue !== false) {
return defaultValue;
}
return false;
}
/**
* casts the given item into an int
* @param value to cast into an int
* @param config options to determine how to cast the item into an int
* @returns an int value of the given item (default value or null if item is NaN)
*/
export function castInt(item: any, config?: CastIntConfig): number {
config = Object.assign<CastIntConfig, CastIntConfig>({ defaultValue: null, radix: 10 }, config);
item = parseInt(item, config.radix);
return isNaN(item) ? config.defaultValue : item;
}
/**
* casts the given item into a string if possible, if not, an empty string is returned
* @param item to cast into a string
* @param [config] options to apply to the string after it has been cast
* @returns a string value of the given item
*/
export function castString(item: any, config?: CastStringConfig): string {
if (item == null || (typeof item !== 'string' && typeof item.toString !== 'function')) { return ''; }
// try casting into a string
let stringValue: string = item.toString();
if (typeof stringValue !== 'string') { return ''; }
if (config) {
// trim string
if (config.trim) {
stringValue = stringValue.trim();
}
// convert string to proper case
if (config.case === 'lower') {
stringValue = stringValue.toLowerCase();
} else if (config.case === 'upper') {
stringValue = stringValue.toUpperCase();
}
}
return stringValue;
}
/**
* compares two items to determine which item is larger
* @param item1 to compare
* @param item2 to compare
* @param compareProperties array of properties that will be used to compare the two items. Set @see CompareProperty.ascending to false for
* a descending sort on a property
* @returns 1 if item1 is larger, -1 if item2 is larger and 0 if they are equal
*/
export function compareItems<T = any>(item1: T, item2: T, ...compareProperties: (string | CompareProperty)[]): -1 | 0 | 1 {
if (item1 === item2) { return 0; }
// if there are not any compare properties, then compare the full items
if (compareProperties.length < 1) { compareProperties = [ '' ]; }
let returnValue: -1 | 0 | 1 = 0;
for (const compareProperty of compareProperties) {
// get the values of each item for the current compare property
const property = (typeof compareProperty === 'string') ? compareProperty : compareProperty.property;
const value1 = getValue(item1, property);
const value2 = getValue(item2, property);
if (value1 === value2) {
// if the values are the same, then continue to the next property
continue;
} else if (value1 === undefined) {
// undefined goes at the end of the array (based on JavaScript's default sort)
returnValue = 1;
} else if (value1 === null) {
// null goes after everything other than undefined
returnValue = (value2 === undefined) ? -1 : 1;
} else if (value2 == null) {
// if value1 is not null or undefined and value2 is, then value2 goes after value1
returnValue = -1;
} else {
// if value1 and value2 are not equal or null, then return which one is larger
returnValue = (value1 > value2) ? 1 : -1;
}
// swap the return value if the compare property has ascending set to false
return (typeof compareProperty === 'string' || compareProperty.ascending) ? returnValue : returnValue * -1 as -1 | 1;
}
return 0;
}
/**
* create a copy of the given item (resulting object will be a json object without methods)
* @param item to copy
* @returns a copy of the provided item
*/
export function deepCopy<T>(item: any): T;
export function deepCopy(item: any): any;
export function deepCopy(item: any): any {
return JSON.parse(JSON.stringify(item));
}
/** get all properties with values equal to the provided value */
export function getPropertiesByValue<T>(item: T, valueToGet: any): (keyof(T))[] {
return keyValuePairs(item).filter(({ value }) => value === valueToGet).map(({ key }) => key);
}
/**
* gets a value without throwing an error if the property is not on the item
* @param item to get the value from
* @param property to get
* @returns value of property pulled from the source object
*/
export function getValue<T, K extends keyof(T)>(item: T, propertyToGet: K): T[K];
export function getValue<ReturnT = any, ItemT = any>(item: ItemT, propertyToGet?: string): ReturnT;
export function getValue<ReturnT = any, ItemT = any>(item: ItemT, propertyToGet: string = ''): ReturnT {
const properties = (typeof propertyToGet === 'string') ? propertyToGet.split(/[\.\[\]]/) : [];
let valueToReturn: any = item;
for (const property of properties) {
if (valueToReturn != null && property.trim() !== '') {
valueToReturn = (isNaN(parseInt(property, 10))) ? valueToReturn[property] : valueToReturn[parseInt(property, 10)];
} else {
break;
}
}
return valueToReturn;
}
/**
* checks if a value is between two other values (swaps min & max if min is greater than max)
* @param value checked to see if it is between min and max
* @param min value that value must be greater than (or equal too depending on config.endpoints)
* @param max value that value must be less than (or equal too depending on config.endpoints)
* @param config used to determine if the value is between min and max @see IsBetweenConfig
*
* @title Example(s)
* @dynamicComponent examples/core/object-is-between
*/
export function isBetween<T = any>(value: T, min: T, max: T, config?: IsBetweenConfig<T>): boolean {
config = Object.assign<IsBetweenConfig, IsBetweenConfig>({
comparator: compareItems,
endpoints: 'both'
}, config);
if (config.comparator(min, max) > 0) {
[ min, max ] = [ max, min ];
}
const minComparison = config.comparator(value, min);
const maxComparison = config.comparator(value, max);
// value is greater than min and less than max, or value is equal to an included endpoint
return (minComparison > 0 && maxComparison < 0)
|| (minComparison === 0 && (config.endpoints === 'both' || config.endpoints === 'min'))
|| (maxComparison === 0 && (config.endpoints === 'both' || config.endpoints === 'max'));
}
/** gets the key (property)/value (property value) pairs off of the object */
export function keyValuePairs<T = any>(item: T): { key: keyof(T), value: T[keyof(T)] }[] {
return (item == null) ? [] : (Object.keys(item) as (keyof(T))[]).map(key => ({ key, value: item[key] }));
}
/** gets the property keys from the provided object */
export function keys<T = any>(item: T): (keyof(T))[] {
return keyValuePairs(item).map(({ key }) => key);
}
/**
* gets each property from the source and sets them on the target
* @param target object to map values to
* @param source object to get values from
* @param overwrite non-null values of target it true
* @returns target object after the mapping has occurred
*/
export function mapProperties<TargetT>(target: TargetT, source: any, overwrite: boolean = true): TargetT {
for (const property in target) {
// set the properties value on the target from the source if the source property is getable and the target property is setable
if ((!Object.getOwnPropertyDescriptor(target, property) || Object.getOwnPropertyDescriptor(target, property).set)
&& (!Object.getOwnPropertyDescriptor(source, property) || Object.getOwnPropertyDescriptor(source, property).get)
&& (target[property] == null || overwrite)) {
target[property] = source[property];
}
}
return target;
}
/**
* sets a value without thowing an error if the property is not on the item
* @param item to set a value on
* @param value to set the property on item to
* @param property to set on the item
* @returns item after the property has been set to the given value
*/
export function setValue<ItemT>(item: ItemT, value: any, property: string): ItemT {
const properties = (typeof property === 'string') ? property.trim().split(/[\.\[\]]/) : [];
// get the last property in the list that is not an empty string to set
let propertyToSet: string;
while ((typeof propertyToSet !== 'string' || propertyToSet.trim() === '') && properties.length > 0) {
propertyToSet = properties.pop();
}
// join the remaining properties to get the object to set the property on
const objectToSet = getValue(item, properties.join('.'));
if (objectToSet) {
objectToSet[propertyToSet] = value;
}
return item;
}
/**
* gets the values of all of the keys on the given item
* @param item to pull values from
* @returns values of each property on the item
*/
export function values<T>(item: T): T[keyof(T)][];
export function values<T = any>(item: any): T[];
export function values<T>(item: T): T[keyof(T)][] {
return keyValuePairs(item).map(({ value }) => value);
}
|
STACK_EDU
|
Cultural Studies of Robot Design
Cultural Sense-Making in HRI
Culture in Robot Design
Robotics is a transnational science, but robotics research labs and researchers themselves are situated in particular national, organizational, and other local cultural contexts. Robotic technologies therefore often incorporate various cultural assumptions and practices of their designers. To understand how culture is produced and reproduced through the conceptualization, design, and use of robots, we explore the cultural discourse and practices among roboticists around the world. One focus of our work has been on using interviews and ethnographic participatory observation in robotics labs to the study of how sociality is conceived of and designed into robots in the US and Japan. Between 2010 and 2019, in partnership with IEEE, we collected over 100 oral history interviews with robotics researchers around the world to document the development of robotics as a scientific field. We are currently analyzing these interviews to better understand the various ways in which robotics researchers understand the aims, practices, and social consequences of robotics, and to map the cognitive and social networks of robotics. The transcripts and video recordings of these interviews are publicly available through the IEEE History Center and IEEETv.
- Tan, H., Wang, D., & Sabanovic, S. (2018, August). Projecting Life Onto Robots: The Effects of Cultural Factors and Design Type on Multi-Level Evaluations of Robot Anthropomorphism. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 129-136). IEEE.
- Fraune, M. R., Kawakami, S., Sabanovic, S., De Silva, P. R. S., & Okada, M. (2015). Three’s company, or a crowd?: The effects of robot number and behavior on HRI in Japan and the USA. In Robotics: Science and Systems.
- Bennett, C. C. (2015). The effects of culture and context on perceptions of robotic facial expressions. Interaction Studies, 16(2), 272-302.
- Bennett, C. C., Šabanović, S., Fraune, M. R., & Shaw, K. (2014, August). Context congruency and robotic facial expressions: Do effects on human perceptions vary across culture?. In The 23rd IEEE international symposium on robot and human interactive communication (pp. 465-470). IEEE.
- Šabanović, S., Bennett, C.C., Lee, H.R. (2014) “Towards Culturally Robust Robots: A Critical Social Perspective on Robotics and Culture.” Proceedings of the Workshop on Culturally Aware Robots at the 9th International Conference on Human-Robot Interaction (HRI’14), Bielefeld, Germany, March 2014.
- Lee, H.R., Šabanović, S., (2014). “Culturally Variable Preferences for Robot Design and Use in South Korea, Turkey, and the United States” Proceedings of the International Conference on Human-Robot Interaction (HRI’14), pp. 17-24, Bielefeld, Germany 2014.
- Lee, H. R., Šabanović, S.,(2013). “Weiser’s Dream in the Korean Home: Collaborative Study of Domestic Roles, Relationships, and Ideal Technologies” Proceedings of the International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2013), Zurich, Switzerland, September 2013, pp. 637-646.
- Lee, H., Sung, J., Šabanović, S., Han, J. (2012). “Cultural Design of Domestic Robots: A Study of User Expectations in Korea and the United States.” Proceedings of RO-MAN 2012, Paris France, September 2012, pp. 803-808.
- Šabanović, S. (2010). “Emotion in Robot Cultures: Cultural Models of Affect in Social Robot Design.” Proceedings of Design and Emotion 2010 (D&E2010), Chicago IL, October 2010.
- Sabanovic, S., Milojevic, S., Asaro, P., & Francisco, M. (2015). Robotics Narratives and Networks [History]. IEEE Robotics & Automation Magazine, 22(1), 137-146.
- Šabanović, S. (2014). Inventing Japan’s ‘robotics culture’: The repeated assembly of science, technology, and culture in social robotics. Social Studies of Science, 44(3), 342-367.
- Šabanović, S., Milojević, S., Kaur, J., Francisco, M., & Asaro, P. (2014). Raymond Jarvis [History]. IEEE Robotics & Automation Magazine, 21(4), 120-126.
- Milojević, S., Šabanović, S., & Kaur, J. (2013). Miomir Vukobratovic [History]. IEEE Robotics & Automation Magazine, 20(2), 112-122.
- Milojević, S., & Šabanović, S. (2013). Robotics narratives and networks: Conceptual foundations for a non-linear digital archive.
- Sabanovic, S., Milojevic, S., & Kaur, J. (2012). John McCarthy [History]. IEEE Robotics & Automation Magazine, 19(4), 99-106
- Ballard, L. A., Sabanovic, S., Kaur, J., & Milojevic, S. (2012). George charles devol, jr.[history]. IEEE Robotics & Automation Magazine, 19(3), 114-119.
- Šabanović, S. (2010). Robots in society, society in robots. International Journal of Social Robotics, 2(4), 439-450.
|
OPCFW_CODE
|
The latest League of Legends Patch 11.19 launched various modifications, nonetheless primarily an important one is the introduction of a mannequin new champion Vex. Many people are questioning about how she’s going to impression the current meta of the game.
Will Vex change the meta of League of Legends?
Vex is a very explicit champion that shall be a terrific counter to many various champions in meta. She could be utilized successfully in opposition to many champions who rely upon mobility and dashes/blinks/leaps and so forth. It’s all because of her bundle and her very extremely efficient passive functionality often known as “Doom.”
It grants Vex a value that will interrupt dashes of the enemy champion and fears them, which is a sort of arduous crowd administration. It’s fairly crucial on account of many very popular champions, notably amongst the proper players on this planet, rely carefully on mobility. Lee Sin, as an illustration, has many dashes in his bundle, and his largest playmaking potential shall be influenced by a champion that will stop his plans instantly.
In addition to, after last yr’s modifications to itemization in League of Legends, many of the customary devices moreover present dashes like Galeforce, Hextech Protobelt, Prowlers Crawl, and further. Furthermore, the second part of the passive marks enemies that perform any mobility movement like dash or blink near Vex. Then she is going to be capable of deal additional harm to the champion that she marked. Aside from that, her bundle is pretty straightforward with many skillshot skills and a shield, nothing too fancy.
- Passive: Like talked about sooner than, interrupts dashes, blinks, leaps, and so forth., and as well as marks and fears enemies. Completely different skills can proc that mark and as well as fear enemies.
- Q: It’s a straightforward damage-dealing projectile that travels throughout the route of your deciding on.
- W: It’s fairly very like Sion’s with a shield that detonates and affords magic harm.
- E: Moreover it’s a skillshot that provides magic harm and slows enemies in a spherical house.
- R: Vex throws her shadow, dealing magic harm. She is going to recast this functionality to pull herself within the path of the enemy champion that was hit. If the champion dies quickly after being damaged by Ves, the expertise will reset, and he or she’s going to have the flexibility to make use of it as soon as extra inside 12 seconds.
Final Concepts on Vex
Considering most of Vex’s skills are skillshots, she would require some expertise and observe in mastering her potential. She’s going to pose an enormous threat on the middle lane and is normally a reasonably impactful counter to many presently throughout the meta picks, along with champions with out dashes that assemble devices that let them to take motion.
For additional articles on League of Legends check out League of Legends: How to Jungle (2021).
https://progameguides.com/lol/how-will-vex-impact-the-meta-in-league-of-legends/ | How will Vex impression the meta in League of Legends?
|
OPCFW_CODE
|
Like the ridges on a violin’s strings, keel scales on some snakes provide a similar purpose: enhancing performance.
Why Some Snakes Have Keel Scales? These specialized scales, found on the belly of certain snake species, resemble the keel of a ship, with a central ridge running down their length.
While smooth scales offer sufficient traction for slithering across various surfaces, keel scales take it one step further.
They act as miniature speed bumps that increase friction and grip, allowing snakes to navigate challenging terrains effortlessly.
Keel scales aid in locomotion and play a crucial role in prey capture and defense against predators.
This unique adaptation has evolved over millions of years, providing an evolutionary advantage to venomous snakes by improving their strike accuracy and increasing their ability to subdue prey.
However, it is intriguing that not all snakes possess these extraordinary structures, raising questions about the diverse strategies employed by different snake species in their ever-changing environments.
Table of Contents
- Keel scales enhance a snake’s performance by increasing friction and grip.
- Keel scales aid in locomotion, prey capture, and defense against predators.
- Keel scales provide enhanced traction and stability during movement.
- Keel scales have a ridge running down the center, increasing surface area contact with the ground.
Anatomy of Keel Scales
Keel scales on the underside of certain snakes provide them with enhanced traction and stability during movement.
These specialized scales have a ridge or keel running down their center, giving them a distinct appearance compared to smooth scales.
The advantages of keel scales are numerous. Firstly, the raised ridge increases surface area contact with the ground, allowing for better grip and preventing slipping.
Secondly, the structure of keel scales creates channels that help channel water away from the snake’s body when moving through wet environments.
This particularly benefits aquatic snakes as it reduces drag and allows for more efficient swimming.
The ridges on keel scales can also aid in resisting sideways motion, providing increased stability during locomotion.
Overall, the unique anatomy of keel scales offers significant benefits to snakes that possess them, enabling better movement capabilities in various environments.
The function of Keel Scales
Keel scales serve multiple functions in snakes, enhancing their grip and traction. They have a ridged texture that increases the surface area of a snake’s body.
This allows for better contact with the environment and improves their ability to climb trees or move across various terrains.
This specialized adaptation enables snakes to navigate through complex environments with ease, providing them with a competitive advantage in hunting and evading predators.
Increased surface area for better grip and traction
To enhance your grip and traction, snakes with keel scales utilize the increased surface area provided by these specialized scales.
Keel scales are characterized by a ridge or keel running down the center, which creates a rough texture.
This adaptation is particularly beneficial for snakes that have an arboreal lifestyle or move through challenging terrains.
By increasing the contact points with their environment, keel scales allow for improved locomotion and maneuverability.
The rough texture of the scales enables snakes to cling onto branches, rocks, or other surfaces with greater ease, preventing slips and falls.
Additionally, the increased surface area enhances their ability to push against the ground while slithering, providing better traction and enabling efficient movement across various substrates.
Overall, keel scales play a crucial role in ensuring effective grip and traction for snakes in their natural habitats.
Aid in movement and climbing
With their specialized scales, snakes can easily navigate through challenging terrains, gaining a firm grip and improved traction to conquer any obstacle.
These keel scales, found on the ventral side of some snake species, not only assist in movement but also play a vital role in climbing.
The prominent ridges along the keels increase surface area contact with the environment, providing numerous benefits for hunting.
By enhancing grip and reducing slippage, these scales allow snakes to move swiftly and silently towards their prey.
Additionally, the increased agility and speed afforded by these specialized scales enable snakes to ambush their targets effectively.
Whether it’s navigating through dense foliage or scaling vertical surfaces, these keel scales contribute significantly to a snake’s ability to maneuver and capture its prey successfully.
The evolutionary significance of the presence of keel scales in some snakes lies in their ability to enhance locomotion and facilitate efficient movement through various habitats.
These unique scales, which possess a ridge or raised center known as a keel, provide several evolutionary adaptations and ecological advantages for snakes.
Firstly, the keel scales increase traction by increasing surface area, enabling snakes to grip surfaces more securely while climbing trees or traversing uneven terrain.
Secondly, the ridges on the keel scales reduce friction with the ground, allowing snakes to move swiftly and silently without alerting potential prey or predators.
Additionally, these specialized scales aid in shedding skin by providing extra rigidity and support during the process.
Overall, the presence of keel scales is an important evolutionary adaptation that allows snakes to thrive in diverse environments and efficiently navigate their surroundings.
Keel Scales in Venomous Snakes
Picture yourself standing face to face with a venomous snake, its sleek and deadly form adorned with the remarkable keel scales that give it an unparalleled advantage in both hunting and survival.
These specialized scales have evolved in venomous snakes as an evolutionary advantage for their prey capture strategy.
Keel scales are characterized by their raised ridge down the center, resembling a boat’s keel. This unique structure provides several benefits to venomous snakes.
Firstly, the ridges increase surface area, allowing for improved traction and grip while navigating various terrains.
Secondly, they enhance sensory perception by increasing sensitivity to vibrations, helping snakes detect approaching prey or potential threats more effectively.
Moreover, keel scales aid in reducing drag when moving through vegetation or across rough surfaces.
This allows venomous snakes to move swiftly and silently towards their prey without alarming them.
By minimizing disturbance during hunting activities, these scales contribute significantly to the success of their predatory endeavors.
The presence of keel scales in venomous snakes is not merely ornamental but serves a crucial role in enhancing their hunting capabilities and overall survival.
Absence of Keel Scales in Some Snakes
Imagine encountering a snake with smooth, undulating scales that don’t have the distinctive ridge found in venomous species.
This absence of keel scales isn’t random, but rather an evolutionary adaptation driven by genetic mutation.
Keel scales, which have a raised central ridge, help venomous snakes grip surfaces and provide traction during movement.
However, some non-venomous snakes have lost these keel scales over time due to genetic mutations.
The absence of keel scales allows these snakes to move more efficiently by reducing friction and increasing maneuverability.
This adaptation gives them an advantage in hunting prey or escaping predators.
While scientists are still studying the exact reasons behind this loss of keel scales, it’s clear that this trait has evolved as a beneficial adaptation for certain snake species.
|
OPCFW_CODE
|
slightly OT: lightdm problem with ubuntu 15.10
John G Heim
jheim at math.wisc.edu
Tue Dec 22 19:08:27 CET 2015
Thanks for the suggestion. Unfortunately, that didnt' help. I have 2
almost identical machines. One I installed ubuntu 15.10 on from a cd.
The other is my fai test machine. They are both Dell Optiplex 760s.
I piped the output from "dpkg --get-selections' to a file on both
machines and then I did a diff. Then I took the output from diff,
grepped for lines starting with a less than, and redirected it to a
file. Then I turned that file into an FAI packages config file. First I
did a fai softupdate. When that didn't work, I did a complete reinstall.
So now doing the dpkg and diff again shows no packages on the
ubuntu-from-cd machine that aren't also on the fai machine.
Of course, there are a lot of packages on the fai machine that are not
on the ubuntu machine.
On 12/22/2015 10:24 AM, Robert Markula wrote:
> Hi John,
> that's probably due to a missing package. You could do a standard Ubuntu
> install (from CD or USB drive that is) on the machine in question, print
> the installed packages and then, on the same machine, run a FAI-based
> install, print the installed packages again and compare/diff both
> package lists.
> Am 22.12.2015 um 17:14 schrieb John G Heim:
>> I'm installing ubuntu 15.10 via FAI. Actually, all I did was take a
>> working ubuntu 15.04 config and change the sources.list file so it
>> installs ubuntu 15.10 on selected machines. The beauty of FAI is how
>> easy it is to do something like that. The main problem I'm having is
>> that there is no network or sound icon on the lightdm login screen.
>> And the problem with that is that without sound, the accessible logins
>> don't work. I'm blind myself butsince this is a university, we're
>> legally required to have an accessible login anyway.
>> Anybody have a clue as to why sound wouldn't work in lightdm? I am
>> sorry to ask a question that is a little bit OT but I am just really
>> My sighted colleague tells me that there is a menu bar at the top of
>> the screen just like in a working ubuntu machine but the sound and the
>> network icons are missing. Logins work though. You get sound after you
>> log in and the screen reader works fine. I examined the lightdm logs
>> and see nothing meaningful.
More information about the linux-fai
|
OPCFW_CODE
|
[OAI-implementers] Open Archives Initiative Protocol for Metadata Harvesting Version 2 news
Mon, 4 Feb 2002 12:04:48 -0500
Dear OAI community:
In mid-2001 the Open Archives Initiative Technical Committee (OAI-TC) was
formed to develop and write version 2 of the Open Archives Protocol for
Metadata Harvesting (OAI-PMH). In this email, we would like to inform you about:
* The context of this technical work;
* The process for undertaking the work;
* The schedule for the release of v.2.0 of the OAI-PMH;
* Anticipated changes in v.2.0 of the OAI-PMH.
Carl Lagoze and Herbert Van de Sompel
=> The context of this technical work was:
1. The original release of the OAI-PMH, version 1.x, was intended to
initiate a year long period of experimentation with the protocol. The goal
was to make this experimental version as stable as possible to encourage
usage and testing. (In fact, only one change from version 1.0 to 1.1 was made
during the year in response to a W3C change in the XML schema
2. The OAI-TC work should avoid if possible the addition of significant
functionality to the protocol. Instead, the scope of work should be to
resolve problems that arose over the past year in reaction to experience in
the user community.
3. While it was not deemed necessary that version 2.0 be backward compatible
with version 1.x, the upgrade path when version 2 is release should be
4. The result of the work, version 2, should be a stable, "standard"
release. It remains undecided as to whether a formal standardization
process will be undertaken with the version 2 protocol.
=> The process for undertaking this work has been:
1. Formation of the OAI-TC representing technical expertise from a
cross-section of the OAI community. Conduct of this work within a closed
technical committee follows the same procedure which was successfully used
for the development of OAI-PMH v. 1.x. Members of OAI-TC are listed at
2. Joint identification of issues
3. Development of issue white papers
4. Vetting of white papers to determine those that were in scope of OAI-TC
5. Development of issue resolution
6. On-line and phone meetings to reach final issue resolution
7. Reporting and validation of the results of the work of OAI-TC to the OAI Steering Committee.
Members of OAI-SC are listed at
8. Protocol revision and writing
=> The schedule for the release of v 2.0 of the protocol is as follows:
1. March 1: release of the protocol to a limited group of alpha testers
2. April 1: beta public release
3. May 1: final public release
=> The following is a summary of the changes that are anticipated for
version 2 of OAI-PMH:
1. Dates and times - Standardize on UTC for all dates and times in protocol
requests ("from" and "until" arguments) and responses.
2. Harvesting Granularity- Allow all ISO8601 time granularities in dates
and times in the "from" and "until" arguments of protocol requests. Allow a
data provider to expose its support date/time granularity in the response to
an Identity request. Default granularity is YYY-MM-DD.
3. Flow control - Improve flow control by allowing the following optional
attributes when a resumptionToken is returned:
* retryAfter - a suggested wait time until the request should be resubmitted
* expirationDate - the projected expiration of the resumptionToken
* completeListSize - total number of items across entire result set
* cursor - index of first item in this batch within entire result set
4. set functionality - It will be possible to specify an identifier as
argument to the ListSets verb, permitting a harvester to inquire to which
sets an item belongs. Responses to ListRecords and GetRecord will return
the sets to which each item belongs. Support of sets remains optional.
5. base-URL - Insulate harvesters from proxy servers by mandating that the
visible identity of the "handling server" in responses be that of a
persistent "master", that may opaquely reflect requests to slaves.
6. xml schema for mandatory Dublin Core - Coordinate with the DCMI so that
the schema used by the OAI is based on one managed by DCMI. Must allow
inclusion of the xml lang attribute (specifying the language of the metadata
7. Dedupping - Define an optional "provenance" XML container that can be
attached to metadata records that a data provider aggregates from other
sources. This will help harvesters in detecting duplicates harvested from
multiple data providers.
8. Error handling - Report OAI errors in OAI responses in a manner
independent of HTTP status codes.
9. Set description - Define an optional XML container with which communities
can describe individual sets.
10. Multiple metadata formats - Modify ListIdentifiers to permit a metadata
format as argument, filtering the return to include only record identifiers
that support the specified format.
|
OPCFW_CODE
|
Class is not being added to the error element on first check , but when field is being checked again all going ok (jquery validation plugin)
Look my js and html code there: http://jsfiddle.net/2LRv7/2/
There are 1 problem. Don't write anything in inputs, just press submit button. There are error messages blocks appears after both inputs. They are surrounded by yellow border, but they must be surrounded with yellow border and on pink background. Message class wasn't added.
If you click send button again or click on field and click somewhere else (out of field) or click somewhere on page just after first validating block style will be changed (background turns pink).
I don't know why, but message class doesn't add to the label element classes. This action specified in highligh/dehighligh block (5-12 lines of js part).
I am not sure, but I think error is somewhere in this lanes:
$("#countersForm").validate({
debug: true,
validClass: "active",
highlight: function(element, errorClass, validClass) {
$(element).addClass(errorClass).removeClass(validClass);
$(element.form).find("label[for=" + element.id + "]").addClass(errorClass).addClass('message');
},
unhighlight: function(element, errorClass, validClass) {
$(element).removeClass(errorClass).addClass(validClass);
$(element.form).find("label[for=" + element.id + "]").removeClass(errorClass).removeClass('message');
},
errorPlacement: function (error, element) {
console.log(error);
var br = $( "<br>" );
error.insertAfter(element);
br.insertAfter(element);
}
});
Why is this happening?
@you might want to accept answers for the previous questions that helped you before asking new ones..
You don't answered my question (last one I mean), your answer didn't help me to solve MY problem at all (thats why it wasn't accepted). I have variable names, as I said 2-3 times. I wanted delete question, because I solved my problem, but I couldn't because of your answer. Also, you said about $('input') returns some objects, not one — I knew this, but used rules not correctly
You need to add the error class in errorPlacement function
check this fiddle
errorPlacement: function (error, element) {
console.log(error);
var br = $( "<br>" );
error.insertAfter(element).addClass('message');
br.insertAfter(element);
}
Thank you. Are you satisfied now ? :) That is answer for asked question, so it is accepted. I knew, that the problem was somewhere here :)
I figured that highlight not working with first call. Thank you.
Hm. Just realised this is not so good as I think. Check YOUR FIDDLE. Focus first input field and type some not digital symbols, error will not appear. When you click somewhere on page, error will appear. Now delete all from input field. Try to type some not digitals. Now input validating on each keypress. Why validating don't works on each keypress first time?
After that, check another fiddle. I added remote rule. I don't know why, but now if I input something invalid - error is appearing (not only invalid because of remote). If I make input data valid, classes of error label are deleting, but error label is still shown, but must be display: none;. Whats wrong again :((((
|
STACK_EXCHANGE
|
Question about using a VISA card to withdraw money from PayPal balance
According to the following PayPal policy,
https://cms.paypal.com/mz/cgi-bin/?cmd=_render-content&content_ID=ua/RecPymt_print
The only way for me to withdraw money from PayPal in my country is through a VISA credit card (It gets transferred at the start of each month for no fees).
But here is the part that is baffling me; I have never ever heard of someone depositing money into a credit card. I don't think credit cards support depositing money into to begin with.
So tell me, what does it mean that PayPal will transfer the money to my VISA card? Will it simply just get transferred to my bank account by the local bank after that?
I'm going to ask at the local bank, but I thought I'd get more opinions first. I appreciate your help. I'm relatively new to banking.
I think asking your card issuer would be more useful, as we can only guess.
I don't think credit cards support depositing money into to begin with.
Anyone could deposit money to a Credit Card acccount.
All they need is your bank's name, Visa/Mastercard, and 16 digit number.
It is done through the "Pay Bills / Make Payments" function in online banking.
So tell me, what does it mean that PayPal will transfer the money to my VISA card
You can use the new balance for spending via Credit Card, the effect is same as making a payment from your chequing account to credit card account.
Will it simply just get transferred to my bank account by the local bank after that
Some banks would refund the excess amount from your Credit Card to your Chequing Account after a while, but most don't. People keep credit balance on credit card to make a purchaes larger than credit limit. For example, if your credit limit is $1000, balance is $0, and you made $500 payment to the credit card, you can make a purchase of $1500 without asking for credit limit increase.
"Anyone could deposit money to a Credit Card acccount." - most certainly not true.
Not true because of what? Restriction? No access to bank tellers or online banking?
Not true because it is not an option in many places. Don't assume that what works in your country works the same everywhere else.
Outside of that quibble, the rest of the answer seems accurate. (It is certainly possible for a store to issue a refund to a card, at least in the us and for the cards I'm familiar with. There may be restrictions on that which would keep it from being used for other kinds of credit; that will depend in part on the details of PayPal's specific agreements with the card processing systems and in part on what your bank has authorized or can authorize.)
|
STACK_EXCHANGE
|
MuleSoft’s Customer Support team is seeking exceptionally talented systems and cloud engineers to support our rapid growth and reach the next level of scale by driving successful customer integrations, extending the functionality of our Anypoint Platform. We maintain an extremely high level of satisfaction across our customer base, and we take great pride in our operational efficiency and the strength of the solutions we provide to our customers.
As a Customer SysOps Engineer, you will provide technical support for MuleSoft Anypoint Platform (Cloud and On-Premise) customers, continuously enhancing our engagement best practices. You will bring a deep understanding of Internet technologies, networking, systems engineering, cloud and security to the team, which in turn will help our customers to securely extend and connect their data centers to Anypoint Platform cloud, or deploying the Anypoint Platform platform in their datacenter.
What you’ll achieve:
- Hit the ground running, mastering Anypoint Platform connectivity model including VPC and VPN provisioning and troubleshooting; you will get familiar with MuleSoft core products and business training
- Expand your Anypoint platform and products knowledge and get certified, while addressing customer inquiries and triaging issues of connectivity
- Directly engage with DevOps, engineering and product to triage and provide quantified feedback to help and improve our products
- Engage with other cross-functional teams to review and improve customer engagement with MuleSoft Support and products
- You will be capable in handling platform inquiries and issues, and consult customers on network and security architecture
- Help customers with challenging complex technical issues and quickly learn new technologies. Main area of focus would be: Networking, IPSec Tunnels, Identity Management, Containers and Cloud Platforms
- Help improving and optimising support processes, and build support tools for our customers and internal teams
- Promote knowledge sharing in the team by contributing to the knowledge base, blogs, and brown bag lunches
- Help maintain MuleSoft Support as a differentiator
What you’ll need to be successful:
- Bachelor’s degree in CS or equivalent industry experience
- 5+ years demonstrated expertise implementing and supporting Enterprise-grade technical systems, and with networking solutions to meet complex business requirements
- Extensive experience with Internet technologies and protocols like TCP/IP, VPN/IPSec, SSL/TLS, and HTTP
- Deep knowledge of Linux fundamentals
- Strong written and verbal communication skills and strong cognitive ability, especially with respect to understanding, documenting, and describing complex technical subjects
- Hands-on experience with public and private cloud services, such as AWS, Openstack, VMWare, Pivotal Cloud Foundry, as well as proxies, load balancers, and networking devices is a plus
What you’ll get from us:
We realize exceptional people don’t choose jobs based solely on benefits, but we do our best to make sure that you’re set up for success so you can do your best work. As a Muley, you’ll receive health insurance for you and your family, equity, competitive salary with twice yearly market salary revisions, annual performance bonus, and flexible vacation time. Plus the fun stuff, like a fully stocked kitchen, regular catered lunches, volunteer opportunities, twice-yearly hackathons, office celebrations, and MeetUp, our annual all-company offsite in California. Check out our Life at MuleSoft page to learn more!
|
OPCFW_CODE
|
[KOGITO-4425] Split quarkus extension for decisions, rules and predictions
https://issues.redhat.com/browse/KOGITO-4425
Summary
The goal of this ticket is to split Kogito quarkus extension to enable the user to select a subset of feature.
New extensions are introduced by this PR:
kogito-quarkus-decisions: to add DMN support
kogito-quarkus-rules: to add DRL support
kogito-quarkus-predictions: to add PMML support
The existing kogito-quarkus extension is still the same and it provides access to the whole platform
To obtain a similar flexibility a new common kogio-quarkus-common module has been created with most of the shared (existing) code.
One of the key design choice of this PR is to enable extension composition: if user wants DMN and PMML it should be enough to add kogito-quarkus-decisions and kogito-quarkus-predictions without the need to remove one of them or add the kogito-quarkus just because it contains all the others.
Details
Created KogitoQuarkusResourceUtils utility class with common utils method to compile/save/register generated resources
Each extension should (must) declare a *AssetsProcessor to declare extension specific code (i.e. class to register for reflection) and Feature/Capability
KogitoAssetsProcessor moved to common module and now only covers generators that can be loaded via SPI (aka all generator expect PersistenceGenerator and JsonSchemaGenerator for now)
Enabled post generation customisation via KogitoGeneratedClassesBuildItem: this build item contains a Jandex index that contains all the generated Kogito classes. ProcessesAssetsProcessor uses this mechanism to plug persistence and jsonSchema generators
Each extension can have a kogito-*-integration-test and a kogito-*-integration-test-hot-reload module to provide smoke test for the extension. NOTE: full integration test coverage should be implemented in the integration-tests module, these integration tests are executed by quarkus-platform pipelines to make sure the extension works after the inclusion in the platform.
Created README.md with general information
To be merged after https://github.com/kiegroup/kogito-runtimes/pull/1066 Merged
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
/cc @gsmet @aloubyansky when this PR will get merged, Kogito will be provide n > 1 extensions maybe we'll have to update platform (?)
@evacchi sounds reasonable
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
@danielezonca will a -process extension be possible with mix and match to what we currently have aready? Lets say on processes ( no decision ), processes + decisions ( meaning users need to check both extensions? ) and a kogito-qarksu that would still include the entire platform?
@danielezonca will a -process extension be possible with mix and match to what we currently have already? Let's say on processes ( no decision ), processes + decisions ( meaning users need to check both extensions? ) and a kogito-quarkus that would still include the entire platform?
Yes the approach is quite flexible each extension can be combined with the others (thanks to common module).
We can easily split and create additional extensions like a process extension or even (not sure if make sense) a serverless workflow specific and a bpmn one (this proposal will probably requires changes in codegen too).
My idea is that in the future we can use this approach to plug different integrations like kogito-lambda or in general kogito-funqy etc.
The only drawback/aspect to consider is that when there are explicit reference from one resource to another (like process code to RuleUnit/DMN classes) we need to add a validation check during codegen to fail if a required feature is not available (and prevent strange runtime exception).
jenkins retest this please
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
jenkins retest this please
The Linux check has failed. Please check the logs.
Timeout...
jenkins retest this please
The Linux check is successful.
|
GITHUB_ARCHIVE
|
Building with Docker
This will differ on which operating system you have installed, this guide is for linux-based systems. Please take a look at the official Docker Get Docker guide. There is also a guide from ROS called getting started with ROS and Docker. On Ubuntu one should be able to do the following to get docker:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update sudo apt-get install docker-ce docker-ce-cli containerd.io
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \ && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \ && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list sudo apt-get update sudo apt-get install -y nvidia-docker2 sudo systemctl restart docker sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi #to verify install
From this point we should be able to "test" that everything is working ok. First on the host machine we need to allow for x11 windows to connect.
We can now run the following command which should open gazebo GUI on your main desktop window.
docker run -it --net=host --gpus all \ --env="NVIDIA_DRIVER_CAPABILITIES=all" \ --env="DISPLAY" \ --env="QT_X11_NO_MITSHM=1" \ --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \ osrf/ros:noetic-desktop-full \ bash -it -c "roslaunch gazebo_ros empty_world.launch"
Alternatively we can launch directly into a bash shell and run commands from in there. This basically gives you a terminal in the docker container.
docker run -it --net=host --gpus all \ --env="NVIDIA_DRIVER_CAPABILITIES=all" \ --env="DISPLAY" \ --env="QT_X11_NO_MITSHM=1" \ --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \ osrf/ros:noetic-desktop-full \ bash rviz # you should be able to launch rviz once in bash
Clone the OpenVINS repository, build the container and then launch it. The Dockerfile will not build the repo by default, thus you will need to build the project. We have a few docker files for each version of ROS and operating system we support. In the following we will use the Dockerfile_
mkdir -p ~/workspace/catkin_ws_ov/src cd ~/workspace/catkin_ws_ov/src git clone https://github.com/rpng/open_vins.git cd open_vins export VERSION=ros1_20_04 # which docker file version you want (ROS1 vs ROS2 and ubuntu version) docker build -t ov_$VERSION -f Dockerfile_$VERSION .
If the dockerfile breaks, you can remove the image and reinstall using the following:
docker image list docker image rm ov_ros1_20_04 --force
From here it is a good idea to create a nice helper command which will launch the docker and also pass the GUI to your host machine. Here you can append it to the bottom of the ~/.bashrc so that we always have it on startup or just run the two commands on each restart
nano ~/.bashrc # add to the bashrc file xhost + &> /dev/null export DOCKER_CATKINWS=/home/username/workspace/catkin_ws_ov export DOCKER_DATASETS=/home/username/datasets alias ov_docker="docker run -it --net=host --gpus all \ --env=\"NVIDIA_DRIVER_CAPABILITIES=all\" --env=\"DISPLAY\" \ --env=\"QT_X11_NO_MITSHM=1\" --volume=\"/tmp/.X11-unix:/tmp/.X11-unix:rw\" \ --mount type=bind,source=$DOCKER_CATKINWS,target=/catkin_ws \ --mount type=bind,source=$DOCKER_DATASETS,target=/datasets $1" source ~/.bashrc # after you save and exit
Now we can launch RVIZ and also compile the OpenVINS codebase. From two different terminals on the host machine one can run the following (ROS 1):
ov_docker ov_ros1_20_04 roscore ov_docker ov_ros1_20_04 rosrun rviz rviz -d /catkin_ws/src/open_vins/ov_msckf/launch/display.rviz
To actually get a bash environment that we can use to build and run things with we can do the following. Note that any install or changes to operating system variables will not persist, thus only edit within your workspace which is linked as a volume.
ov_docker ov_ros1_20_04 bash
Now once inside the docker with the bash shell we can build and launch an example simulation:
cd catkin_ws catkin build source devel/setup.bash rosrun ov_eval plot_trajectories none src/open_vins/ov_data/sim/udel_gore.txt roslaunch ov_msckf simulation.launch
And a version for ROS 2 we can do the following:
cd catkin_ws colcon build --event-handlers console_cohesion+ source install/setup.bash ros2 run ov_eval plot_trajectories none src/open_vins/ov_data/sim/udel_gore.txt ros2 run ov_msckf run_simulation src/open_vins/config/rpng_sim/estimator_config.yaml
Jetbrains provides some instructions on their side and a youtube video. Basically, Clion needs to be configured to use an external compile service and this service needs to be exposed from the docker container. I still recommend users compile with
catkin build directly in the docker, but this will allow for debugging and syntax insights.
/ blog.jetbrains.com/ clion/ 2020/ 01/ using-docker-with-clion/
/ www.youtube.com/ watch?v=h69XLiMtCT8
After building the OpenVINS image (as above) we can do the following which will start a detached process in the docker. This process will allow us to connect Clion to it.
export DOCKER_CATKINWS=/home/username/workspace/catkin_ws_ov # NOTE: should already be set in your bashrc export DOCKER_DATASETS=/home/username/datasets # NOTE: should already be set in your bashrc docker run -d --cap-add sys_ptrace -p127.0.0.1:2222:22 \ --mount type=bind,source=$DOCKER_CATKINWS,target=/catkin_ws \ --mount type=bind,source=$DOCKER_DATASETS,target=/datasets \ --name clion_remote_env ov_ros1_20_04
We can now change Clion to use the docker remote:
- In short, you should add a new Toolchain entry in settings under Build, Execution, Deployment as a Remote Host type.
- Click in the Credentials section and fill out the SSH credentials we set-up in the Dockerfile
- Host: localhost
- Port: 2222
- Username: user
- Password: password
- CMake: /usr/local/bin/cmake
- Make sure the found CMake is the custom one installed and not the system one (greater than 3.12)
- Add a CMake profile that uses this toolchain and you’re done.
- Change build target to be this new CMake profile (optionally just edit / delete the default)
To add support for ROS you will need to manually set environmental variables in the CMake profile. These were generated by going into the ROS workspace, building a package, and then looking at
printenv output. It should be under
Settings > Build,Execution,Deployment > CMake > (your profile) > Environment. This might be a brittle method, but not sure what else to do... (also see this blog post). You will need to edit the ROS version (
noetic is used below) to fit whatever docker container you are using.
When you build in Clion you should see in
docker stats that the
clion_remote_env is building the files and maxing out the CPU during this process. Clion should send the source files to the remote server and then on build should build and run it remotely within the docker container. A user might also want to edit
Build,Execution,Deployment > Deployment settings to exclude certain folders from copying over. See this jetbrains documentation page for more details.
|
OPCFW_CODE
|
Archive version of newsletter.
We need to have a place with the archive versions of the newsletter for people to read.
Perhaps a nikola site running on gh-pages.
I thought the newsletter was also going to get posted to the blog. Would that work?
That would work and be easier too.
I thought the newsletter was also going to get posted to the blog. Would that work?
Don't care which one.
That would work and be easier too.
I'm not sure it's easier as you have to rewrite all in markdown, relink everything , reupload all the images,
and someone have to do it manually.
Note, we use mailchimp, this can be done automatically:
http://kb.mailchimp.com/campaigns/archives/about-campaign-archives
As we're creating them in this repo, I assumed that we'd archive them here (maybe with a nicer landing page linking to them).
As we're creating them in this repo, I assumed that we'd archive them here (maybe with a nicer landing page linking to them).
Does not change the fact that someone have to do that manually. The first newsletter was a draft here,
was refined on a google doc, transferred to Mailchimp which a WYSIWYM editor, and sent.
Which is not a workflow I'm found of, but for now is a reality.
So we still have to set that up.
*Brian and Ana had discussed putting the latest newsletter on the top of the blog and archiving the others in a tab on the same page..
**Yes the workflow needs to be changed..should we draft and refine here? That works for me, but I'm still getting familiar with github and so is Ana. We also aren't attatched to Mailchimp is there are other suggestions
Perhaps a workflow checklist might be a good start:
[ ] Content submissions and ideas
[ ] Draft newsletter on GitHub
[ ] Refine draft on Google Doc for publication
[ ] Review final proof of newsletter
[ ] Transfer final proof to MailChimp
[ ] Distribute via MailChimp
[ ] Archive final version on GitHub
[ ] Post newsletter to blog and archive on blog (if desired)
*Brian and Ana had discussed putting the latest newsletter on the top of the blog and archiving the others in a tab on the same page..
I want to avoid manual interaction, and having someone being the limiting factor. If mailchimp has archives, I don't see why we could not use them.
Let's keep current workflow for now, and refine slowly.
@katiewhite360 @Ruv7 @Carreau I updated the checklist of steps above. If you think this reflects the current process and like that process, let me know and I can help put together a draft document of the workflow process which you can edit as desired. Please use me as a resource for GitHub questions too.
Check in to GitHub and review final proof of newsletter (+1 approvals on GitHub)
Can we also share a Read-Only link to this draft on google docs ?
Also,
it should be possible to use Jekyll as a preview of the newsletter if it's written as markdown.
@katiewhite360 did you enable the archive thing ? If so do you have a link ?
Publishing without archiving make it almost useless as no one can look at what they were goig to subscribe to.
We should Not publish the next newsletter if these issues are not fix.
Closing as addressed in #27.
|
GITHUB_ARCHIVE
|
Redis cursor manager
@jhorstmann @whiskeysierra
This is a CursorManager for Redis (i.e. AWS ElastiCache as well).
Some Refactorings that come along with it:
Rename PersistentCursorManager to JdbcCursorManager because Redis is also persistent (as well as every other storage that will be supported). What is specific about the former PersistentCursorManager is that it is using Jdbc.
Move CursorManager and associated classes into a dedicated package. Since the project is growing that brings imho more overview. Raise your voice if you don't like it.
Maybe it would make sense to not deliver the different Cursor / Partition Managers with the "fahrschein-core" artifact but to create additional artifacts: "fahrschein-jdbc-persistence", "fahrschein-redis-persistence". That would avoid too many heavy unused dependencies. E.g. a user who wants to persist to a SQL database is not interested in Redis dependencies. Or do you think the project is too small so far to legitimate different artifacts?
Very nice. I agree about the renaming to JdbcCursorManager, but would have kept the interface CursorManager in the main package, with subpackages for jdbc and redis.
I thought about splitting up the library before, for now I think it would be simpler to mark the redis dependency as optional (same as for spring-jdbc).
Is the org.json dependency needed? It does not look like it's used at all.
The commons-lang dependency also seems to be only used in one place and could be replaced by guava Splitter or String.split.
Maybe it would make sense to not deliver the different Cursor / Partition Managers with the "fahrschein-core" artifact but to create additional artifacts: "fahrschein-jdbc-persistence", "fahrschein-redis-persistence". That would avoid too many heavy unused dependencies. E.g. a user who wants to persist to a SQL database is not interested in Redis dependencies.
:+1:
@jhorstmann I updated the Pull Request as you requested:
Moved CursorManager to top level package again and have deparate jdbc and redispackages
Made the Redis dependencies optional. I wasn't aware of this, thanks for pointing out. 👍
Got rid of commons-io and replace it with the Guava pendants. Also wasn't aware of them, since I am not so familiar with Guava. Thanks!
Got rid of unused org.json dependency. (I had a class with a main method for local testing where I used it and forgot to delete it afterwards.)
LGTM
One final question, do you rely on the equals/hashCode methods now in Cursor? I agree it's an obvious candidate for being a value object, but I'm very hesitant to adding equals methods without good reasons. For example, since the cursor does not contain the event name, two cursors for different events could now compare equals.
One final question, do you rely on the equals/hashCode methods now in Cursor? I agree it's an obvious candidate for being a value object, but I'm very hesitant to adding equals methods without good reasons. For example, since the cursor does not contain the event name, two cursors for different events could now compare equals.
I need them for tests only: https://github.com/zalando-incubator/fahrschein/pull/41/files#diff-3583663db788eb83d30d20285ac6468eR39
assertThat("Could not serialize and deserialize cursor " + cursor.toString(),
actualCursor, IsEqual.equalTo(cursor));
Fur sure, I could instead write a custom Hamcrest Matcher. But I would personally consider two cursors as equal if the partition name and offset is equal, since the event type name is not part of a cursor.
LGTM
|
GITHUB_ARCHIVE
|
After researching for a few hours online to find a guide on how to control my Thinkpad’s fan speed I realized that the new models have some differences from previous models and the guides available are not complete if not wrong. So, I am making this tutorial for anyone that has a new Thinkpad ( x30/x20 models ) and needs to control his fan in order to keep the noise down and get more battery life.
Every step below uses the terminal so open an instance with the combination CRTL + ALT + T
The first thing we will do is to install a program that will provide us information about the sensors of the laptop and their temperatures
sudo apt-get install lm-sensors
Configure the application in order to find every available sensor
Answer Yes to every question and the last confirmation for saving the changes made.
Install thinkfan which is our main program
sudo apt-get install thinkfan
Add the coretemp module to the startup list. It will provide us the temperature inputs.
echo coretemp >> /etc/modules
Load the coretemp module
sudo modprobe coretemp
The next step is to find your temperature inputs so take note the results of the following command
sudo find /sys/devices -type f -name "temp*_input"
If you don’t get any outputs ( similar to the next step ) please Reboot and continue from this step.
It’s time to edit our thinkfan configuration
sudo gedit /etc/thinkfan.conf
Go to the line where it says #sensor /proc/acpi/ibm/thermal … and below that line ( which should be commented since thermal is not supported in the new thinkpads ) insert something like the following:
sensor /sys/devices/platform/coretemp.0/temp1_input sensor /sys/devices/platform/coretemp.0/temp2_input sensor /sys/devices/platform/coretemp.0/temp3_input sensor /sys/devices/virtual/hwmon/hwmon0/temp1_input
The above lines are the results from Step 5 prefixed with ‘sensor ‘.
Time to set the temperature rules. The format is: ( FAN_LEVEL, LOW_TEMP, HIGH_TEMP ) meaning that each FAN_LEVEL will start when the highest temperature reported by all the sensors meets its LOW_TEMP and if it surpasses its HIGH_TEMP it will go to the next FAN_LEVEL rule. If it goes below the LOW_TEMP it will fallback to the previous FAN_LEVEL rule. Please take notice that the HIGH_TEMP of a rule must be between the LOW_TEMP & HIGH_TEMP of the rule that follows.
My settings are:
#(FAN_LEVEL, LOW, HIGH) (0, 0, 60) (1, 57, 63) (2, 60, 66) (3, 64, 68) (4, 66, 72) (5, 70, 74) (7, 72, 32767)
NOTE: I am not responsible for any problems you encounter with these rules. They are fine as per my configuration so please test them before using them and if necessary adjust them to your needs.
Now, we must add a configuration file into the modprobe.d
sudo echo "options thinkpad_acpi fan_control=1" >> /etc/modprobe.d/thinkpad.conf
If you want to start thinkfan automatically at boot-time please type the following
sudo gedit /etc/default/thinkfan
Change the line START=no to START=yes. If the line does not exist add it yourself.
RESTART your laptop and everything should work as expected. Test your laptop’s temperatures ( using sensors command ) under different workloads and verify that the fan speed is as per the rules you defined.
If you encounter a typing mistake or a step not working for you please comment below. On the contrary if everything works then comment below verifying the guide.
This information was taken from here – http://mastergenius.net/wordpress/2012/07/20/control-your-thinkpad-t430-fan-speed-in-ubuntu-12-04/
The time I need it the site was trowing Nginx errors, so I had to use time machine to get it.
The fallowing was tested on ThinkPad T430 with Ubuntu Mate, and it still works without any problems.
|
OPCFW_CODE
|
import { Span } from 'types/JaegerInfo';
import { extractOpenTracingBaseInfo, getSpanType, getWorkloadFromSpan } from '../JaegerHelper';
export type SpanTableItem = Span & {
type: 'envoy' | 'http' | 'tcp' | 'unknown';
component: string;
hasError: boolean;
namespace: string;
app: string;
linkToApp: string;
workload?: string;
pod?: string;
linkToWorkload?: string;
};
// Extracts some information from a span to make it suitable for table-display
export const itemFromSpan = (span: Span, defaultNamespace: string): SpanTableItem => {
const type = getSpanType(span);
const workloadNs = getWorkloadFromSpan(span);
const info = extractOpenTracingBaseInfo(span);
const split = span.process.serviceName.split('.');
const app = split[0];
const namespace = workloadNs ? workloadNs.namespace : split.length > 1 ? split[1] : defaultNamespace;
const linkToApp = '/namespaces/' + namespace + '/applications/' + app;
const linkToWorkload = workloadNs ? '/namespaces/' + namespace + '/workloads/' + workloadNs.workload : undefined;
return {
...span,
type: type,
component: info.component || 'unknown',
hasError: info.hasError,
namespace: namespace,
app: app,
linkToApp: linkToApp,
workload: workloadNs?.workload,
pod: workloadNs?.pod,
linkToWorkload: linkToWorkload
};
};
|
STACK_EDU
|
Spherical geometry is the three-dimensional study of geometry on the surface of a sphere. It is the spherical equivalent of two-dimensional planar geometry, the study of geometry on the surface of a plane. A real-life approximation of a sphere is the planet Earth—not its interior, but just its surface. (Earth is more accurately called an "oblate spheroid" because it is slightly flattened at the ends of its axis of rotation, the North and South Poles.) The surface of a sphere together with its interior points is usually referred to as the spherical region; however, spherical geometry generally refers only to the surface of a sphere.
As seen in the figure on the next page, a sphere is a set of points in three-dimensional space equidistant from a point O called the center of the sphere. The line segment from point O (at the center of the sphere) to point P (on the surface of the sphere) is called the radius r of the sphere, and the radius r extended straight through the sphere's center with ends on opposite points of the surface is called the diameter d of the sphere (with a value of 2r ; that is, two times the value of the radius). As an example, the line that connects the North Pole and the South Pole on Earth is considered a diameter*.
*The average length of Earth's diameter is d = 6,886 miles.
An infinite line that intersects a sphere at one point only is called a tangent line. An infinite plane can also intersect a sphere at a single point on its surface. When this is the case the plane is also considered tangent to the sphere at that point of intersection. For example, if a basketball were lying on the floor, the floor would represent a tangent plane because it intersects the ball's surface (the sphere) at only one point.
Great and Small Circles
The shortest path between two points on a plane is a straight line. However, on the surface of a sphere there are no straight lines. Instead, the shortest distance between any two points on a sphere is a segment of a circle. To see why this is so, consider that a plane can intersect a sphere at more than one point. Whenever this is the case, the intersection results in a circle. A great circle is defined to be the intersection of a sphere with a plane that passes through the center of the sphere. For example, see the circle containing points C and D in the illustration below. Similar to a straight line on a plane, the shortest path between two points on the surface of a sphere is the arc of a great circle passing through the two points.
The size of the circle of intersection will be largest when the plane passes through the center of the sphere, as is the case for a great circle. If the plane does not contain the center of the sphere, its intersection with the sphere is known as a small circle. For example, see the circle containing points A and B in the illustration below.
As a real-world example, assume a cabbage is a sphere, and is cut exactly in half. The slice goes through the cabbage's center, forming a great circle. However, if the slice is off-centered, then the cabbage is cut into two unequal pieces, having formed a small circle at the cut.
Consider a circle of radius r. A portion of the circle's circumference is referred to as an arc length, and is denoted by the letter s. The first illustration of this article shows a circle of radius r and arc length s. The angle θ is defined as θ = . Rearranging this equation in terms of s yields s = θr. So the arc lengths of a great circle is equal to the radius r of the sphere times the angle subtended by that arc length.
Connecting three nonlinear points on a plane by drawing straight lines using the shortest possible route between the points forms a triangle. By analogy, to connect three points on the surface of a sphere using the shortest possible route, draw three arcs of great circles to create a spherical triangle. A triangle drawn on the surface of a sphere is only a spherical triangle if it has all of the following properties:
- the three sides are all arcs of great circles;
- any two sides, summed together, is greater than the third side;
- the sum of the three interior angles is greater than 180°, and
- each spherical angle is less than 180°.
In the second illustration of the article, triangle PAB is not a spherical triangle (because side AB is an arc of a small circle), but triangle PCD is a spherical triangle (because side CD is an arc of a great circle).
The left portion of the figure directly below demonstrates how a spherical triangle can be formed by three intersecting great circles with arcs of length (a, b, c ) and vertex angles of (A, B, C ).
The right portion of the figure directly above demonstrates that the angle between two sides of a spherical triangle is defined as the angle between the tangents to the two great circle arcs for vertex angle B.
The above illustration also shows that the arc lengths (a, b, c ) and vertex angles (A, B, C ) of the spherical triangle are related by the following rules for spherical triangles.
Cosine Rule: cosa = (cosb cosc ) + (sinb sinc cosA ).
Spherical Geometry in Navigation
Spherical geometry can be used for the practical purpose of navigation by looking at the measurement of position and distance on the surface of Earth. The rotation of Earth defines a coordinate system for the surface of Earth. The two points where the rotational axis meets the surface of Earth are known as the North Pole and the South Pole, and the great circle perpendicular to the rotation axis and lying halfway between the poles is known as the equator. Small circles that lie parallel to the equator are known as parallels. Great circles that pass through the two poles are known as meridians.
Measuring Latitude and Longitude. The two coordinates of latitude and longitude can define any point on the surface of Earth, as is demonstrated within the diagram below. Great circles become very important to navigation because a segment along a great circle provides the shortest distance between two points on a sphere. Therefore, the shortest travel-time can be achieved by traveling along a great circle.
The longitude of a point is measured east or west along the equator, and its value is the angular distance between the local meridian passing through the point and the Greenwich meridian (which passes through the Royal Greenwich Observatory in London, England). Because Earth is rotating, it is possible to express longitude in time units as well as angular units. Earth rotates by 360° in 24 hours. Hence, Earth rotates 15° of longitude in 1 hour, and 1° of longitude in 4 minutes.
The latitude of a point is the angular distance north or south of the equator, measured along the meridian, or line of longitude, passing through the point.
Measuring Nautical Miles. Distance on the surface of Earth is usually measured in nautical miles, where 1 nautical mile (nmi) is defined as the distance subtending an angle of 1 minute of arc at the center of Earth. Since there are 60 minutes of arc in a degree, there are approximately 60 nautical miles in 1 degree of Earth's surface. A speed of 1 nautical mile per hour (nmph) is known as 1 knot and is the unit in which the speed of a boat or an aircraft is usually measured.
A Case Study in Measurement. As noted earlier, Earth is not a perfect sphere, so the actual measurement of position and distance on the surface of Earth is more complicated than described here. But Earth is very nearly a true sphere, and for our purposes this demonstration is still valid.
The terms and concepts that have been developed can be applied to a real-world example. Consider a voyage from Washington, D.C. ("W" in diagram below) to Quito, Ecuador ("Q" in diagram below), which is nearly on the equator at 0° latitude, 77° West longitude. The latitude and longitude of Washington, D.C. is about 37° North latitude, 77° West longitude. If the entire voyage from Washington, D.C. to Quito (on the equator) is along the great circle of longitude 77°, we can use the equation s = θr to find the distance s that the airplane travels from Washington D.C. to Quito.
For this example, θ = 37° (the angle between W and Q ). Knowing that 2 radians equals 360° (one complete revolution around a great circle), we now convert the angle from degrees to radians: (37°) = 0.628 radians. Denoting the radius of Earth as r, we use the "arc-length" equation developed earlier, that is s = θr, to compute the arc length between Washington, D.C. and Quito.
Placing the values of θ = 0.628 radians and r = 3,443 nautical miles (nmi) (the average radius-value for Earth) into the equation yields: s = θr = (0.628 rad × 3,443 nmi) = 2,163 nmi. Therefore, along the arc of the great circle of longitude 77°, from Washington D.C. to Quito, Ecuador, our trip covers a distance of 2,163 nmi.
see also Triangles; Trigonometry.
William Arthur Atkins with
Philip Edward Koth
Abbott, P. Geometry (Teach Yourself Series). London, U.K.: Hodder and Stoughton, 1982.
Henderson, Kenneth B., Robert E. Pingry, and George A. Robinson. Modern Geometry: Its Structure and Function. New York: Webster Division, McGraw-Hill Book Company, 1962.
Ringenberg, Lawrence A., and Richard S. Presser. Geometry. New York: Benziger, Inc. with John Wiley & Sons, 1971.
Selby, Peter H. Geometry & Trigonometry for Calculus. New York: John Wiley & Sons, 1975.
Ulrich, James F., Fred F. Czarnec, and Dorothy L. Guilbault, Geometry. New York: Harcourt Brace, 1978.
The Geometry of the Sphere. Mathematics Department at Rice University, Houston, Texas. <http://math.rice.edu/~pcmi/sphere/>.
Spherical Geometry. Mathematics Department at the University of North Carolina at Charlotte. <http://www.math.uncc.edu/~droyster/math3181/notes/hyprgeom/node5.html#SECTION00500000000000000000>.
|
OPCFW_CODE
|
Winter '20 Release Note: If you are using S-Docs below version 4.53, you may experience an "Attempt to de-reference a null object" error when interacting with various forms of automation in S-Docs. In order to fix this bug, you can create a new SDocs Settings custom settings set. To do this, type "Custom Settings" into the Quick Find / Search bar in the Setup menu, and click Custom Settings. Click SDocsSettings, then click Manage at the top of the page. From there, click New. Fill out the following information:
SD Jobs Batch Size: 30
SD Jobs Move to Top of Flex Queue: ☑
Additionally, ensure that you have a Remote Site Setting for either login.salesforce.com (production), or test.salesforce.com (sandbox).
To use the CreateSDocSync method, you will need to be on version 2.266 or later. For instructions on upgrading, please see our guide on upgrading S-Docs to the latest version.
- If you are looking for a way to generate multiple S-Docs (in batch) from an object list view, or you want a document to be automatically generated and emailed when a field value has changed or a date has passed, this article will help you configure S-Docs to meet your requirements.
- For example, when a user changes an opportunity stage field to “Send Quote,” you can configure S-Docs to generate a PDF quote along with a customized cover letter and email it to the opportunity contact. Users would not need to click on any buttons or choose any templates. Whenever the field value is changed, even from a mobile device, the process is invoked and the documents are generated and optionally emailed.
- When using the S-Docs API in batch mode, a user can select multiple records at once from a list view and send each record a custom invitation email to an event. The possibilities to further automate and distribute your documents are unlimited.
The S-Docs REST API is leveraged to invoke document generation programmatically. This powerful feature means that documents can be created in the background (synchronously) without any user involvement whenever defined criteria are met.
Both use cases work on the same principle –
1) A workflow rule, time-based workflow rule, or mass update button changes a field value that acts as a trigger to generate an S-Doc.
2) The document is created (and optionally emailed) by adding a line of code to an APEX trigger that invokes the S-Docs REST API.
3) The trigger field is reset.
SETUP QUICK OVERVIEW
- Add your Salesforce domain to the list of remote sites (within Setup).
- Add a field to your object that controls when the S-Doc is generated (e.g. Create_Welcome_Letter__c).
- Add a small trigger on the object that generates the doc whenever that field is set.
- Option A: Add a custom list button that updates the field in step 1 in bulk (and therefore generates/emails docs in bulk).
- Option B: Use a workflow rule that has a field update action (and therefore generates/emails the doc whenever the workflow rule is tripped).
- The S-Docs templates used in this automation need to be completely defined, meaning they must have all the needed merged fields to generate properly without user input. This won't work on any S-Docs template that prompts for user input during the generation process. For example, if you intend to email a document based on a workflow rule, then you need to use an S-Doc HTML template that defines the email body along with the “to,” “cc,” and “subject” fields in order to form a valid email. Without all the field values defined, the document won’t be able to generate correctly. These fields can be set dynamically, but must be defined in the S-Docs HTML template.
- Since S-Docs runs on the Salesforce platform, it is subject to Salesforce governor limits. To help prevent reaching those limits, we suggest batch document creation is limited to 50 records per invocation. You can generate many more documents, but you should group them into batches of 50 or less records per invocation.
S-Docs REST API
The S-Docs API method (CreateSDocSync) can be called from within a Salesforce trigger or an APEX class. It needs to be preceded by the namespace “SDOC.” and the class name “SDBatch.” The method can be called with 2 or 3 parameters depending on the use case.
API Call Signatures:
Option 1: Context User: This is the simpler call and can be used for basic workflow and batch processing. It requires a valid session ID at time of invocation.
SDOC.SDBatch.CreateSDocSync(STRING sessionid, STRING createURL)
Option 2: As User: This signature allows for time-based workflow, or where you need to specify that the invocation run under a specific user ID. It leverages JWT Bearer Assertion flow and therefore requires some additional admin setup. Click here to read about this setup.
SDOC.SDBatch.CreateSDocSync(STRING sessionid, STRING username, STRING createURL)
Call parameter details:
Sessionid: A valid session ID is used to generate and send documents. In APEX, you can use UserInfo.getSessionId() to retrieve the current user’s session ID. The document and email will be generated and sent from the given user context.
Username (used in 2nd call signature only): The Salesforce username (not record ID) that you want to use to generate and email the documents: The username is a unique identifier and is in the form of an email. Additional setup steps (described below) are needed to leverage this feature. User needs to be an active Salesforce user and have been granted API access in their user profile. S-Docs will create a new session whenever invoked in this manner.
CreateURL: This is in a querystring form (separated by &) that includes:
|Id=||(Required) The Salesforce ID of the record used to generate the S-Docs.|
|Object=||(Required) The API name of the object (for custom records it will end in __c).|
|doclist=||(Required) Comma delimited list of Salesforce IDs. These are the S-Docs template IDs. Note they will change from sandbox to production.|
|Sendemail=||(Optional) If set to 1, then the generated documents will be emailed. If you are using this option, one of the templates included in the doclist parameter should have the “Template Format” field set to “HTML.” This document will then comprise the email body of the outbound email. This template should also set the “to,” “cc,” and “subject” fields. This is found under the set advanced properties button on the template editor.|
|Aid=||(Optional) Comma delimited list of Salesforce record IDs for any attachments to be included in the outbound email.|
|Did=||(Optional) Comma delimited list of Salesforce record IDs for any Documents to be included in the outbound email.|
Option 1: Method and Code Examples:
The following can be used in your code with an immediate workflow action or a batch update process. In this example, we will send an email (with no attachments) for a given lead.
The following would generate a document on account (without email). This could be used as part of your trigger when a field is updated.
Option 1: Setup Instructions: Generating and emailing S-Docs in bulk
Here are detailed steps if you wanted to send introductory emails to leads in bulk:
- Salesforce requires a remote site to be authorized for the REST callout, even if it’s in the same instance. Therefore, you need to navigate to Setup > Security Controls > Remote Site Settings and add an entry for your own server (e.g. https://na1.salesforce.com ). Make sure the Active checkbox is checked. If you are using my domains, it is critical that you add the exact my domain registered. (It is case sensitive. Go to Setup > Domain Management > My Domain, and copy/paste the exact domain into the remote site setting.)
- You need to add a field (checkbox or other) to your object that controls when an S-Doc is generated, such as SDocs_Send_Intro__c. It would also be helpful to add a timestamp field called SDocs_Intro_Sent_Date__c that would record when the generation occurred.
- A simple trigger would use the SDOCS REST API to generate and optionally email the documents you specify. You can pass multiple S-Doc template IDs into the doclist parameter. You can also include attachment IDs and/or Salesforce document IDs if needed. In your code, you can dynamically set the template used based on any criteria (e.g. language preference). You would need to update the trigger below accordingly.
Option 2: Method and Code Examples
The next example can be used with a time-based workflow, or if you want to specify the user who will be used to generate (and optionally email) the document. In this case, we pass a blank sessionid and a valid username. If you were to pass both parameters, the session ID would be used unless it's null, in which case the specified user would be used.
Specifying running user or invocation from time-based workflow (Setup Instructions)
This option requires setting a few things up in order to work without any preauthorization (as required by Salesforce). Click here to read about this setup.
|
OPCFW_CODE
|
Lovelyfiction – Chapter 2878 – Ancient Rock City’s Secret imagine actor share-p2
rising tides choices
Novel–Reincarnation Of The Strongest Sword God–Reincarnation Of The Strongest Sword God
Chapter 2878 – Ancient Rock City’s Secret dependent scissors
No wonder… Not surprising he dared refuse our two Guilds’ companions.h.i.+p… So this is why for his confidence… Freezing Shadow investigated the Level 4 Divine Dragon as well as messed up Historic Rock and roll Community on the video recording with difficult sentiments.
Though Old Rock City’s defensive miracle selection was still effective, s.h.i.+ Feng hadn’t been able to sense the key land’s lifetime. However, with all the defensive miraculous assortment eliminated, practically nothing in Historical Rock Town could conceal from his Level 5 Planet Production Mana Sector.
The School Book of Forestry
Historical Rock Location was Monster Emperor’s starting point, as well as the man got longer since armed the city towards the tooth. Monster Emperor also had the Wicked G.o.d’s Temple’s support, a strong lifetime effective at going from the Battle G.o.d’s Temple. Even though s.h.i.+ Feng was really a Tier 5 competitor, taking Early Rock City ought to be demanding. “No! Vice Guild Head, Absolutely no Wing didn’t capture Historic Rock Community! It razed the complete community within a blow! Medieval Rock Metropolis is already a damage!” the Ranger countered, shaking his mind. He then had out a Magical Crystal Recorder and set it over the meeting kitchen table. “This will be the battle video we gained.”
peck’s bad boy with the circus
The Bride of the Tomb and Queenie’s
“Guild Chief, are the types issues beneficial?” Cola expected curiously while he considered the dark colored stones in s.h.i.+ Feng’s arms.
Even so, compared to the EXP the Faux Saint Slayers contributed, s.h.i.+ Feng was more interested in the pitch-black colored stones both the monsters dropped just after dying.
Formerly, any time people destroyed Mythic scored Faux Saint monsters, the player who landed the finis.h.i.+ng blow may have a dim-gray mist get into themselves. This grey mist would improve the overall player’s affinity with Mana to obtain a prolonged phase.
Abruptly, s.h.i.+ Feng executed a slash at Furious Fist, easily hurting him. As Mad Fist collapsed reluctantly to the floor and transformed into many contaminants of lightweight, two products sprang out in their location.
The Rival Crusoes
“Guild Innovator, are the ones stuff beneficial?” Cola requested curiously because he checked out the dark stones in s.h.i.+ Feng’s hands and wrists.
“It’s news of Absolutely no Wing!” the Ranger explained hurriedly. “According to the instructions, we’ve maintained tabs on Absolutely no Wing. On the other hand, a time earlier, No Wing…destroyed Historic Rock and roll Town!”
this is your country on drugs summary
“It appears to be my luck is good.” s.h.i.+ Feng was a little astonished as he spotted town Lord’s Token. “It dropped around the initially eliminate.”
“Foregone conclusions?” s.h.i.+ Feng laughed. Studying the restrained Monster Emperor, he aimed with the damaged Saint’s Fretting hand Dwelling from the extended distance and requested, “Are you referring to that solution land below your Guild Household?”
s.h.i.+ Feng has been a small personality unworthy of her focus during the past when Zero Wing had been a Guild dependent on exterior items to fight for its areas. Yet still, equally s.h.i.+ Feng and Absolutely no Wing were actually now bona fide t.i.tans that even Mythology could not and would not dare to offend.
“You!” Beast Emperor’s skin tone paled as he spotted the expression.
Beast Emperor nearly fainted as he observed s.h.i.+ Feng’s thoughts.
“Black Flame!” Monster Emperor was momentarily enraged at s.h.i.+ Feng’s ridicule. Even so, he soon calmed down and mentioned, “So imagine if you’ve learned the Bad G.o.d’s Secret Territory? Do you reckon you are able to fight for it? Even if you’ve seized me, I will still spread out this info to your complete G.o.d’s Site! At the moment, even Outerworld pushes will be taken listed here! Do you think it is possible to cease them all?”
Following Zero Wing’s subscribers exchanged glances, only Violet Cloud stepped forward.
What thick Soul Electricity!
“It appears to be my chance is good.” s.h.i.+ Feng became a tiny stunned as he observed town Lord’s Expression. “It fallen around the very first eliminate.”
“It had been able eliminate the full Old Rock and roll Location a single hit?”
“It’s news of No Wing!” the Ranger stated hurriedly. “According towards your requests, we have kept tabs on Absolutely nothing Wing. On the other hand, simply a instant previously, Absolutely nothing Wing…destroyed Historical Rock and roll Town!”
No wonder… No wonder he dared refuse our two Guilds’ lovers.h.i.+p… So this is why for his confidence… Ice cold Shadow checked out the Tier 4 Divine Dragon plus the damaged Ancient Rock Town in the training video with intricate thoughts.
love can do wonders quotes
Even so, when compared to the EXP the Faux Saint Slayers contributed, s.h.i.+ Feng was interested in the pitch-black color stones the 2 main monsters decreased right after loss.
Only a mislead wouldn’t think there was clearly a magic formula behind this achievement!
Even so, when compared to EXP the Faux Saint Slayers contributed, s.h.i.+ Feng was keen on the pitch-black color stones both monsters decreased just after dying.
Going through the Ranger, Situ Qingtian expected inside of a displeased strengthen, “Did anything transpire?”
The instantaneous s.h.i.+ Feng discovered the metropolis Lord’s Expression, a process notification rang as part of his the ears.
The Evil G.o.d’s Magic formula Property was originally his biggest top secret and something he obtained sunk heavily in. This mystery property has also been why he acquired acquired his recent electrical power.
“No really need to make-believe. Do you consider Tier 5 is a laugh?” s.h.i.+ Feng rolled his eyeballs at Beast Emperor.
“It’s headlines of Absolutely nothing Wing!” the Ranger said hurriedly. “According to the instructions, we’ve preserved tabs on Absolutely nothing Wing. Nevertheless, simply a moment previously, Absolutely no Wing…destroyed Medieval Rock City!”
Having said that, when Beast Emperor, who has been restrained in the corner, saw s.h.i.+ Feng’s behavior, he dearly wished to strangle s.h.i.+ Feng to fatality. s.h.i.+ Feng was really getting rid of the 2 main Faux Saint Slayers’ cores to be a snack food and permitting some others take in them. This was simply crazy!
“It feels my chance is nice.” s.h.i.+ Feng was a small astonished when he spotted town Lord’s Token. “It lowered about the initial destroy.”
“Good! Acquire this spirit gemstone. After, find a peaceful area and actually eat it. It ought to maximize your Attentiveness typical. If you’re fortunate, you could possibly even elevate it to Tier 5,” s.h.i.+ Feng reported when he given on the list of black colored gemstones to Violet Cloud. Then he thought to others, “If anybody else is able to achieve the Level 4 Highest conventional, you can get this heart and soul stone from
|
OPCFW_CODE
|
I have had some trouble with my PC, and could use some guidance as to figure out exactly what it is. I will first briefly describe what I think might be malfunctioning, then my hardware, then I will explain how it started and my troubleshooting of it, then summarise what I think I have come to.
I believe the issues most likely lies either within the motherboard, the graphics card, or within a software either within the two or on my system drive. The issue is at first my graphics crashed during gameplay, then it seems I was unable to get any good system graphics output ever again, sometimes I could not get any picture at all since the display only showed no contact, and I could never get audio from the motherboard or graphics card ever again through either HDMI or DP (I only used those two).
RAM: https://www.hyperxgaming.com/us/memory/fury-ddr4 (2 DIMMs, 3200MHz, 8GB each)
Graphics card: https://www.asus.com/ROG-Republic-Of-Gamers/ROG-STRIX-GTX1080-A8G-GAMING/specifications/
OS: Windows 10 Pro (64bit)
OS drive: https://ark.intel.com/content/www/us/en/ark/products/56571/intel-ssd-320-series-80gb-2-5in-sata-3gb-s-25nm-mlc.html
Storage drive: https://bit-tech.net/reviews/tech/storage/samsung-spinpoint-f3-1tb-review/1/
Disc drive: http://www.liteonodd.com/en/dvd-internal/item/dvdinternal/ihas124.html
I was just playing DOTA2 and I was in a match like many matches before, and it was an intense teamfight when all heroes used their powers and fought at the same time, when suddenly my screen got filled with purple box artefacts and everything on the screen froze, although I could still get audio through my headphones jack. I was not able to do anything to make anything happen on my display, so I reset the computer. When I came back into the game everything was really low resolution with low framerates, when heroes used their abilities and fought eachother the performance not just on the screen but also what I could hear through my audio jack when became super low that it was just unplayable.
I went to the loggs (Event viewer) and found this by the time it would have happened:
General: "Display driver nvlddmkm stopped responding and has successfully recovered"
"The description for event-ID 14 from source nvlddmkm cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
0a7c(28d4) 00000000 00000000
The message resource is present but the message was not found in the message table"
So with this information I first decided something had happened with the drivers. I went and downloaded the newest drivers for my graphics card, I tested it out and still no luck, I couldn't play games due to low frame rate and bad graphics. By accident I also noticed something really weird: in the volume control I could no longer use the DP connection I had with my graphics card and my display to make the Display play audio. I rarely used to get audio from the display before, and my headphones still worked fine, but up until this point this had never been an issue. When I troubleshoot with the software it says it cant play audio cause no cable (HDMI or DP, I tried both), is connected (then how can I get a picture?), even though it is still the graphicscard's audio drivers visible as the audio type I troubleshoot.
So I try something new (and now it gets really weird again): I unplug the power to the display and plug it back in, and this time I get no picture, the display just says it gets no signal. I try to restart several times and still no signal.
Next I clear CMOS, and restart the system from there.
I plug my laptop into the display with a DP cable and have no issues getting getting both picture and audio. I plug my main computer to the TV with a HDMI cable through my graphics card and get no picture. So I plug my HDMI through my motherboards i/o and into the TV, and this time i get picture but no audio. So i plug my laptop into the TV with HDMI and get picture and audio. Now it gets weird again: I plug my DP port into my graphics card and back into the display, and this time it works! I restart both units and this time it doesn't work. I plug a HDMI into my graphics card again and into the TV and this time i get picture, but still cant get audio. So now I do this: i plug my DP connected to GPU cable into the display, HDMI through GPU into TV at the same time (this is the first time I had ever attempted two screens with this card btw, so even if it didn't work now I couldn't confirm if it would have worked before), and I turn on the system. First I get pictures for both units, but once I am a few seconds into windows the display loses contact and the TV gets filled with purple box artifacts. If I unplug both cables without turning the system of and try one with the other and even with just the motherboard I still cant get any picture from any screen. I try many different combinations like that forever, without getting any consecutive result when it works and when it doesn't, the only thing I can come up with is that I cant use two displays at the same time either through the card or backpanel, or both interfaces consecutively cause then both displays wont get signal or just one will, and once I lose signal I cant get another unless I restart the computer first. Only my main computer has this problem, not my smaller laptop, and the signals for respective display seems to work whenever they want to, there is no real trail (other then the previously mentioned) when it works and when it doesn't.
Now I take the motherboard out of the case and only plug in only the system disc and graphics card into it (RAM, cooler, CPU still attached, with PSU hooked up with all cables to all components without any case). Then it seems I manage to get picture almost anytime with both displays, but still just one at a time and no audio cause the system still says the audio cable is not connected, i try both using the graphics card and backpanel with the same results. I switch the GPU between the two other PCI-e slots with still the same results. I completely withdraw the GPU and use only the backpanel for HDMI and DP with the same results.
So now I try to run a game when I have picture, first no GPU just the backpanel to run goat simulator. Right from the menu start the entire system runs slow (not just the game, the entire computer) with low framerates and abysmal resolution, until I close the game and it returns to normal. The I run through my graphics card, I try all the PCI-e slots and nothing changes, it is still just as bad as when I run with no GPU at all. Next i try a much older card, a GPU from around 2010 with only 1GB and only VGA, and it still has the exact same results as when I run with my newer GPU and no GPU at all. I have also checked that I have the right NVIDIA drivers installed too (not that the motherboard/processor uses any NVIDIA drivers, I think).
Through these tests I can only guess that the fault would be somewhere within the motherboard, although I cannot conclude it since unlike with the rest of the components I have no other motherboard I can test them against to see if my other components work as they should. It seems likely that the motherboard would be at fault though, with how it can never detect HDMI/DP audio no matter where the cable goes, my other displays and laptop doesn't have the same issue with each other, and changing graphic cards and slots doesn't work either.
I have reached a dead end here, and don't know what next steps to take. Is there anything I can do about this, any tweaks to the software that could solve this, or anyway to find out not just what component is faulty, but also what part of it doesn't work, and in that case how it could have come to be that way?
I would appreciate any advice you could give me.
|
OPCFW_CODE
|
Job Portal provides a platform for job seekers and employers to connect. It's designed to make the job search process more efficient by providing a centralized location for job listings, job seeker profiles and application management.
Typically allow job seekers to create a profile, upload their resume and search for job listings that match their skills and experience. They can then apply for jobs directly through the portal, making the process faster and more convenient.
For employers, job portals provide a cost-effective way to reach a large pool of job seekers, post job listings and manage job applications. They can also use the portal to screen applicants, schedule interviews and make hiring decisions.
Virtual Interviews can be arrange thru Zoom or other third parties. The Zoom Integration provides a platform where you can create a zoom meeting. You can create, View, Start, and Delete meetings with ease.
Along with that, the zoom meeting created, will be synced with the calendar which will show the meeting details, as well as which are the people joining the meeting at what time. This calendar syncing feature helps to schedule meetings accordingly.
Branch/location/sister concern management refers to the process of managing the various branches, locations, and sister concerns.
Branch refers to a physical location where the company has a presence and can provide services to job seekers and employers. A location, on the other hand refers to a specific area within a city or region where the company operates.
Sister concerns refer to companies that are related to the main job portal company, either through common ownership, shared branding or other forms of affiliation.
The management of branches, locations, and sister concerns is important for a job portal as it helps to ensure that the company is providing consistent services across all its locations, and is able to effectively manage its operations in multiple locations.
This management may involve tracking the performance of each branch and location, managing the staffing of each branch, and coordinating the activities of sister concerns to ensure that they are aligned with the overall strategy and goals of the company.
The dashboard provides an overview of the job portal's key metrics, such as the number of job posts, applicants, and hired candidates.
This feature allows the user to create, edit, and manage job posts, including the job title, description, requirements, and application deadline.
This feature provides a centralized place to manage the information of job applicants, including their resumes, cover letters, and application status.
This feature enables the user to communicate with candidates directly through the job portal, including sending emails and tracking responses.
This feature allows the user to manage events related to recruitment, such as job fairs and information sessions, including scheduling, attendees, and event details.
This feature provides tools for managing the hiring process, including creating interview schedules, tracking progress, and communicating with the hiring team.
This feature enables the creation of custom application forms for job posts, including questions, required fields, and application deadlines.
This feature allows the user to create and customize a career page for the organization, including job listings, company information, and branding elements.
This feature provides tools for managing and organizing candidates, including categorizing, tagging, and filtering.
This feature allows the user to publish job posts on multiple job boards and social media platforms, as well as share job posts within the organization.
The dashboard provides an overview of the job portal's key metrics such as the number of job posts, post wise applicants and hired candidates. It also shows the upcoming interview schedules.
|
OPCFW_CODE
|
How to select rows where a text column starts by, ends by, or contains a string. Including slightly less trivial cases.
To find strings that match a pattern, the
LIKE operator is used.
LIKE as equal
In the most trivial case,
LIKE is identical to the
= operator. These two queries are the same:
SELECT id FROM book WHERE title = 'Don Quijote'; SELECT id FROM book WHERE title LIKE 'Don Quijote';
Any sequence of characters
% as a wildcard character, which means: any sequence of zero or more characters. So the following conditions look for strings that start by, end by, or contain the given strings.
-- start ... WHERE title LIKE '2001%'; -- contain ... WHERE title LIKE '%space%'; -- end ... WHERE title LIKE '%odissey';
Any single character
Something to remember is that, as mentioned above,
% may also mean no characters at all. So, in the first example above, the title
'2001' would match. This may not be what we want.
Another special character recognised by
_, which means exactly one character. So, for example, the following condition will match any text with at least one character or, if you like,
any non-empty text:
WHERE title LIKE '_%';
This is the same as
WHERE CHAR_LENGTH(title) > 0 but, since this syntax is more verbose and slightly varies depending on the DBMS, the
LIKE version may be preferable.
We can also modify the examples above to avoid matching exact words:
-- start ... WHERE title LIKE '2001_%'; -- contain ... WHERE title LIKE '%space_%' OR title LIKE '%_space%'; -- end ... WHERE title LIKE '%_odissey';
We can also check if a column contains more than one given texts. The obvious way is the following:
... WHERE title LIKE '%star%' AND title LIKE '%picard%';
But if we know in which order the texts should appear, the following condition is less verbose and probably faster (though this may depend if the DBMS implements certain optimisations in the first case or not):
... WHERE title LIKE '%star%picard%';
Generally speaking, indexes can only be used for conditions that look for a text at the beginning of a column:
... WHERE title LIKE 'star%';
In this simple case, for the index to be used, it needs to only include the
title column or the
title column needs to be the first column in the index.
In a real world query there are useful more conditions, and many indexes contain multiple columns. But explaining what happens in more complex cases is beyond the scope of this article.
LIKE is case-sensitive, so it also it has an
ILIKE operator which is case-insensitive:
SELECT id FROM book WHERE title ILIKE 'don quijote';
|
OPCFW_CODE
|
Programing Or Programming Projects A project in the visual design world can be as ambitious as others, but so can a functional team and its product. Building modular components between a VBA project and its integration efforts, which have the power to greatly accelerate development, can be accomplished with minimal effort. Let’s face it, that’s not the case, whether developers want to break or remain ‘workin’, they develop in automation projects by themselves. They also have the need in their favor, and the lack of any real-world project developer, like a sales person, is a warning sign. Even though the tools and concepts are applied across many others, one often isn’t enough. To wit: As developers, we want to constantly design new product or product offerings. A better way is somehow to maintain the following in case that something doesn’t fit. In a prototype, we want to make a program in the future that enables us to build individual components on working days and not to replace them in their entirety. We want to continue to model systems that use data (web, game or apps) in order to be presented faster, more efficiently and/or using more resources and time. We’ll give ‘new’ customers a competitive advantage when they sign up for Salesforce or other marketing services. After presenting the new products or framework, let’s focus a little longer on what we want our customers to know (when and where will it be released). First, let’s step it up a little little. The company VBA team plans on taking the technology from us and bringing it to production. The teams have done quite a few projects with us, but this kind of project consists of a set of specifications, many of which we’ll include in the documentation or in a PR. All that has to take place outside the enterprise ecosystem but before we can reach them, will you be left with a task like: What is the project? What makes it unique? What makes it easy? What determines its duration? What needs to be done with it? The team is ready to commit new products (unless the deadline is just click to read more tight) which basically involves new types of engineering. We’ll outline exactly what to do in each. Each changes/supports an important component (or you yourself) while making sure you still have standards in place. What is a component and what is an external or contract component. Unit dependencies and dependency relationships Step three: Build your Visual Development kit. Unit 5 provides a lot of features to make us significantly easier to build a prototype.
What Architecture Does Android Use?
Once it’s finished, we’ll set it up in the final build. Step six: Appolyn. Step nine: Proguard integration. After a couple of weeks of building, let’s wait and see if we’re ready to do it? At that point, after roughly 36 products plus a few generic components, we can easily add some really programming help android studio things into the final build and we’ll most likely build out the build for next weeks. Step 10: Build the last production version. Time is a very significant factor with a large amount of work accomplished in this criticalPrograming Or Programming Then (aka How to do it myself) This is a short summary of the programming techniques I described in our first chapter. Here are the basic lessons the book covers today by reviewing some of my work. There are a few tools and some concepts I use in the book that you can use in crafting your concepts. So just like the exercises in our first chapter, here are five related parts to the same material, what you should begin on: Chapter 1. Learning Some Basic Prerequisites Basic concepts are simple to follow and can be taken with a grain of salt. When I was starting out, I wrote a series of exercises that got me thinking about a little of programming. Many of these exercises I have in the book have their own pages and recipes, so I am using them heavily in the chapters that follow. My first exercise is how to create more complete and/or optimized programming ideas. linked here course there are also more general principles when it comes to programming. A basic programming language is often the hardest to program, so there is no perfect program for each category of a program I am using here, but for most of the rest of the articles, I have done the exact same thing at the end of our exercises—this is a good place to start—and I am happy if so. Here are the guidelines: 1. Beginning with my first program, my first thing to do is to prepare a sequence of code for the program. 1.1 What you are doing: Create code for a program as follows: 1.1 Each line of code is for a specific class or class method called: 1.
How Do I Start An Android App Program?
1.1.4-9.jpg.jpg.jpg2-3.jpg.jpg4-2 T.jpg1.jpg2.jpg4-1 T.jpg1.jpg3.jpg4-2 T.jpg1.jpg3.jpg4-1 T.jpg1.jpg3.jpg4-2 T.
Native Mobile Application Development
jpg1.jpg3.jpg4-2 T.jpg1.jpg3.jpg4-1 T.jpg1.jpg3.jpg4-1 T.jpg1.jpg3.jpg4-1 T.jpg1.jpg3.jpg4-1 T.jpg1.jpg3.jpg4-1 T.jpg1.jpg3.
Head First Android Development.Pdf
jpg4.jpg4.jpg4.jpg4.jpg.jpg.jpg.jpg.jpg.jpg.jpg.jpg.jpg.jpg.jpg.jpg my blog 4 T.jpg1.jpg3.jpg4.
jpg3.jpg4.jpg4.jpg4.jpg4-2 T.jpg1.jpg3.jpg4-1 J.jpg4.jpg3.jpg4.jpg4.jpg5.jpg4.jpg6.jpg4.jpg6.jpg5.jpg7.jpg4.
Where Do I Start Programming For Android?
What Is The Best Programming Language To Make Apps?
jpg.jpg5.jpg5 “Try to understand how these programs work to guide you, rather than a mechanical guess.” As I have said many times in my introduction to writing software, it has taken me back two years and now I am the researcher on how to understand programming in general. It is a lot easier now, after all, if we understand it the best things we learn will come by itself and the whole toolchain will be huge. In the next chapter, I will consider some of the basic concepts that I use often in programming methods. Chapter 1. Maintaining or Designing Your Patterns Today I would argue that a lot of the little things that we learned in a deep learning background are now more amenable to a multi-source implementation or even betterPrograming Or Programming “I make a day out when I only have one client,” says Brian Shaw. “But as an industry, it’s hard to get other people doing it.” The problem with the way our industry rules is that it makes more money for the world and not so much for what others are doing. Working with clients you don’t want and how you want to work. When you are young and are already familiar with what practices you’d perform under a set framework, it makes your work easier. The more you work with people who are from a certain time frame and culture, you’re less likely to spend more than you could if they aren’t. For example: Your client in the office works at 45 right now so they have 45 hours of productivity-based work. For example, if you’re working 40 hours a week, you spend more than 45 minutes a day so you miss the time. So unless you have good technology support, you run the risk of not getting your clients to do nearly the same things you feel that way. Call it the boss’ bug (like the company culture) or the badger’s bug (like the employee culture). If you don’t know you can always get another client, but if you do know and you have access to those skill-sets then you don’t really waste your time unless you have someone in the office who knows but doesn’t know how to. Some of my clients prefer not to work online. Where do I find the newbies? These are the people who don’t know they have an internist/cognitd guy or girl that knows some of their stuff and not has the skills for the job.
Programming Android App
Maybe you don’t have an internist guy, but if you’re like everyone else who works in the corporate office, you could get him since you’re more comfortable with people who can get jobs directly in your office so you don’t have to think about yourself and you don’t have to get a lawyer, and you might have a friend in the office who can get the jobs because he can get his internist[blo] what you have to deal with. Good news, as always: in any meeting at your new client’s desk, always head into the office where the new person may or may not have information about the meeting from the list of options provided or discover here not have some knowledge of/time for an application for work. I got some funny eyes when they came on my lunch break in San Francisco…she kept telling us, “get lost in the back porridge.” They don’t get to visit the lunch table. It goes along with the work that she has done. And many of them do it more than once, but the new workers make the most of that. *Note: This is probably one of few my boss calls this post a “scumbag,” because he pays attention to the fact that it’s not exactly true that there are no clients at all. Instead all of them have their own policies, processes, and culture — things like that. And there are a lot of other, relevant things, like how you run the business of generating revenue — but
|
OPCFW_CODE
|
This week we invited Hayden from Haberdashers Adams’ for a work experience program. Throughout the week, we shared insights, collaborated on projects, and provided Hayden with valuable exposure to our day-to-day operations, fostering an enriching experience for both our team and our talented guest. We thought you’d like to hear it from him first-hand…
Take it away Hayden!
As a student eager to explore the practical side of machine learning (ML) and business intelligence, my week of work experience at Purple Frog Systems turned out to be an eye-opening experience. Here’s a look into my week of work experience, navigating the basics of ML and Power BI in a real-world setting. My first few days at Purple Frog were all about getting acquainted with the team and the projects at hand. The office environment was welcoming, and the team members were patient as they introduced me to the basics of data, specifically ML and Power BI.
I kicked off a collaborative project with Jon where we encountered the challenge of dealing with separated data on loans. Together, we undertook the task of integrating and consolidating this fragmented data, implementing various techniques such as creating and joining tables in SQL Server. This hands-on experience not only enhanced our proficiency in database management but also allowed us to harness the power of structured data for more effective analysis and decision-making.
Teaming up with Lewis next, we harnessed the power of PySpark to clean and preprocess data from Jon’s tables, preparing it for integration into machine learning models. Our focus shifted from classification to regression models, with rigorous testing to optimise performance. Exploring Computer Vision, we trained a model to recognize pizza images, such as pepperoni and margarita. The learning extended into Natural Language Processing, where we looked into training models to understand and respond to human language nuances. Spending time with Lewis this week provided a hands-on, comprehensive experience, showcasing the versatility and practical applications of machine learning in data processing, image recognition, and language comprehension. Exploring ‘real’ projects allowed me to practice what I learned previously at school, making the theoretical aspects more tangible.
Simultaneously, I began exploring Power BI with Tom – a tool that simplifies data visualisation and reporting. The learning curve was gradual, but the hands-on projects helped me create interactive dashboards and reports. It was fascinating to see how data could be transformed into meaningful insights with just a few clicks. I made an interactive report on the same pizzas as earlier, showing pizza revenue over time so we were able to see things like how each pizza contributed to profit, etc. I also used tools online to help find complementary colours for making the reports visually aesthetic too.
During my time, I also had the opportunity to step into the world of Microsoft Fabric with Laura, who provided an insightful introduction to this aspect of the job. Additionally, Laura shared valuable career and university advice, offering top tips that proved to be instrumental in shaping my understanding of both the industry and the academic path ahead. This approach to mentorship not only enriched my technical skills but also provided invaluable guidance for my future.
The team at Purple Frog was always ready to help, answering my questions and providing guidance when needed. Regular meetings and discussions ensured that I felt part of a supportive group, easing the learning process. One of the most rewarding aspects of my work experience visit was witnessing how ML and Power BI directly impacted business decisions. Working on projects that had a real-world impact made the learning process more exciting and motivated me to continue doing my best.
My work experience at Purple Frog Systems was a journey of learning and practical application. It helped me bridge the gap between classroom knowledge and real-world scenarios in ML and Power BI. The experience equipped me with valuable skills and a newfound confidence in navigating the data-driven landscape. It has been a great opportunity to come into an office environment.
|
OPCFW_CODE
|
How to prepare _vimrc file (i.e. the file with the default settings)
1. Copy "C:\Program Files (x86)\Vim\_vimrc" to $HOME/_vimrc.
If you don't have $HOME variable, you may see it in Gvim with the help
2. Edit $HOME/_vimrc, add commands there.
add commands there.
For instance, my favorite tab stop options:
How to make the 'Backsapce' key work properly in Edit mode:
:set backspace=indent,eol,start(more info here: http://vim.wikia.com/wiki/Backspace_and_delete_problems)
How to start Gvim maximized under Windows:
(taken from http://vim.wikia.com/wiki/Maximize_or_set_initial_window_size)
Add the following line to _vimrc:
au GUIEnter * simalt ~x "x on an English Windows version. n on a French one
It is possible to have only one Gvim running:
Edit "file.txt" in server "FILES" if it exists, become server "FILES"
gvim --servername FILES --remote-silent file.txt
This means that you'll have only one Gvim running. New files will be opened in already running Gvim.
More information here:
Hidden characters in GVIM
Display hidden characters:
(taken from http://dinomite.net/2007/vim-tip-show-hidden-characters)
Learn the code of symbol under the cursor: ga
Find the symbol with code (for instance, Tab symbol with hex code 09):
Replace: the same, for instance :s/\%x09/ /gc
(taken from http://durgaprasad.wordpress.com/2007/09/25/find-replace-non-printable-characters-in-vim/)
How to show line numbers:
To turn line numbers on :set nu
To turn line numbers off :set nu!
How to convert the file from Windows to Unix format using GVIM
(i.e. replace \r\n to just \n)
1. Open file in GVIM.
2. :set ff=unix
3. Save the file and exit
How to remove empty lines in GVIM
(more tips here: http://www.rayninfo.co.uk/vimtips.html, great source of GVIM info)
How to make arrow keys work properly in edit mode in Vi on Linux term
Problem: It seems that when I am using VI through putty when I am in insert mode I get escape characters, instead of the cursor moving when I use the arrow keys.
Solution: 1. use vim.
2. Create a .vimrc file in your home directory:
echo syntax enable > ~/.vimrc
Read here: http://www.bluehostforum.com/archive/index.php/t-6700.html
To be continued...
|
OPCFW_CODE
|
I’m 37, and I’ve been a (professional) developer for 16 years. You would have thought that in that time, I’d have figured out an effective work style which delivered the desired outcomes (code cut, products shipped etc) without causing detrimental knock-on effects - but, sadly, you’d be wrong. I think the style in which I practiced my craft for the first 15 years of my career was much the same as every other enthusiastic developer: you put a ton of hours in. 12-16+ hour days, evening and weekend coding marathons, pizza in the keyboard, crunch times, 3am debugging sessions where you just can’t go to bed because you can feel the source of that bug just beyond your fingertips, dammit, desperate last-minute sprints to deadlines where you manage to slot that last piece in, Jack Bauer-like, just before the world goes to hell. If you’re in the demographic I’m talking about, you’re nodding sagely, and probably grinning a little too, reminiscing on past trials and glories. This sort of crazy dedication is respected in our circles, and is pretty much expected of any developer who has claimed to earn their stripes.
But, it turns out this kind of thing is not good for your health - who knew? Those of you who know me or keep up with my blog know that I’ve been dragged kicking and screaming away from my old ways, because of back issues that I initially ignored, then tried to cope with using token accommodations, and finally succumbed to in a big way. Being self-employed, this was a major problem. Crawling out of the pit I dug for myself took a long time and a lot of frustration - I read quite a few productivity books on the subject to try to find answers on how to keep working, and in the end found that the answers you mould for yourself tend to be the best ones. I’d like to share one of the things I learned along the way.
But I’m ‘In The Zone’!!
So, I want to talk about the biggest problem I encountered: concentration periods. I can’t sit at a desk for longer than about an hour at a time now; if I don’t get up and walk around, do some gentle stretching etc, at least this often, I’ll pay for it badly once I do move, and probably over the next few days too. I also can’t realistically work more than a standard 8 hour day without pain any more. The problem with this was that, as a programmer, the style which I developed over 15+ years involved getting gradually ‘Into The Zone’ and coding for very long periods at a time, uninterrupted. This is a common theme among coders, who like to shut themselves away for hours at a time, wear headphones to avoid distractions, have ‘quiet times’ and so on - and it’s also why we tend to react really badly when interrupted. Programming requires concentration, and concentration seems to run on a valve system - it takes time to warm up, and once it’s going, you don’t want to turn it off because starting it up again is a major hassle.
I thought there was no way around this, and had begun to resign myself to just being less productive because of it. However, over the last 6 months in particular, I’ve discovered that, far from being an intractable problem, this ‘slow warm up, long uninterrupted focus time’ approach is to a large degree a learned behaviour, and it’s possible to re-train yourself to cope with things differently. It’s a little like when people learn to adopt polyphasic sleep patterns - it’s not that you can’t do it, it’s just that when you’ve become accustomed to doing things a certain way, changing that is initially very, very hard. But it’s not impossible, given the right amount of motivation and time to adjust.
So, my goal was to acclimatise myself to many shorter work chunks during the day instead of a few very large ones, while still maintaining productivity. The key to this was to learn how to get back ‘In The Zone’ in the shortest time possible - much like the way polyphasic sleepers train themselves to achieve REM sleep more quickly. I’m mostly there now, or at least way better at it than I was, so, what techniques did I use to make this transition?
1. Embrace interruptions
This is less of a technique and more of a deliberate psychological adjustment which cuts across all the practical approaches I’ll cover next. Instead of being the typical coder who avoids interruptions at all costs, you need to accept them, and learn to manage them better. It’s hard - you have to try to set aside years of resisting interruptions and initially, until you adjust, you’ll feel like you can’t get enough done. Many people will probably want to give up, unless there’s something specific motivating them to push through it - for me, daily pain was a great motivator. My main message here is that the transition is just a phase, and that it is possible to be an interruptable programmer who still gets things done. But you have to learn not to fight against it, hence why this is the first point.
2. Maintain context outside of your head at all times
Much of the problem with interruptions is that of losing context. When you’re in that Zone, you’re juggling a whole bunch of context in your head, adjusting it on the fly, and maintaining and tweaking connections between issues constantly. Interruptions make you drop all that, and it takes time to pick it all up again. My answer to this was to externalise as much as possible, on as many levels as possible:
Maintain a running commentary on your current task
I am my very own chronicler. I write notes on what I’m doing all the time, whether it’s adding a comment line to a ticket, committing frequently and writing detailed commit notes (you do use a DVCS to make light commits more practical, right? ;)) scribbling a drawing on (ordered) pieces of paper. This really isn’t that onerous, and in fact externalising your thoughts can often help you clarify them. Basically the guide is that roughly every 30 minutes, I should have generated some new piece of context which is stored somewhere other than my head. If I haven’t, then that’s context I’d have more trouble re-building mentally if I’m interrupted. It doesn’t take much time to do, and it has other benefits too such as recording your thought & decision process.
Ruthlessly ignore tangental issues
You might have noticed that in the last bullet, I used the words ‘current task’, singular. Not ‘tasks’. There is no such thing as having more than one ‘current task’ - there is only the one task you’re actually working on, and distractions.
We probably all use bug trackers / ticket systems to track bugs and feature requests, but when you’re working on a ticket, it’s very common to spot a new bug, or identify an opportunity for improvement, or think of a cool new feature. How many of us go ahead and deal with that right away, because it’s in the area we’re already in, or it’s ‘trivial’, or it’s a cool idea that you want to try right now? I know I did - but I don’t any more; any tangental issues not related to what I’m currently doing get dumped into the ticket system and immediately forgotten until I’m done with the current task, regardless of their size, relevance or priority. It sounds simple and obvious, and this might even be official procedure in your organisation, but I challenge most coders to say that they actually do this all the time. The benefit is that even the tiniest of distractions add an extra level of context that you have to maintain, which is then harder to pick up again after an interruption. For this to work, you need a ticket system which is fast, lightweight, and doesn’t require you to be anal about how much detail you put in initially. You need to be in & out of there in 30 seconds so you can offload that thought without getting distracted - you can flesh it out later.
Always know what you’re doing next
This is one from GTD (‘Next actions’), but it’s a good one. When you come back from a break or interruption, you should spend no time at all figuring out what you need to be doing next. Your ticket system will help you here, and so will the running commentary that hopefully you’ve been keeping on your active task. If you’ve been forced to switch gears or projects, so long as you’ve maintained this external context universally, you should have no issue knowing what the next actions on each item are. The important thing is to have one next action on each project. If you have several, you’ll have to spend time choosing between them, and that’s wasted time (see the next section on prioritisation). At any one time, you should not only have just one current task, but one unambiguous next action on that task. Half the problem of working effectively is knowing what you’re doing next.
I mentioned next actions in the previous section, but how do you decide what comes next? A lot of time can be frittered away agonising over priorities, and I used to struggle with it; I would plan on the assumption that I wanted to do everything on the list, and I just needed to figure out which I needed to do first. I discovered that I could cut the amount of time I spent on planning, and also get better, less ambiguous priorities by inverting the decision making process - to assume a baseline that I wouldn’t do any of the tasks, and assessing the negative outcomes of not doing each one. So instead of ‘which of feature A or B is more important to have?’, it became ‘Let’s assume we ship without feature A and B. What are the issues caused by omitting them in each case?’. It might appear to be a subtle difference, but having to justify inclusion entirely, rather than trying to establish a relative ordering assuming they all get done eventually, tends to tease out more frank evaluations in my experience.
Recognise the benefits of breaks
Much of the above is about limiting the negative aspects of taking breaks, but the fact is, that they have many work-related benefits too. I’m willing to bet that all coders have stayed late at work, or late into the night, trying to fix a problem, only to find that they fix it within 15 minutes the next day, or think of the answer in some unlikely place like the shower. The reason for this is very simple - extended periods of concentration seem productive, and can be on operational / sequential thinking, but for anything else such as creative thinking or problem solving, it’s very often exactly the opposite. Not only do tired minds think less clearly, but often the answer to a problem lies not in more extensive thinking down the current path which you’ve been exploring in vain for the last few hours, but in looking at the problem from a completely different perspective. Long periods of concentration tend to ‘lock in’ current trains of thought, making inspiration and strokes of genius all too rare. Creativity always happens when you’re not trying, and it’s an often under-appreciated but vital element of the programming toolbox. Interrupting that train of thought can actually be a very good thing indeed.
There’s more I could talk about, but that’s quite enough for now I think. I hope someone finds this interesting or useful 😀
|
OPCFW_CODE
|
Helping people with computers... one answer at a time.
Read the article that everyone's commenting on.
I hate the term Broadband probably because as you've said, it's a "fuzzy" description. However, I've been given to understand that there's a difference between ADSl & DSL in that with DSL, one's Uploading speed is about the same as one's download speed whereas ADSL is as you've stated fast down & slow up. Would you care to comment?
I am Psychiatrist but a lay man on these topics. You have explained very nicely about different modes of getting the internet. Big thanks for enlighting such basic things.wish you all the best.
@David: Let me try to explain DSL (Digital Subscriber Line) and its derivatives (with a little help from Wikipedia):
Firstly, the magic of DSL is based on the fact that copper wires, such as telephone wires, are capable of carrying electromagnetic frequencies far beyond those required for the phone itself to work. In effect, the phone system uses only a very small amount of the potential bandwidth the wires could carry. A special note, however, is that these higher frequencies tend to attenuate faster, which is why DSL generally cannot be offered in just any place a telephone system exists -- DSL just doesn't have as much range, as a physical limitation.
So, anyway, what DSL does is use a frequency range above 25 kHz (the telephone system, by comparison, only uses the first 4 kHz available and no more), and further subdivides those frequencies into a number of channels. Those channels are then each assigned as either an upload or upstream channel, and a download or downstream channel (this is not entirely correct, actually the subdivision is first upstream/downstream, and then channels, but it ends up being the same thing in the end).
So that's DSL itself, as a technology. Now, DSL further divides into several implementations, based on a number of factors. The most common (I think) division is between ADSL (Asymmetrical DSL) and a variant of SDSL (Symmetrical DSL) known as SHDSL (Single-pair High-speed DSL).
Firstly, the primary difference between ADSL and SDSL refers almost solely to the way the channels are divided between upstream and downstream. Specifically, in ADSL you get a lot more downstream bandwidth than upstream (Wikipedia reports standards-compliant speeds between 8 and 24 Mbits/s for downstream and 1 to 3.5 Mbits/s for upstream), whereas with SDSL the division is symmetrical, i.e. there is exactly as much upstream bandwidth as there is downstream.
However, SHDSL also uses the frequencies normally reserved for telephony, and is generally marketed to businesses, which is why I doubt you'll see it as an option for a residential contract. To compare speeds, SHDSL (according to Wikipedia) provides up to 4.6 Mbits/s in both directions, barely topping ADSL's upstream maximum of 3.5 Mb/s. However, being a business-class connection, it should also provide far more, uh, "supportive" support.
Hope this helps, and (of course), I may not be entirely correct, but at least this should give you some idea of what's involved.
The pair of wires used to serve customers from a telephone central office can be thought of as a large capacitor. The longer the loop to the customer the more capacitance. After the loop reaches about 18000 feet the attenuation makes the loop unusable for voice communication so the phone company adds inductance in the form of load coils and the voice frequencies in the 400 to 3400 cps can be extended much farther and voice frequency amplification can even be used. The load coils have the affect of filtering out the higher frequencies making the pairs unavailable for "broadband" or carrier frequencies. There are several load schemes that are used but basically you start with a half load section from the central office and then full load sections thereafter. Therefore if you live within about 18000 feet of the telephone central office you can probably receive "DSL" internet service.
AT&T offers a service similar to FIOS with fiber to an enclosure in every neighborhood and copper wire from there to the customer. It's cheaper to provide this than fiber home runs to every customer but the potential bandwidth suffers a little. Nobody wants the enclosure in their "back yard" either.
How do I setup my smart phone to get wireless internet when in "hot spots"?
Firstly , I've read all comments , thanks to all comments posters , and particularley more thanks to OCTAV for his satisfying explanation.
You missed one that is available in a lot of rural areas (mine included) and apparently as a backup option (mainly aimed at businesses) in some cities as well. I don't think it's quite the same as WiMax, but maybe. Seems to be called "Fixed Wireless Internet."
It consists of one or (in our case) a system of wireless transmitters on towers spread throughout the entire valley, and is typically line-of-sight to the tower or (for more $$) a different frequency that can "see through the trees." The transmitters are fed ultimately by the provider's connection to hard-wired (T1 I think here) broadband at some location close enough to broadcast to the first tower in the network.
Subscribers get a special receiver that is mounted on or near their house, then the connection is hard-wired from the receiver into the house, into a special modem, and then to the computer/router using a standard ethernet connection. You CANNOT just pick the service up using a WiFi-enabled computer.
Speeds are based on your subscription level, around here from 750kpbs (upload) to significantly higher (again, here at least, for more $$).
Pros: broadband in locations where the only other option is satellite or dial-up, and fairly consistent service.
Cons: must be line-of-sight (or line-of-sight with only trees in the way), transmitters are subject to breakdowns with subsequent downtime, and appear to be affected at times by the number of active users on a given transmitter.
I won't rant on about satellite cons, but there are more than Leo has listed. I'd recommend satellite only in the absence of other broadband options, and do a lot of research ahead of time, especially on whatever provider is available in your area. Also note that some piggyback on others, for example the Canadian provider Xplornet uses the Hughes satellite network, so many problems Hughes users experience will be similar for Xplornet users. On the "Pros" side of things, there are tons of great help resources on the internet for satellite users, because they can be such a PITA and because tech support is often unhelpful or very slow to respond.
Good article but I need to know the best type connection when I live in two different places depending on time of year and neither place has the same cable company or telephone company. I just want to plug it in and have service when I move between the two places without complications with the computer. Thanks
I've had DSL for a few years and am moving into an area where they only offer dial up OR extended service dsl (thru CenturyTel, or actually it's now CenturyLink, a combo of CenturyTel & Embarq). Does anyone know anything about this extended service dsl?
To post a comment on "How do all these options for connecting to the internet differ?", please return
to that article's main page.
Question? Ask Leo!
The Tip Jar: Buy Leo a Latte!
By Date |
Business Card |
Advertisements do not imply my endorsement of any product or service.
Copyright © 2003-2013 Puget Sound Software, LLC and Leo A. Notenboom
Ask Leo! is a registered trademark ® of Puget Sound Software, LLC
Terms, Conditions & Privacy
Product Reviews, Recommendations and Affiliate Links Disclosure
|
OPCFW_CODE
|
[bug] Device XML issue
Version
Build/Run method
[x] Docker
[ ] PKG
[ ] Manually built (git clone - npm install - npm run build )
Zwave2Mqtt version: 3.0.2
Openzwave Version: 1.6.1061
Describe the bug
parameter config does not show all options from list of the OZW device xml file. for parameter 100 (and 101) it only shows the option 1 to 6, but omitts 0 (disable) and 9 (sensor binary) from the list.
<Value genre="config" index="100" instance="1" label="Enable / Disable Endpoints I2 or select Notification Type and Event" max="9" min="0" size="1" type="list" value="1">
<Help>Enabling I2 means that Endpoint (I2) will be present on UI.
Disabling it will result in hiding the endpoint according to the parameter set value.
Additionally, a Notification Type and Event can be selected for the endpoint.
Endpoint device type selection: notification sensor (1 - 6) sensor binary (9).
NOTE: After parameter change, module has to be re included into the network in order setting to take effect!
Default value 1.</Help>
<Item label="Home Security; Motion Detection, unknown location." value="1"/>
<Item label="Carbon Monoxide; Carbon Monoxide detected, unknown location." value="2"/>
<Item label="Carbon Dioxide; Carbon Dioxide detected, unknown location." value="3"/>
<Item label="Water Alarm; Water Leak detected, unknown location." value="4"/>
<Item label="Heat Alarm; Overheat detected, unknown location" value="5"/>
<Item label="Smoke Alarm; Smoke detected, unknown location" value="6"/>
<Item label="Endpoint, I2 disabled" value="0"/>
<Item label="Sensor binary" value="9"/>
</Value>
To Reproduce
Steps to reproduce the behavior:
Go to 'node configuration' select the parameter 100 (or 101)
Click on 'pull down menu'
you don't have the options, "Endpoint, I2 disabled" and "Sensor binary"
Expected behavior
the options "Endpoint, I2 disabled" and "Sensor binary" should be presented as well.
Additional context
@jtonk Did you checked if those options are present in the ozwcache file? I have no control of the options, I show what I receive from OZW
@robertsLando I assumed it was taken from the OZW 1.6 config folder, but I just double checked and the values 0 and 9 are present in the ozwcache file. Is there a way to send paramters manually? in other words override the pulldown menu
You can try to directly send the value using mqtt apis, but I think this needs some investigation
Daniel - Software Engineer
Support me at:
Github sponsors
On 7 Apr 2020, at 12:08, Jasper Tonk<EMAIL_ADDRESS>wrote:
@robertsLando I assumed it was taken from the OZW 1.6 config folder, but I just double checked and the values 0 and 9 are present in the ozwcache file. Is there a way to send paramters manually? in other words override the pulldown menu
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub, or unsubscribe.
The pulldown shows the first 5 items, but if you scroll on it it show the latter two. Sorry for taking your time...
|
GITHUB_ARCHIVE
|
The global data science platform market is expected to grow at a 30% CAGR annually, from $37.9 billion in 2019 to $140.9 billion by 2024. Data science is one of the most promising fields of this century, but it's continuously changing, making it hard for freshers to keep up with the latest trends and technologies. The market for data scientists is highly competitive, making it hard to break into the industry as a fresher if you don't have proper guidance. Even a single lousy recruit can hurt an organisation's productivity. Thus, companies are particularly careful when working with computer science freshers with no experience. With the help of our comprehensive data science recruiting guide, you will learn what it takes to get hired as a data scientist at a top company. We will delve deep into the subject, exploring topics like what the role entails, the skills required, and even interview preparation.
Data collection, cleaning, organising, and analysing data are all part of the data science process. The objective here is to construct a data model that can use the data to predict upcoming events and make predictions about user actions and patterns. Data scientists employ various methods to find answers to challenging problems, including statistical analysis, mathematical algorithms, and machine learning. You may break it down into five main phases: data capture, data cleaning, data processing, data analysis, and data visualisation.
Here are some of the skills you might want to master to secure a job as a data scientist:
The data science field has a very steep learning curve. Unless you know precisely where to look, it can be tough to break into the industry as a fresher. A data scientist needs real-life experience with various tools and technology stacks, not just project-specific know-how. The ideal candidate for the role would be naturally inquisitive and have an innate sense of using data to solve problems. Possessing a statistical mindset when tackling a problem may also be useful. While it's impossible to know what to expect while organisations are data science recruiting, there are certain things you can do to prepare. There are various things you can do to guarantee your readiness. Here are some tips you can use to improve your chances of getting hired in a data science role:
Before we get into the questions, there are some things you need to know. Given the kind of role a data scientist plays, asking them questions that can accurately judge the way they think is significantly more critical than considering whether their answer is correct or not. The interviewer might try to pique your curiosity by asking you something you're unfamiliar with to test how you handle things under pressure. Here are some of the questions frequently asked during data science interviews:
The high salaries of data scientists reflect the rapid pace of change in their industry. Data scientists are expected to take on increasingly significant tasks as the discipline develops. Data scientists are in high demand worldwide since modern businesses rely heavily on information obtained via data collection and analysis. There is a sizeable market for data scientists in India, as evident from the high data scientist salaries in the country. According to AmbitionBox, an entry-level data scientist can make anywhere between Rs. 6,00,000 and Rs. 10,00,000. While that is great, senior data scientists can earn upwards of Rs. 20,00,000.
To sum it up, data science is an exciting field with a ton of potential. If you wish to break into the industry as a fresher, our advice and tips are something you can rely on, given our experience with helping numerous freshers get hired in a data science role. While getting hired is not exactly tough, having the right guidance is the key to making it in the industry.
|
OPCFW_CODE
|
Paramount MX AG Optical 12.5 IDK Apogee Alta 16803 AstroDon Series 2 E LRGB AstroDon Ha 5nm SII 5nm OIII 3nm SBig Sti ACP observatory control, ACP Scheduler PixInsight, Maxim DL 5, My Images http://www.remarkableheavens.com/
Mach1GTO / G11/G2 (stock) / AT6RC / AT10RC / TMB92SS / Astrodon 50D / STT-8300M / FW8G-STT / PixInsight / etc, etc. Astrobin - Flickr
Two roads diverged in a wood, and I— I took the one less traveled by, And that has made all the difference. -- Robert Frost
Avatar=20 foot (6 meter) pier being poured in 2004.
Quote:I'd like to image a comet at some point. Is there any special things I need to know, for example does tracking rate need to be increased? Where can I find resources from experienced comet imagers?
Quote:One more thing- creating a video is fairly involved... I would hold off on that to start with to avoid a lot of frustration
Warren - Stargazing since the 60's! Scopes: ETX-LS6, ED80T, AT6RC, Lunt LS60T, C9.25 Mounts: Atlas EQ-G, Vixen Portamount II Cameras: Atik 314L+,DMK31AU03,SSAG, ASI120MC Filters: Astrodon LRGB, Orion HA, SII, OIII Acc: Orion 5 place Filter wheels x 2, Flatman Primary Imaging site: Bortle Scale Class 6 Red Zone http://astrobin.com/users/rigel123/
Photo: Qhy9 mono + qhy 5x2" filter wheel and Baader 2" LRGB, Ha, O3 and S2 filters , Meade DSI Pro 2 as guider, TS 9mm off-axis guider Binoculars: Nikon Action 12x50 Telescopes: Skywatcher Evostar 120ED f7.5 APO + TS 2" flattener Mounts: HEQ5 Pro Eyepieces: Nagler 11mm type6, Pentax XW 7mm, Televue 2X barlow 1.25"
Quote:Actually a simple animation like the one I did here was pretty easy and was quite a lot of fun! Give it a go.http://www.pbase.com/dsantiago/image/122734666Derek
Quote:Quote:Actually a simple animation like the one I did here was pretty easy and was quite a lot of fun! Give it a go.
Derek- how did you stretch the individual frames? I ended up using ImageMagick, a command line tool that is not for the faint of heart! My raw data would not display comets unless they are quite bright.
Quote:Actually a simple animation like the one I did here was pretty easy and was quite a lot of fun! Give it a go.
|
OPCFW_CODE
|
Last time out I looked at corporate email from the Mac, and was promptly taught by pseagers a very cool thing I did not know: That being there was actually a way to collect all the unread emails in mail.app in one place: Smart Folders. I have been using that ever since the comment was posted, and now Outlook 2011 is rarely brought up. The mail.app adaptive junk email filters add another very nice layer of protection beyond Outlook 2011's. Enough that mail.app is now my Mac email client of choice, the way Evolution is for me on Linux (posts too numerous to link).
The Big Three
Once past email, the problem becomes the so-called "productivity software". Word Processor. Spreadsheet. Presentation package.
There are numerous ways to go with this. The Mac does not want for solutions here. The gating factor here is interoperability with MS Office. All over the place there are still people using MS Windows, and still using MS Office. They didn't go to the cloud yet (Google Docs, Zoho, ThinkFree, et al). No: They are using locally installed software, and they are emailing .doc, .docx, .xls, and so forth out to those that they love and worth with. They expect me to be able to read it because they are pretty sure I am just like them, and use the same software they do.
As a Linux desktop user for years, I am used to this. Linux adapted to this years ago, and Mac's are able to do the same kinds of adapting, in some cases using Mac versions of the same software.
There used to OpenOffice, and there still is, but due to a nasty breakup, LibreOffice came into existence as its more-or-less successor. Except OpenOffice is now part of Apache. Complicated. For the purposes of this, I'll talk about LibreOffice, but most of it applies to OpenOffice too. I use LibreOffice everywhere, just like I used to use OpenOffice. When I go to Fedora now, LibreOffice is what 'yum' installs and updates, so when I looked at what to put on the Mac, it was an easy choice.
My favorite word processor, bar none, is still Word Perfect. But it runs on an OS I do not even personally have, and I am not putting my personal copy on my corporate machine, so it sits sad and alone on its install disk, waiting for the day that there is a Mac or Linux version again.
In the meantime, I like LibreOffice. I use it for all my reports, especially the ones that have a lot of imbedded pictures in them. It is not page layout software, but it has enough of the layout page controls that I can build fairly nice looking reports fairly quickly. I save everything in .ODF format, and only when I need to send it to a another do I spin off a different format version. Usually PDF, unless they will need to edit it, in which case one of the MS formats like .doc.
The issues of old, like pages laying out oddly when viewed from another package are largely gone. Sometimes the pictures and captions act odd, but by and large there are no serious issues. My spread sheet work is all fairly simple. Not much in the ways of macros or RDB imbedded data searches. Nothing I do around power planning for data centers spreadsheet math-wise, for example, causes issue. It just works. Same as it always does on Linux.
Presentations used to have problems with fonts and page sizes, and it still happens from time to time on really cmplex templates, but nothing I can't live with.
Most of the time, people do not know I created it on the Mac any more than they did when I created it on Linux. Document. Spreadsheet, or Presentation.
Apple's iWork Suite
I have talked to many people here that wish they could just switch over to using Apples premiere office software, namely Pages, Numbers, and Keynote. That the software is beautiful is clear. Easy to use.
Where I have had problems in the past interoperability. I create something in Pages, save it as .doc, and it just does not import quite right. Same thing for Keynote. I have never had that issue with Numbers, but again, my spreadsheets are very simple.
The iWork suite is not updated very often. The current version is iWork '09. it was last updated in July of last year, and that appeared to mostly add OS.X Lion support. not new features or increased compatibility.
Another thing that works against the suite in the corporate world is that it is available, as near as I can see, only in the App store now. No bulk buys.
Pages used to be very page layout oriented, but the last version introduced the ability to run in a word processing or a page layout mode. Kind of an interesting way to think about document creation
Still, for compatibility reasons, I tend to work in LibreOffice.
This one is pretty obvious. Here are Word, Excel, and PowerPoint, plus the previous posted about Outlook. What is not here is Project or Visio. The two I really need. Nor is InfoPath, the XML formatter that some, especially Sharepoint users, seem so fond of.
Internet Explorer is also no longer built for Macs. No loss: the Mac version was not the same code base, and web sites that are stupid enough to require IE usually did not work with the Mac version of IE.
The apps that are here are hybrid Mac / Ribbon look and feel. Not bad. Very usable. Very compatible.
Communicator is also available. I have 13.1.3 at this writing. It allows me to share desktops with MS Windows users, which is nice.
In summary, I can maintain nearly 100% interoperability with the MS Office users of the world. I had this on Linux before now, and oddly it took a while for the Mac to catch up with Linux here. OpenOffice (pre LibreOffice) took a very long time to go abut creating a port of their software ot the Mac. So long in fact that back in the early days of using a Mac at home, I used another project, called NeoOffice. NeoOffice uses the OpenOffice code base, but the authors were far faster than OpenOffice at porting it. For years it was the only good office suite available on the Mac. It predated iWork, and the Apple sourced office suite on the Mac before iWork, AppleWorks, was ... suboptimal.
NeoOffice looks more like a Mac app than, say, OpenOffice. Its fast, and it is updated quite frequently to stay current.
So, there are at least five valid office suites for the Mac now. All of them work, most are very compatible, and if you are coming over from either Linux or MS Windows, there is something that you will find that makes you comfortable working on the Mac at the office.
|
OPCFW_CODE
|
PHP Fatal error: Call to undefined function apache_getenv()
Host: Amazon Web Services
Domain: Godaddy
SSL: Godaddy
Framework: Laravel 4.2
PHP: PHP5.6
Apache: Apache 2.4.16
File located: /var/www/html/test
My code:
$api_request = 'https://'.apache_getenv("HTTP_HOST") . apache_getenv("REQUEST_URI");
Error: PHP Fatal error: Call to undefined function apache_getenv()
It looks like it doesn't exist.
Laravel App Located at: html. Then another two folders with laravel: test and live.
html/ (Laravel app landing page)
app/
bootstrap/
packages/
public/
vendor/
test/ (Laravel app)
live/ (Laravel app)
index.php
It works properly at html app. but in test app the function doesn't exist anymore.
Is your Server API the Apache handler?
@ÁlvaroGonzález Server API Apache 2.0 Handler
What's wrong with using $_SERVER?
maybe interesting? getenv — Gets the value of an environment variable.
@JonStirling I tried. Gives me the error of undefined index HTTP_HOST
I think we need a clarification... Are we talking about a web application (that runs trough Apache) or some command line test suite?
@ÁlvaroGonzález check my answer with follow up question. this could help a lot of people hehe
@JonStirling check my answer with follow up question. this could help a lot of people hehe
@RyanVincent check my answer with follow up question. this could help a lot of people hehe
Looks like apache_getenv is disabled in your php.ini
Enable it by
Edit php.ini (By default /etc/php.ini)
Remove apache_getenv from disable_functions section
save and exit.
restart apache/php handler.
what word should I search in php.ini?
i found it but its empty disable_functions =
There was nothing to erase.
Are you using a shared hosting environment? If yes then there might be override in server's main php.ini.
check my answer with follow up question. this could help a lot of people hehe
The error message is "PHP Fatal error: Call to undefined function apache_getenv()", not "Warning: apache_getenv() has been disabled for security reasons".
Can you please try
$_SERVER['HTTP_HOST'] and $_SERVER['REQUEST_URI']
In place of apache_getenv some times apache does not allow you to access the functions without the module activation.
Hope this will help you.
I tried. Gives me the error of undefined index HTTP_HOST
check my answer with follow up question. this could help a lot of people hehe
@MarlonBuendia - Seriously, all symptoms suggest you are running PHP from the command line, not through a web server.
|
STACK_EXCHANGE
|
Network Adapter error when accessing a web app from my desktop on a container in the same network as the oracle database container
I’m running the latest oracle container and a tomcat container tomcat:9.0.22-jdk8 on my own network and I used the following commands:
docker network create mynetwork
docker run -dit -p 8080:8080 -e JPDA_ADDRESS=8000 -p 8000:8000 -e JAVA_OPTS=’-Dconfig.file=/usr/local/tomcat/temp/config.properties -Xmx512m’ --name rcmc rcm-container:0.1 catalina.sh jpda run
docker run -dit -p 1521:1521 --name rcmdb rcmoracledatabase:0.1
docker network connect mynetwork rcmc
docker network connect mynetwork rcmdb
I can login from my desktop to the database container without any issues using these tnsnames.ora entries:
ORCLCDB=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP_ADDRESS>)(PORT=1521))
(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCLCDB.localdomain)))
ORCLPDB1=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP_ADDRESS>)(PORT=1521))
(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCLPDB1.localdomain)))
When I try to login from my webapp at http://localhost:8080/ I get a server 500 error with the following exceptions:
oracle.net.ns.NetException: The Network Adapter could not establish the connection
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
I’ve tried the following JDBC connection strings, but I can’t seem to connect to the oracle container.
jdbc:oracle:thin:@rcmdb:1521:ORCLCDB
jdbc:oracle:thin:@rcmdb:1521:ORCLCDB.localdomain
jdbc:oracle:thin:@<IP_ADDRESS>:1521:ORCLCDB
jdbc:oracle:thin:@<IP_ADDRESS>:1521:ORCLCDB.localdomain
These commands work on my co-worker’s laptop, but he’s running docker toolbox on his desktop
My docker info
Kernel Version: 4.9.184-linuxkit
Operating System: Docker Desktop
OSType: linux
His docker info
Kernel Version: 4.14.116-boot2docker
Operating System: Boot2Docker 18.09.6 (TCL 8.2.1)
OSType: linux
Am I missing something? Are there any settings that I need to look into?
Thanks in advance for any info.
In reading this documentation, the following worked
jdbc:oracle:thin:@host.docker.internal:1521:ORCLCDB
|
STACK_EXCHANGE
|
[01:03] <dabblerdude> genii: Hey genii, so my system froze up earlier again; I went into a TTY, logged in, and typed "killall systemd."
[01:04] <dabblerdude> Doing that command got me back to the login screen.
[05:57] <ced-no> jello
[05:57] <ced-no> hello
[06:15] <ced41> Help, I booted up xubuntu today and it was working yesterday, and when I exit, I get the Karnel Panic, I'm guessing it's because it's trying to mount to /root/*, but I don't know how to fix it.
[06:28] <diogenes_> ced41, are you sure you didn't mess with anything like gparted, partitions etc.?
[06:31] <ced41> diogenes_I tweaked the partitions, but not the xubuntu partition.
[06:43] <ced0180> \msg ced0180 REGISTER 5158<EMAIL_ADDRESS>[06:46] <ced0180> /msg NickServ REGISTER
[07:15] <ced90> diogenes_ Sorry, I've been out of the office for a while.
[14:52] <Xeroine> Hello, I downloaded Focal Fossa LTS release 20.04 and wrote the .iso file using dd but it doesn't boot for some reason. I've triple checked secure boot is disabled, changed boot order, etc. I think everything should be correctly configured in the UEFI firmware settings but it just doesn't boot. can anyone help?
[14:53] <Xeroine> it just boots me into the GRUB menu not from the usb
[14:57] <Maik> you sure you DD'ed it properly?
[14:58] <Xeroine> I think so, idk I guess I'll try again
[14:58] <Maik> did you also try another usb port?
[14:59] <Maik> otherwise try using the usb startup creator, it's in the repo's
[14:59] <Xeroine> aight
[15:03] <xu-irc46w> Hello. It's my first time here. I'm from Switzerland, old Linux user (since 1996/7). I'm trying to help people use Xubuntu without using a command line interface.
[15:06] <Maik> xu-irc46w: this channel is for support questions related to Xubuntu. #xubuntu-offtopic is for casual talk. :)
[15:07] <Maik> also
[15:07] <Maik> !discuss
[15:13] <xu-irc46w> So, I'm at the right place. The problem is (Xubuntu 20.04): a person must enter the wlan Mac Address allow access to her phone wifi gateway. How can she find the MAC address of her wlan interface without any established connection? In the connection menu, the Connection Information is grayed out.
|
UBUNTU_IRC
|
/*
* Although this code is an example of how to get started using
* this bot framework, it is not an example of what to impl
* in production code.
*
* DO NOT USE IN PRODUCTION!
*
*/
require('./handle_sigint');
const DynamicTwitchBot = require('../dynamic_twitch_bot.js');
const dtBot = new DynamicTwitchBot({
twitchClient: {
username: process.env.TWITCH_USERNAME,
token: process.env.TWITCH_TOKEN,
channels: ['danonthemoon']
},
storageManager: {
defaultStorage: 'memory'
}
//rbac: { enabled: false }
});
dtBot.init();
dtBot.rbac.addRole('admin', {
can: ['echo'],
inherits: ['default']
});
dtBot.rbac.addRole('default', {
can: [ 'e', 'devmoon', 'devearth', 'devdan' ]
});
dtBot.rbac.addUser('danonthemoon', 'admin');
dtBot.addRule({
name: 'echo',
aliases: 'e',
args: 'message',
handler: async (params) => {
// needs user message sanitization!
return params.args.message;
}
});
dtBot.storageManager.add('moonStorage');
const moonStorage = dtBot.storageManager.get('moonStorage');
moonStorage.init()
.then(async (success) => {
if (!success) throw new Error('could not init moonStorage');
await moonStorage.add('members', []);
dtBot.addRule({
name: 'devmoon',
handler: async (params) => {
const bot = params.bot;
const storageManager = bot.storageManager;
const mS = storageManager.get('moonStorage');
let msMembers = await mS.get('members');
const index = msMembers.indexOf(params.username);
if (index > -1) {
return `@${params.username}, you are already on the moon :)`;
}
if (!params.username) return 'need to have a username to join the moon';
msMembers.push(params.username);
const success = await mS.edit('members', msMembers);
if (!success) return `@${params.username} failed to make it to the moon! Try again!`;
return `@${params.username}, welcome to the moon!`;
}
});
dtBot.addRule({
name: 'devearth',
handler: async (params) => {
const bot = params.bot;
const storageManager = bot.storageManager;
const mS = storageManager.get('moonStorage');
const msMembers = await mS.get('members');
const index = msMembers.indexOf(params.username);
if (index > -1) {
msMembers.splice(index, 1);
const success = await mS.edit('members', msMembers);
if (!success) return `@${params.username} failed to leave the moon! Try again!`;
return `@${params.username}, came back down to earth :(`;
}
if (!params.username) return 'need to have a username to be on earth';
return `@${params.username}, you are already on the earth...`;
}
});
dtBot.addRule({
name: 'devdan',
handler: async (params) => {
const bot = params.bot;
const storageManager = bot.storageManager;
const mS = storageManager.get('moonStorage');
const msMembers = await mS.get('members');
return `Heres the current moon party: ${msMembers.join(', ')}`;
}
});
});
dtBot.start();
|
STACK_EDU
|
Windows 10 turns display off after minutes instead of hours
Has anyone else seen it where Windows 10 turns the display off after a short amount of time instead of the number of hours that you tell it to? I had set my computer at work to turn the displays (I have 2 monitors) off after 2 hours, but when I lock it out, they go off after only about a couple of minutes. I have even tried setting it to turn the displays off after 5 hours, but they still go off after only a small number of minutes after locking the computer out. I do have a screen saver set to run after 5 minutes of inactivity, but it's not the one that just blanks the screen, so I know it's not the screen saver kicking in. (And I don't see anything in the screen saver settings that says to blank the screen after so many minutes.)
Is there any fix for it, or do I have to forget the whole thing and just turn the monitors on and off when I go home and come back in? (This is very annoying.)
I have a feeling turning off while locked and turning off while unlocked are probably two different things. If the computer is locked, then what would the screen need to stay on for?
When the screen is locked, a different timeout applies—1 minute by default. D.Gmina’s answer tells you how to make it configurable.
Open the Run command, type regedit, and click OK to open the registry.
Browse the following path:
HKEYLOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\7516b95f-f776-4464-8c53-06167f40cc99\8EC4B3A5-6868-48c2-BE75-4F3044BE88A7
On the right side, double-click the Attributes DWORD.
Change the value from 1 to 2.
Click OK.
Use the Windows key + X keyboard shortcut to open the Power User menu and select Power Options.
Click the Change plan settings link for the selected plan.
Click the Change advanced power settings link.
On Advanced settings, scroll down and expand the Display settings.
You should now see the Console lock display off timeout option, double-click to expand.
Change the default time of 1 minute to the time you want, in minutes.
Click Apply.
Click OK to complete the task.
Thanks. I'll see how it goes and then give the check mark if it works.
Ok, I've tried your solution, setting the time in that setting to 180 minutes (2.5 hours) and left the displays physically turned on overnight after locking out the computer, but when I got in to work in the morning, they were still on.
Works great, thank you!
Just deactivate the SCREEN SAVER!
Go to:
SCREEN LOCKER
Choose Screen Protection = NONE
Do not choose SEE THIS SCREEN AT BOOT SESION
Reboot and it is done.
|
STACK_EXCHANGE
|
How does Nextjs work
Next.Js- How it works and its usage!
It is best for you if you have zero to little information about next.js, you have used React in the past, and you’re looking forward to diving extra into the React ecosystem, in particular server-facet rendering. With this let’s start our topic “How does Nextjs work”!
I find next.js a super tool to create web programs, and on the quit of this put up I hope you’ll be as enthusiastic about it as I’m. And I hope it will help you examine next.js!
Next.js is React framework to perform all of this in a very easy way, but it is now not constrained to this. It’s marketed by means of its creators as a zero-configuration, single-command toolchain for React apps.
It offers a commonplace structure that permits you to without problems build a front-end React software, and transparently handles server-side rendering for you.
Process for getting started with Next.js
First of all, for next.js you should have node.js installed on your system and that’s all. Next.js is like any other node.js application — you need npm or Yarn to install dependencies.
First step- you have to create a folder and give it a name of your preference. we will name it nextjs-app.
After developing the nextjs-app folder, open it at the terminal. Run npm init command to create the package.json file.
After that, we must deploy our dependencies.
installing next.js, using Yarn, type
yarn add next
// using npm --save npm i next --save
Then we must install React because next.js makes use of React. The first line beneath makes use of Yarn for the setup.
yarn add react react-dom
npm i react react-dom --save
Two necessary folders pages and static must be created, otherwise next.js won’t work.
mkdir pages static
next.js-app -pages -static -package.json
After this execute “npm next dev” and open “http://localhost:4000/” on the browser.
So after all this, we have to create a homepage and index.js(entry point).
touch index.js home.js
And you can code your React component for which next.js is used. Next.js has a live reload function. All you want to do is simply trade and store, and next.js will collect and reload the app mechanically for you.
Another way is using create-next-app CLI
npx create-next-app my-app
Main aspects which help next.js perform!
Next.js performs server-facet rendering via default. This makes your application optimized. also, you can combine any middleware along with express.js or Node.js, and you may run any database together with MongoDB or MySQL.
Speaking of SEO, next.js comes with a Head thing that allows you to add and make dynamic meta tags. It’s my favorite feature — you could make custom and dynamic meta tags. those make your website able to be listed through engines like google.
Interested in learning “Blockchain and Solidity”? Click here!
This is some other one of the notable capabilities of next.js. when you operate the create-react app, you commonly need to install react-router and create its custom configuration.
next.js comes with its personal routers with zero configs. You don’t need any extra configuration of your routers. Just create your web page within the pages folder and next.js will take care of all routing configurations.
Lazy loading makes your software deliver a better user revel in. now and again the web page might take time to load. The person can also abandon your app if the loading takes extra than 30 seconds.
The way to avoid that is to use some trick to indicate to the consumer that the page is loading, for an instance by means of displaying a spinner. Lazy loading or code splitting is one of the capabilities that allow you to deal with, and control, slow loading so you only load the necessary code on your page.
Free money-back guarantee
Unlimited access to all platform courses
100's of practice projects included
ChatGPT Based Instant AI Help
Structured React.js/Next.js Full-Stack Roadmap To Get A Job
Exclusive community for events, workshops
Sharing is caring
Did you like what Vishnupriya wrote? Thank them for their work by sharing it on social media.
- 13 Reasons why you should use Next.js
- Next.js Versions
- Next.js production tips and checklist
- Next.js eCommerce
|
OPCFW_CODE
|
Godot fails to cross-compile to Windows using MinGW, target=debug and use_lto=yes
Godot version:
3.4-beta e9909b763af
OS/device including version:
Fedora 34 host.
GCC/x86_64-w64-mingw32/10.3.1
Issue description:
[100%] Linking Program ==> bin/godot.windows.tools.64.exe
/usr/lib/gcc/x86_64-w64-mingw32/10.3.1/../../../../x86_64-w64-mingw32/bin/ld: btAlignedAllocator.windows.tools.64.o (symbol from plugin): warning: no symbol for section '_Z14btAlignPointerIcEPT_S1_y' found
lto1: error: two or more sections for .gnu.lto__ZN4ListIPN9Octree_CLIN17VisualServerScene8InstanceELb1E16DefaultAllocatorE8PairDataES3_E5clearEv.9678572.e6925aa0daa7c15
(null):0: confused by earlier errors, bailing out
make: *** [/tmp/cchKQabt.mk:17: /tmp/godot.windows.tools.64.exe.s4Jwiu.ltrans5.ltrans.o] Error 1
make: *** Waiting for unfinished jobs....
lto-wrapper: fatal error: make returned 2 exit status
compilation terminated.
/usr/lib/gcc/x86_64-w64-mingw32/10.3.1/../../../../x86_64-w64-mingw32/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status
scons: *** [bin/godot.windows.tools.64.exe] Error 1
scons: building terminated because of errors.
Not sure if this should/can be supported, but the error is here. The use case is to debug issues caused by LTO, see goostengine/goost#83 (which seems to be jpgd issue).
Steps to reproduce:
scons platform=windows target=debug use_lto=yes debug_symbols=yes bits=64
debug_symbols=yes likely not needed but that's what I used while compiling.
Minimal reproduction project:
N/A
I can confirm the issue on Mageia 8 with:
mingw64-binutils-2.34-2.mga8
mingw64-gcc-10.2.1-2.mga8
mingw64-headers-8.0.0-1.mga8
For the reference, I tried with the bullet module disabled (as the error references a bullet struct), but that just moves the issue someplace else:
lto1: error: two or more sections for .gnu.lto__ZN3MapIN18CanvasItemMaterial11MaterialKeyENS0_10ShaderDataE10ComparatorIS1_E16DefaultAllocatorE5_Data10_free_rootEv.7524010.3187f6e5341ddf31
This is also a problem for a native debug LTO MinGW Windows build of 3.2.2
Still valid in 3.5-beta. I stumbled upon this by accident again when I forgot to specify target when using production=yes.
Still valid in 3.4.2 release. When use_mingw=yes target=release_debug use_lto=yes
@Atem1995 What error are you having exactly, on what OS, and with what mingw versions?
This error is about target=debug, not release_debug. Building target=debug with LTO is not really useful and unsupported, and what this issue is about (finding a fix would still be good, but there's not much reason to do such a build).
@akien-mga I tried to compile the source code for the 3.4.2 release In window 10. Use x86_64-w64-mingw32-gcc 11.2.0. Command is “Scons -j 12 platform= Windows bits=64 Tools =yes USe_mingw =yes Target = release_DEBUG USe_LTO =yes”. It took a long time in the final phase, and then ultimately no executable was generated. I did it with reference to official documentation, which could have been generated without using mingw.
Thanks, that confirms that it's not the same issue. For this issue you should see an error like the one described in the first post: warning: no symbol for section '_Z14btAlignPointerIcEPT_S1_y' found.
In your case, is there really no error in the terminal?
It's normal that it takes a very long time to link with LTO, it can easily take 30 min or more. But ultimately it should finish with scons: done building targets and you should have a binary in the bin folder. If it doesn't do that, then most likely there was an error printed in the terminal.
@akien-mga Thank you for your attention to my problems. Please wait a moment, I just tried compiling again and am waiting for the result or error. News will be posted.
Besides the warnings, this is the error:
C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/11.2.0/../../../../x86_64-w64-mingw32/bin/ld.exe: i386 architecture of input file `platform\\windows\\godot_res.windows.opt.tools.64.o' is incompatible with i386:x86-64 output
collect2.exe: error: ld returned 1 exit status\n"
That seems to be the same issue as #40286, we should continue the discussion there.
I don't know if it's worth creating a new issue, but if compiled with the flags optimize=size use_lto=yes i got errors like:
/usr/lib/gcc/i686-w64-mingw32/10.3.1/../../../../i686-w64-mingw32/bin/ld: /tmp/godot.windows.opt.32.exe.XFme3F.ltrans127.ltrans.o:<artificial>:(.text+0x3343f): undefined reference to `GDScriptParser::DataType::DataType() [clone .lto_priv.1] [clone .lto_priv.0]'
/usr/lib/gcc/i686-w64-mingw32/10.3.1/../../../../i686-w64-mingw32/bin/ld: /tmp/godot.windows.opt.32.exe.XFme3F.ltrans127.ltrans.o:<artificial>:(.text+0x334e2): undefined reference to `GDScriptParser::DataType::DataType() [clone .lto_priv.1] [clone .lto_priv.0]'
/usr/lib/gcc/i686-w64-mingw32/10.3.1/../../../../i686-w64-mingw32/bin/ld: /tmp/godot.windows.opt.32.exe.XFme3F.ltrans127.ltrans.o:<artificial>:(.text+0x335ec): undefined reference to `GDScriptParser::DataType::DataType() [clone .lto_priv.1] [clone .lto_priv.0]'
/usr/lib/gcc/i686-w64-mingw32/10.3.1/../../../../i686-w64-mingw32/bin/ld: /tmp/godot.windows.opt.32.exe.XFme3F.ltrans127.ltrans.o:<artificial>:(.text+0x33b67): undefined reference to `Ref<GDScript>::Ref<Script>(Ref<Script> const&) [clone .lto_priv.1] [clone .lto_priv.0]'
collect2: error: ld returned 1 exit status
scons: *** [bin/godot.windows.opt.32.exe] Error 1
scons: building terminated because of errors.
Full flags: scons -j4 platform=windows use_mingw=yes target=release bits=32 tools=no production=yes verbose=yes warnings=no progress=no optimize=size
Compiled in Docker with Fedora 34, GCC/x86_64-w64-mingw32/10.3.1, Godot 3.4.4
Is this a documentation issue of the incompatible optimise=size and use_lto=True flags with mingw, or is this a bug?
I'll close this as wontfix, as I don't think it's fixable. GCC LTO just isn't capable of dealing with too large objects, which they inevitably are when debug symbols are enabled (hundreds of MBs for some).
It's also not a useful build configuration (LTO depends on doing optimizations, debugging depends on not doing optimizations), so aside from the theoretical wish for all build options to work, there's not much to gain here by trying to workaround GCC LTO bugs.
|
GITHUB_ARCHIVE
|
Microsoft Dynamics CRM 2015 Has Plenty Of Appeal For Non-Profits
Microsoft announces the release of Dynamics CRM 2015 expected to become available in Q4 2014 and announces key features designed to increase collaboration within the organization to improve fundraising campaigns and so
Toronto, Canada, October 1, 2014 (Newswire.com) - Microsoft is providing solutions that enable fundraisers, campaign managers and other departments to deliver amazing constituent experiences – together. With the release of Dynamics CRM 2015 Microsoft has focused on breaking down the silos between Marketing, Sales, Customer Service and Social Communication. They also continue to deepen Dynamics CRM’s inter-operability with other leading Microsoft applications such as Office 365, Lync, Yammer, Skype and SharePoint. “Unlike vendors that want to separate businesses by selling them countless different clouds and solutions, we have designed Microsoft Dynamics CRM to facilitate the kind of collaboration that businesses need to thrive and grow,” said Bob Stutz, corporate vice president, Microsoft Dynamics CRM. This release is designed to enhance the collaboration across the organization and meet the needs of evolving organizations and their constituents.
“At Altus Dynamics we know our customers in the non-profit and public sector will benefit immensely from the new campaign management and social listening features coming in Dynamics CRM 2015,” comments James Faw, CTO at, Altus Dynamics. “It will break down operational and solution silos and create a more cohesive understanding of the success drivers throughout the organization.”
New campaign features such as email templates, drag and drop email components, as well as A/B and split testing offer more built in email marketing tools for campaign managers, allow them to eliminate some third party tools and work more efficiently within one product. This will also give them better campaign reporting options and an overall better picture of campaign success and performance.
Microsoft’s in-built Social Listening tool is something our non-profit/ public sector clients will also love. Organizations can monitor their social activity within Dynamics CRM, pulling information from Twitter, Facebook, Blogs, and other social networks. Within Dynamics CRM, you will see what people are saying about your organization and gain social insight by narrowing data sets by location or social network in 19 different languages from all over the world! Building on that theme, it was recently announced that Dynamics CRM Online is now available for purchase in more than 65 markets worldwide and they expect to reach over 130 markets in 44 languages by the end of calendar year 2014. This is a huge plus for global non-profits allowing them to use one system throughout the entire organization containing all the same information just offered in a variety of languages for remote sites.
“Technology plays an important role in allowing organizations to engage effectively with their constituents. Dynamics CRM 2015 will help organizations link their funders, member managers, student recruiters, campaign managers, service teams etc with the insights they need to deliver amazing experiences and add real value to the work carried out in their communities.” adds Colin Dickinson, CEO and Managing Partner at Altus Dynamics.
About Altus Dynamics
Altus Dynamics is a Canadian based information technology consulting and services company applying practical innovation through services and solutions that deliver tangible results for non-profit, education and government clients. Services include application development, ERP and CRM implementations utilizing the Microsoft Dynamics platform, including Microsoft Dynamics® NAV and Microsoft Dynamics® CRM as well as Microsoft SharePoint for employee portals.
Founded in 2003 and headquartered in Toronto, ON, Altus Dynamics operates across Canada and the United States. Altus Dynamics is a multi-award winning Gold Certified Partner, and Microsoft Dynamics Industry Solutions Vendor (“ISV”). For more information about Altus Dynamics please visit www.altusdynamics.com.
|
OPCFW_CODE
|
How to get values to create world file format?
i have researched regarding making world file format and tried to create one using c++ in windows, can somebody confirm if where i am getting the values from are correct? the literature regarding world file formats are quite vague. i don't use rotation so i assume value D and E are always zero..
Line 1: A: pixel size in the x-direction in map units/pixel.
p->GetDeviceCaps(HORZRES); // HORZRES is defined in wingdi.h as Horizontal width in
pixels
Line 2: D: rotation about y-axis // always 0 as i don't use rotation
Line 3: B: rotation about x-axis // always 0 as i don't use rotation
Line 4: E: pixel size in the y-direction in map units, almost always negative
p->GetDeviceCaps(VERTRES); // VERTRES is defined in wingdi.h as Horizontal width in
pixels
Line 5: C: x-coordinate of the center of the upper left pixel
// how do you get this value?
Line 6: F: y-coordinate of the center of the upper left pixel
// how do you get this value?
To populate the values it is necessary to know what software you are developing for. In most software there is an extent object of the frame that will have all the necessary values.
say i am creating a gis application and i need to use the gdi of windows to get those values, how do i go about it?
If you are creating an application you must know the extent of the frame and then the number of pixels wide and high... I'll put in an Esri example which may shed some light on it.
World files, as used by Esri, GDAL etc.. have a standard format:
Normally the rotation is 0 for both values (not rotated) but the values form a 6 parameter Affine Transformation between cells and the world; if you understand the maths then you could conceivably populate those numbers for a fine rotation. Here is some code for a GDAL GeoTransform array that is very similar to a world file:
double GeoTransform[6];
GeoTransform[0] = Xmin; // Upper Left X
GeoTransform[1] = CellSize; // W-E pixel size
GeoTransform[2] = 0; // Rotation, 0 if 'North Up'
GeoTransform[3] = Ymax; // Upper Left Y
GeoTransform[4] = 0; // Rotation, 0 if 'North Up'
GeoTransform[5] = -CellSize; // N-S pixel size
Whereas a world file is populated like:
Cell Width
0
0
Cell Height (negative)
X coordinate of upper left cell (centre)
Y coordinate of upper left cell (centre)
Note that N-S pixel size is always negative as rasters start at the upper left and read downward. All values are in world units (metres, feet, degrees, inches etc.).
Here is an example of how I would calculate/create a world file in Esri objects:
void GetWorldValuesFromActiveView(ESRI.ArcGIS.Framework.IApplication pApp)
{
ESRI.ArcGIS.ArcMapUI.IMxDocument pDoc = (ESRI.ArcGIS.ArcMapUI.IMxDocument)pApp.Document;
ESRI.ArcGIS.Carto.IActiveView pView = pDoc.ActiveView;
ESRI.ArcGIS.Geometry.IEnvelope pExtent = pView.Extent; // the display bounds in world units
ESRI.ArcGIS.esriSystem.tagRECT pPixBnd = pView.ExportFrame; // the display bounds in screen units
// cells wide and high, I am using absolute value
// as (very rarely) a screen can have negative coordinates
int Rows = Math.Abs( pPixBnd.top - pPixBnd.bottom );
int Cols = Math.Abs( pPixBnd.right - pPixBnd.left );
double Width = pExtent.XMax - pExtent.XMin; // width in 'world' units
double Height = pExtent.YMax - pExtent.YMin;
double CellX = Width / Cols;
double CellY = Height / Rows;
double ULX = pExtent.XMin + (CellX / 2);
double ULY = pExtent.YMax - (CellY / 2);
using (System.IO.StreamWriter pWorldWrite = new System.IO.StreamWriter("c:\\path\\to\\world\\file.tfw"))
{
pWorldWrite.WriteLine(CellX.ToString());
pWorldWrite.WriteLine("0");
pWorldWrite.WriteLine("0");
pWorldWrite.WriteLine("-" + CellY.ToString());
pWorldWrite.WriteLine(ULX.ToString());
pWorldWrite.WriteLine(ULY.ToString());
}
}
As you can see from the application you must know how many pixels the window is but also what extent 'on the ground' that those pixels represent. If you are writing your own application you must know these values at some point as it's intrinsic to rendering to display.
You need to transform a half-resolution to convert between corner and centre references.
@MikeT, some software assumes corner, others assume centre, the trick is knowing which you are working with. The extent rectangle is the edge of the frame so, as you said, you will probably need to add half a cell to the values. Most of the time though a half cell displacement isn't a big deal.
thank you for answering the question. However, the answer is still quite vague. can anyone show example on how to do this using gdi in windows? i saw this site when i was researching: http://spatialhorizons.com/2007/09/26/using-qgis-4-raster-images/ , but this seems to apply only for rasters with long and lat coordinates. windows gdi seems to work using mm/pixel.
How does windows gdi relate to the world? Surely you're starting with a map of something, what created that map? Geographic coordinates / projected coordinates all work the same in this instance; there is a .prj file often accompanying the image/world that specifies the spatial reference and therefore units.
It's not software dependent. The matrix used in world files are referenced to the cell center, and GDAL's GeoTransform matrix is referenced to the cell corner. No exceptions.
I understand that @MikeT, but how you obtain numbers to populate those values is software dependent. For example in Esri I would use the IActiveView.Extent and divide the width/height by the pixels of the form to get cell size, IActiveView.Extent.XMin + (CellSizeX / 2) is upper left cell centre X etc...
|
STACK_EXCHANGE
|
add .net sdk preview3 fsprojs to core3
using .net core sdk 1.0.0-preview3-004007
To test:
build paket build (becuase some paket files are rrequired as sources)
cd src\Paket.Core
dotnet restore Paket.Core.preview3.fsproj
dotnet build Paket.Core.preview3.fsproj
Atm i get one error
ProjectFile.fs(1341,35): error FS0039: The record label or namespace 'PackagesConfigFile' is not defined [e:\github\Paket\src\Paket.Core\Paket.Core.preview3.fsproj]
same error if i build src\Paket.Core\Paket.Core project.json with preview2, so is just something new?
Nvm fixed :D
Paket.Core.preview3 is ok
works with dotnet pack Paket.Core.preview3.fsproj too
rebased it on master and will take a look tomorrow. thanks buddy
ok this is pretty cool.
Restore and Build seem to work on master. @enricosada could you please take a look on Pack? I commented it out for now
Microsoft.FSharp.Core.netcore is that stuff merged into FSharp.Core alpha package?
/cc @dsyme @enricosada @KevinRansom
Try workaround for pack https://github.com/dotnet/netcorecli-fsc/blob/master/examples/preview3/console-crossgen/README.md
Yes is the same. Use fsharp.core if you want.
There are issues with fsharp.core a netcore but less discuss later.
@forki i'll check later.
Workaround was for crossgen (net46+netcoreapp).
I need to add the use case dotnet pack of console netcoreapp to netcorecli-fsc test suite too :)
Cool. I'll commented it out for now.
Am 05.11.2016 12:58 nachm. schrieb "Enrico Sada"<EMAIL_ADDRESS>
@forki https://github.com/forki i'll check later.
Workaround was for crossgen (net46+netcoreapp).
I need to add the use case dotnet pack of console netcoreapp to
netcorecli-fsc test suite too :)
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
https://github.com/fsprojects/Paket/pull/2004#issuecomment-258607153,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AADgNB1hon8UpyTfG-wHp0teH2IgeWm1ks5q7G9egaJpZM4KprLb
.
https://ci.appveyor.com/project/SteffenForkmann/paket/branch/master
The specified deps.json [C:\Users\appveyor\.nuget\packages\.tools\dotnet-compile-fsc\1.0.0-preview2-020000\netcoreapp1.0\dotnet-compile-fsc.deps.json] does not exist
what does that mean?
<PackageReference Include="FSharp.NET.Sdk">
<Version>1.0.0-alpha-000001</Version>
<PrivateAssets>All</PrivateAssets>
</PackageReference>
what does PrivateAssets mean and is it needed?
It's new of nuget spec for references AFAIK.
check the generated nuspec inside package.
I see Three values: Code(something like that)/analyzer/build.
All mean to restore also build (msbuild targets i think).
I didnt searched more details becuase doesnt matter for me but i think works like that.
If you Explore more and find more info, pls send a link to me too.
Deps.json error dunno why happen. I'll check
|
GITHUB_ARCHIVE
|