From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-03-26 19:33:51
----- Original Message -----
From: "Rene Rivera" <grafik666_at_[hidden]>
> >Vladimir wrote a snazzy
> >python testing system but I never got around to trying to use it. We
> >need a dummy toolset definition for testing purposes. All it would need
> >to do is echo or cat some simple data into files.
> Great! Is that the *.py files in tools/build/test? And I'd be willing
> to contribute some of the test cases I've had to write when encountering
> bugs.
I hope that he and Steve K. will work out the "perfect" build system
testing environment. Wait a coupla days for their conversation to settle
down, I suggest.
> >What do you think needs to be covered in the top-level docs which
> >currently isn't?
> What I'd like to see is a better division between documentation for
> Boost.Build, and the rest of the documentation.
> Since I was looking at the docs tonight, here are some things that I
> noticed:
> - Even though we document the description of main targets, we don't
> really document the various main targets that are possible. One
> description for all of them doesn't cut it from a user's point of view.
> - There are a few non-main targets we don't describe. "unit-test"
> is one that I was unaware of for some time. That's the most glaring;
> there are probably others I don't know of.
That one's really provisional. It should be replaced with the "run" rule
from status/Jamfile. Someone (Joerg) was going to break out the
functionality of status/Jamfile so it could be re-used, but I think he
> - All the python targets, pyd, testing, etc. aren't documented, as far
> as I can tell.
Yeah; that's partly because it's a bit messy. The Python testing code,
again, ought to be integrated with what's in status/Jamfile, but right
now it's working for me and I don't want to rock the boat while we do
> - We describe what features and, briefly, properties are. But we don't
> describe what they all are and what effect they have. The only thing
we do is
> point to the features.jam, which is in no way documented.
True. That's a bigger problem, too. How will features get added to the
system? They may creep in from various toolset/platform/library support
files. In a modular system, how do you document those components?
> - This applies only to V1, but we certainly lack in describing even the
> minimal set of variables we have to set in order to get Boost to build,
> and therefore aren't even close to documenting what all the various
> variables can be set to in order to change the behavior of things. Ones
> that immediately come to mind are GCC*, STLPORT*, and PYTHON*.
I completely agree; however, we could spend weeks documenting all this
stuff, which is going to be thrown out (soon, I hope!).
> From both a user and developer viewpoint, what I'd like to see is docs
> divided into three audiences: users, programmers, developers.
> Users are those writing their own Jamfiles, and nothing else. This group
> needs a good description of all the possible constructs they can make
> use of to do the work, from a production aspect. I consider this the
> cause-and-effect viewpoint: they have an intended effect and want to
> know the cause so they can make use of it.
I think that's a tall order. As you know, the constructs are limitless,
because users can start programming. What we /can/ do for users is show
them the basic usage, and tell them that supplied modules (e.g. for
STLPort support) will add their own capabilities. It would be worth
having a way to get help from the command-line, for example a command
to dump information about how to use the stlport module.
> Programmers are users that are supporting either an extended build
> system or a custom one using Boost.Jam. They want to know some of the
> internals of the system, but usually just enough to get their stuff
> done. For example they would not be interested in the v2 architecture
> doc, but would be interested in the "Internals" doc section we currently
> have.
> Developers, that's us, are the ones doing the base coding. We need to
> know everything, but we also don't need info to be totally digested to
> make use of it. Also in this group would be anyone providing patches for
> new features they've implemented.
> I know that's a lot, and that few of us like writing documentation, but
> we need to do it eventually.
> I'm not suggesting we do this now, that would be just
> very bad
Not even extremely bad? ;-)
> , but thinking about it at minimum. And at best coming up with a
> structure for the documentation would be good.
> And lastly, I think testing and docs are parallel tasks. If we are
> testing something, we should be documenting that something.
I'm ready to start doing that.
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
OPCFW_CODE
|
In my recent blogpost, I discussed my participation in a Quality of Life Roundtable centered around brain tumor patients. The roundtable included healthcare professionals from institutions like the NIH/NCI and tertiary cancer hospitals, as well as patients, caregivers, and advocates. It was a richly rewarding experience, and I hope that it will inspire the allocation of grants to bring about meaningful change.
Today, I had a conversation with a former colleague who is a neurosurgeon. Our discussion revolved around the topics of interoperability, patient data access, and the problem of fragmented information. We touched upon the advancements in other countries that empower patients to have more control over their own data, as well as countries that have better systems for managing data access. During the Quality of Life meeting, I had raised concerns about data sharing, specifically regarding the separation of patient medical records, clinical trial data, and the lack of information for long-term survivors who experience recurrence.
For the majority of patients, caregivers, and families, their interactions primarily revolve around their own hospital and clinical team. They might participate in a clinical trial, where they briefly encounter a trial administrator and possibly other staff members. However, they remain unaware of what goes on behind the scenes, often receiving vague answers to important questions, such as:
- How was the trial designed?
- What data is being collected and why?
- What are the trial endpoints? (For many of us, what do endpoints refer to?)
- What tasks and tools do we need to complete?
- How does the team support us when we’re not in the hospital?
- Will we have access to the study’s results (assuming we are still alive)?
- How long does the study run?
Even if the data and tasks are clearly communicated, let’s imagine an ideal patient experience with digital access, reminders, and other helpful information. The foundations surrounding the trial and its publication are not meant for the eyes of patients and often their care teams. Furthermore, some of this data is managed within a clinical trial system, preventing patients from seeing the answers they provided in their hospital’s medical record portal.
This brings me to the aspect of Real-World Data in my previous post. Now, envision a scenario where electronic medical records (EMRs) and clinical trial data are interconnected. Patients could access information related to their participation in a clinical trial through their hospital’s portal. They would have additional information at their fingertips and hopefully real-time details like the size of the cohort, where they fit within the study’s demographics, and endpoint statistics (e.g., disease progression, overall survival, patient-reported outcomes, adverse events). Taking it a step further, by opting in, patients in the clinical trial could enable further analysis, even years later, allowing the study to request additional grant funding to examine long-term survivors using advanced technologies like germline and somatic DNA whole-genome sequencing.
- Why make patients wait for a study they are part of to be published?
At a minimum, patients should be told when a study they are part of has been published. This simple courtesy is often not extended. Alas, hospitals (in general) are not keen on sharing patient data, let alone telling patients about trials outside their institution or informing patients of a cancer registry for their specific tumor type. We have a long way to go in improving data sharing so that patients feel more empowered and can be active participants in clinical trial research. The glimmer of hope is that the professionals I have met agree there is a problem.
|
OPCFW_CODE
|
Install instructions warning and poetry shell
Trying to deploy locally. I tried exactly following the README, and I also tried doing it inside a conda venv. I ran into a poetry install warning and a problem with the .venv:
john@johns-Air Python % git clone https://github.com/amjadraza/langchain-streamlit-docker-template.git
Cloning into 'langchain-streamlit-docker-template'...
remote: Enumerating objects: 70, done.
remote: Counting objects: 100% (70/70), done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 70 (delta 31), reused 60 (delta 21), pack-reused 0
Receiving objects: 100% (70/70), 196.08 KiB | 1.23 MiB/s, done.
Resolving deltas: 100% (31/31), done.
john@johns-Air Python % cd langchain-streamlit-docker-template
john@johns-Air langchain-streamlit-docker-template % poetry install
Creating virtualenv langchain-streamlit-docker-template in /Users/john/Work/Python/langchain-streamlit-docker-template/.venv
Installing dependencies from lock file
Package operations: 65 installs, 0 updates, 0 removals
• Installing six (1.16.0)
• Installing attrs (23.1.0)
• Installing markupsafe (2.1.2)
• Installing mdurl (0.1.2)
• Installing numpy (1.24.3)
• Installing packaging (23.1)
• Installing pyrsistent (0.19.3)
• Installing python-dateutil (2.8.2)
• Installing pytz (2023.3)
• Installing smmap (5.0.0)
• Installing tzdata (2023.3)
• Installing certifi (2023.5.7)
• Installing charset-normalizer (3.1.0)
• Installing decorator (5.1.1)
• Installing entrypoints (0.4)
• Installing frozenlist (1.3.3)
• Installing gitdb (4.0.10)
• Installing idna (3.4)
• Installing jinja2 (3.1.2)
• Installing jsonschema (4.17.3)
• Installing markdown-it-py (2.2.0)
• Installing marshmallow (3.19.0)
• Installing multidict (6.0.4)
• Installing mypy-extensions (1.0.0)
• Installing pandas (2.0.2)
• Installing pygments (2.15.1)
• Installing toolz (0.12.0)
• Installing typing-extensions (4.6.2)
• Installing urllib3 (2.0.2)
• Installing zipp (3.15.0)
• Installing aiosignal (1.3.1)
• Installing altair (4.2.2)
• Installing async-timeout (4.0.2)
• Installing blinker (1.6.2)
• Installing cachetools (5.3.1)
• Installing click (8.1.3)
• Installing gitpython (3.1.31)
• Installing importlib-metadata (6.6.0)
• Installing marshmallow-enum (1.5.1)
• Installing pillow (9.5.0)
• Installing protobuf (3.20.3)
• Installing pyarrow (12.0.0)
• Installing pydantic (1.10.8)
• Installing pydeck (0.8.0)
• Installing pympler (1.0.1)
• Installing rich (13.3.5)
• Installing requests (2.31.0)
• Installing tenacity (8.2.2)
• Installing toml (0.10.2)
• Installing tornado (6.3.2)
• Installing typing-inspect (0.9.0)
• Installing tzlocal (5.0.1)
• Installing validators (0.20.0)
• Installing yarl (1.9.2)
• Installing aiohttp (3.8.4)
• Installing dataclasses-json (0.5.7)
• Installing numexpr (2.8.4)
• Installing openapi-schema-pydantic (1.2.4)
• Installing pyyaml (6.0)
• Installing sqlalchemy (2.0.15)
• Installing streamlit (1.22.0)
• Installing tqdm (4.65.0)
• Installing langchain (0.0.184)
• Installing openai (0.27.7)
• Installing streamlit-chat (<IP_ADDRESS>)
/Users/john/Work/Python/langchain-streamlit-docker-template/langchain_streamlit_docker_template does not contain any element
john@johns-Air langchain-streamlit-docker-template % poetry shell
Virtual environment already activated: /Users/john/Work/Python/langchain-streamlit-docker-template/.venv
john@johns-Air langchain-streamlit-docker-template %
I did not try it inside a conda environment, but it should work. Can you try deactivating and activating it again? poetry shell sometimes misbehaves; I had this issue too. However, I can see your environment is activated.
I tried it in no venv, totally fresh and got the same result. However, I was able to use the Docker build / compose up and get it running on my local machine.
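For what it's worth, the "does not contain any element" message usually means Poetry found no importable package in the project directory, so it skipped installing the project itself (the dependencies still install). Two possible workarounds, assuming the directory really is empty: run `poetry install --no-root` to skip installing the project package, or give the directory a package marker, e.g.:

```shell
# Hypothetical fix: add an (empty) package marker so Poetry can install
# the project itself without the "does not contain any element" warning.
mkdir -p langchain_streamlit_docker_template
touch langchain_streamlit_docker_template/__init__.py
```

This is only a sketch; the Docker build/compose route sidesteps the issue entirely, which is consistent with it working for you.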
|
GITHUB_ARCHIVE
|
#include <CGLApp.h>
#include <CGLWindow.h>
#include <CGLRenderer3D.h>
#include <CMathRand.h>
#include <CStrUtil.h>

class CGLDragon3DWindow : public CGLWindow {
 private:
  CGLRenderer3D *renderer_;
  double         qval_;

 public:
  CGLDragon3DWindow(int x, int y, uint w, uint h);

  void setQ(double q) { qval_ = q; }

  void setup();

  void createList();

  void drawPoint(double x, double y, double z);

  bool resizeEvent();
  bool exposeEvent();

  bool buttonPressEvent  (const CMouseEvent &event);
  bool buttonMotionEvent (const CMouseEvent &event);
  bool buttonReleaseEvent(const CMouseEvent &event);

  bool keyPressEvent(const CKeyEvent &event);
};

//---------

int
main(int argc, char **argv)
{
  CGLAppInst->init(argc, argv);

  CGLDragon3DWindow *dragon = new CGLDragon3DWindow(0, 0, 400, 400);

  // optional command-line argument overrides the imaginary part of the
  // iteration parameter
  if (argc == 2) {
    double q;

    if (CStrUtil::toReal(argv[1], &q))
      dragon->setQ(q);
  }

  dragon->setup();

  CGLAppInst->mainLoop();

  return 0;
}

//---------

CGLDragon3DWindow::
CGLDragon3DWindow(int x, int y, uint w, uint h)
{
  qval_ = 0.967;

  init(x, y, w, h);
}

void
CGLDragon3DWindow::
setup()
{
  renderer_ = new CGLRenderer3D(this);

  addControl();
}

void
CGLDragon3DWindow::
drawPoint(double x, double y, double z)
{
  //glPointSize(2.0);

  renderer_->drawPoint(x, y, z);
}

bool
CGLDragon3DWindow::
resizeEvent()
{
  return true;
}

bool
CGLDragon3DWindow::
exposeEvent()
{
  static bool has_list = false;

  renderer_->clear(CRGBA(0,0,0));

  renderer_->setForeground(CRGBA(1,1,1));

  // build the display list once, then just replay it on each expose
  if (! has_list) {
    createList();

    has_list = true;
  }

  glCallList(1);

  return true;
}

bool
CGLDragon3DWindow::
buttonPressEvent(const CMouseEvent &)
{
  return true;
}

bool
CGLDragon3DWindow::
buttonMotionEvent(const CMouseEvent &)
{
  return true;
}

bool
CGLDragon3DWindow::
buttonReleaseEvent(const CMouseEvent &)
{
  return true;
}

bool
CGLDragon3DWindow::
keyPressEvent(const CKeyEvent &)
{
  return true;
}

void
CGLDragon3DWindow::
createList()
{
  glNewList(1, GL_COMPILE);

  uint num_iterations = 5000;

  // sweep the parameter k; the gray level and the z coordinate both
  // encode the position within the sweep
  double kmin = -3.0;
  double kmax =  3.0;
  double kd   =  0.1;

  for (double k = kmin; k <= kmax; k += kd) {
    double g = (k - kmin)/(kmax - kmin);

    renderer_->setForeground(CRGBA(g, g, g));

    double x = 0.500001;
    double y = 0;

    // derive the iteration coefficients (p, q) from (k, qval_)
    double mag, q;

    if (qval_ == 0.0) {
      mag = 1;
      q   = 4*sqrt(1 - k*k);
    }
    else {
      mag = k*k + qval_*qval_;
      q   = -4*qval_/mag;
    }

    double p = 4*k/mag;

    for (uint i = 0; i < num_iterations; ++i) {
      // complex multiply (x + i*y) by (p + i*q)
      double tx = x*p - y*q;

      y = x*q + y*p;

      double ty = y;

      x = 1 - tx;

      // complex square root via the half-angle formulas
      mag = sqrt(x*x + y*y);

      y = sqrt((-x + mag)/2);
      x = sqrt(( x + mag)/2);

      if (ty < 0)
        x = -x;

      // inverse iteration: pick one of the two square roots at random
      int b = CMathRand::randInRange(0, 1);

      if (b) {
        x = -x;
        y = -y;
      }

      x = (1 - x)/2;
      y = y/2;

      double z = p/2;

      // skip the first few iterates so the orbit settles onto the
      // attractor before plotting
      if (i > 20)
        drawPoint(x, y, z);
    }
  }

  glEndList();
}
|
STACK_EDU
|
Dr Peter Garraghan, Reader in Distributed Systems
Machine Learning systems, Sustainable computing, ML security, Cloud datacenters.
(Sept 2022): I have multiple positions available:
- Postdocs: Green computing, ML security.
If you have experience or a strong interest in experimental systems research, kindly contact me via email to discuss further.
Peter Garraghan is a Reader (US Full Professor equiv.) and EPSRC Fellow in Distributed Systems. His research expertise is empirically studying and designing high performance, resilient, and sustainable distributed systems at scale (Cloud datacenters, Deep Learning systems, core network infrastructure) in the face of societal and environmental change. His research places strong emphasis on conducting analysis, design, and evaluation via experimentation on systems both in the laboratory and in production.
Peter has published over 50 articles, has industrial experience building large-scale production distributed systems, and has worked and collaborated internationally with the likes of Alibaba Group, Microsoft, BT, STFC, CONACYT, and the UK datacenter and IoT industry.
He is the recipient of the prestigious EPSRC Early-career Fellowship (2021 - 2025), and his research on sustainable computing and future AI infrastructure has been featured in the media, including the BBC and the Daily Mail.
- Distributed Systems, Deep Learning systems, Cloud datacenters
- Sustainable & energy-efficient computing at scale
- CPU & GPU cluster resource management
- Systems security & resiliency
Ph.D. in Computer Science (University of Leeds, UK)
PhD Supervision Interests
I am happy to explore and supervise topics within distributed systems, machine learning systems, cloud computing, energy, resource scheduling, security, and dependability research. If you have your own ideas for a research project you would like to pursue, feel free to contact me to discuss further. I currently have multiple fully funded positions open.
MSI: Ultralow-power, Non-volatile, Random Access Memory Arrays for Data centers and Space Applications (ULTRARAM)
01/05/2022 → 30/06/2024
SL: PINCH: An End-to-end Cyber Security Technology to Better Understand the Risks of Deep Learning Model Stealing
01/04/2022 → 31/07/2022
ICASE: Sustainable Workload Scaling in Distributed Network Infrastructure (Matthew Hodkin)
01/10/2021 → 30/09/2025
Reducing the Global ICT Footprint via Self-adaptive Large-scale ICT Systems
01/06/2021 → 31/05/2025
Future Places: A Digital Economy Centre on Understanding Place Through Pervasive Computing
01/10/2020 → 30/09/2025
Investigation of intelligent Controller for indirect Heaters on Gas Networks
01/06/2018 → 15/07/2018
Pin the Tail: Understanding Straggler Manifestation in Internet-based Distributed Systems
01/09/2017 → 30/11/2019
Security Lancaster, Security Lancaster (Systems Security)
FST Sustainability Advisory Committee, Materials Science Institute PhD Student, MSF PhDs Cohort 2 (2019/20)
- DSI - Foundations
- Fundamentals of IRAS
- Lancaster Intelligent, Robotic and Autonomous Systems Centre
- MSF Supervisors 2019/20
- SCC (Distributed Systems)
- Security Lancaster
- Security Lancaster (Distributed Systems and CPS)
- Security Lancaster (Software Security)
- Security Lancaster Secure (Machine Learning and Intelligence)
|
OPCFW_CODE
|
Interest rate to use in black scholes when rates of borrowing and lending are different
I was reading Option Volatility and Pricing by Sheldon Natenberg, and he talks about interest rates and which interest rate to feed to the model. Here is the paragraph from the book (chapter 5):
"The situation is further complicated by the fact that most traders do not borrow and lend at the same rate, so the correct interest rate will, in theory, depend
on whether the trade will create a credit or a debit. In the former case, the trader
will be interested in the borrowing rate; in the latter case, he will be interested in
the lending rate. However, among the inputs into the model—the underlying
price, time to expiration, interest rates, and volatility—interest rates tend to play
the least important role. Using a rate that “makes sense” is usually a reasonable
solution. Of course, for very large positions or for very long-term options, small
changes in the interest rate can have a large impact. But for most traders, getting
the interest rate exactly right is usually not a major consideration."
I am concerned with the statement "In the former case, the trader
will be interested in the borrowing rate; in the latter case, he will be interested in
the lending rate.".
I believe, if I have a credit, I would lend it out to someone, so my rate of interest would be the lending rate and vice versa in the case of debit.
Am I missing something here?
I think a possible resolution of this confusion is that I need to look at the riskless portfolio used in the Black-Scholes derivation and at the rate we expect it to grow at (by arbitrage arguments).
Is my second interpretation correct or am I misinterpreting the author completely?
The statement in Natenberg's book might be causing confusion due to the specific use of "credit" and "debit" in the context of option trading. Here's a clarification: A credit in option trading doesn't mean someone owes you money. It refers to the net cash inflow you receive when selling an option. This inflow comes with the obligation to potentially buy or sell the underlying asset later. To cover this, you might need to borrow, hence the interest in the borrowing rate. Similarly, a debit isn't a debt you owe. It's the net cash outflow when buying an option. This gives you the right, but not the obligation, to buy or sell the underlying asset. If you have extra cash, you could lend it out, hence the interest in the lending rate.
Your intuition about lending out a credit is valid in a general financial sense, but in option trading, the terms have a different connotation. The focus is on the cash flow's direction due to the trade, not the existence of an actual debt or receivable.
Your second interpretation is on the right track. The Black-Scholes model hinges on the concept of a riskless portfolio. This portfolio should grow at the risk-free rate due to arbitrage arguments. If it doesn't, traders would exploit the opportunity for riskless profit, pushing the option price back to its "fair" value.
Therefore, the relevant interest rate in the model depends on the cash flow from the option trade: If there is a net inflow (credit from selling), the relevant rate is the borrowing rate since you could use this inflow to offset borrowing costs. If there is a net outflow (debit from buying), the relevant rate is the lending rate since you forgo potential interest income by using the cash to buy the option.
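To make the rate sensitivity concrete, here is a small sketch (not from the book) that prices the same at-the-money European call once with an assumed borrowing rate and once with an assumed lending rate; all numbers are illustrative assumptions, not market data:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative (assumed) rates: borrow at 5%, lend at 3%.
S, K, T, sigma = 100.0, 100.0, 1.0, 0.20
c_borrow = bs_call(S, K, T, 0.05, sigma)
c_lend   = bs_call(S, K, T, 0.03, sigma)
print(round(c_borrow, 2), round(c_lend, 2))
```

For this one-year at-the-money option the two-point rate spread moves the price by roughly ten percent, which squares with Natenberg's remark that the choice of rate matters mainly for large or long-dated positions.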
When you say net cash inflow or outflow, do you consider the money spent/earned on the underlying to establish the risk less portfolio as well, or is it just the money you get from the option trade only?
|
STACK_EXCHANGE
|
Enable off-main-thread rendering on macOS
In https://github.com/servo/webrender/issues/1640 we discussed how Firefox
(and probably also other browsers) have a compositor thread, which is not the
main thread, that does all their OpenGL calls. In glutin this kind of
architecture doesn't seem to be possible right now since it's not possible to
split the GlWindow into the window and the context, and the Context
doesn't implement Send on macOS.
The following two changes allow such architectures:
Implement Send for Context on macOS. The Apple docs suggest that this is correct.
Add a split method to GlWindow that allows obtaining an owned Window and Context.
cc @mitchmindtree @mstange @glennw
I don't like this change as it is, because destroying the Window before the Context (which you could do with this split method) is a really bad idea.
Interesting, that makes sense. I'll see if I can think of some way to use the borrow checker to fix that, if possible.
Okay here's a sketch of my new idea, I'm not sure when or if I'll get around to fully implementing it though:
trait RenderThread {
    type Message;
    type State;

    // these are both called only on the render thread

    /// sets up state that needs the context, and probably sets the context
    /// as active for this thread
    fn init(&mut self, ctx: &Context) -> Self::State;

    /// called by the render thread event loop on getting an mpsc message
    fn got_message(&mut self, ctx: &Context, state: &mut Self::State, msg: Self::Message);
}

struct ConcurrentGlWindow<R: RenderThread> { ... }

enum RenderThreadMsg<T> {
    // handled by the thread's loop; drops the context and shuts down the thread
    Close,
    // calls got_message with the contained message
    Inner(T),
}

impl<R: RenderThread> ConcurrentGlWindow<R> {
    // this is the main loop of the render thread, called from a thread::spawn somewhere
    pub fn render_thread_loop(mut render_thread: R, ctx: Context,
                              rcv: mpsc::Receiver<RenderThreadMsg<R::Message>>) {
        // 1. do context initialization that has to be done on the render thread, if any
        ctx.do_setup_stuff();

        let mut state = render_thread.init(&ctx);

        for msg in rcv.into_iter() {
            match msg {
                RenderThreadMsg::Close => break,
                RenderThreadMsg::Inner(m) => render_thread.got_message(&ctx, &mut state, m),
            }
        }
    }

    // enables the main thread to send messages to the render thread, like new frame data
    pub fn sender(&self) -> &mpsc::Sender<RenderThreadMsg<R::Message>> { ... }

    // on platforms that don't support off-main-thread rendering, this manually
    // ticks the render thread event loop on the main thread; when there is a
    // separate thread, it does nothing
    pub fn tick(&self) { ... }

    pub fn window(&self) -> &Window { ... }
}

impl<R: RenderThread> Deref for ConcurrentGlWindow<R> { type Target = Window; ... }

impl<R: RenderThread> Drop for ConcurrentGlWindow<R> {
    fn drop(&mut self) {
        self.sender.send(RenderThreadMsg::Close);
        self.thread_join_handle.join();
    }
}
Thoughts?
Theoretically, the correct and easiest fix would be to implement Sync on Context and !Sync on Window, so that the reference grabbed by calling gl_window.context() can be sent to another thread.
However you are probably going to run into lifetime issues when trying to use this design in practice.
@tomaka implementing Sync on Context would be partially correct, but I don't think it would be fully correct.
On macOS, each thread has its own OpenGL context, and you are only supposed to do calls on that context from one thread at a time. You can do it from multiple threads, but only if you synchronize them. If Context was Sync you could activate it on multiple threads and do concurrent OpenGL calls with nothing stopping you. With ConcurrentGLWindow you can only ever activate the OpenGL context on the render thread, which stops you from doing unsynchronized OpenGL calls.
Also the ConcurrentGLWindow approach works on both platforms that do and don't support off-main-thread OpenGL with the same code, as long as you call .tick() in your event loop.
> On macOS, each thread has its own OpenGL context, and you are only supposed to do calls on that context from one thread at a time. You can do it from multiple threads, but only if you synchronize them. If Context was Sync you could activate it on multiple threads and do concurrent OpenGL calls with nothing stopping you. With ConcurrentGLWindow you can only ever activate the OpenGL context on the render thread, which stops you from doing unsynchronized OpenGL calls.
That's the reason why the methods on the GlContext trait and the OpenGL functions themselves are both unsafe. Glutin will never protect you against this one-context-per-thread thing.
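For reference, the thread-confinement pattern behind the ConcurrentGlWindow sketch can be shown with plain std primitives. This is a minimal, glutin-free illustration (FakeContext is a hypothetical stand-in, not a real GL context): the "context" lives only on the render thread, and the main thread can only talk to it through an mpsc channel, so unsynchronized calls on it are impossible by construction.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a !Sync OpenGL context: it is created on the render
// thread and never leaves it, so no call synchronization is needed.
struct FakeContext { frames: u32 }

enum Msg { Draw, Close }

// The render thread's event loop: owns the context, drains the channel.
fn render_loop(rx: mpsc::Receiver<Msg>) -> u32 {
    let mut ctx = FakeContext { frames: 0 };
    for msg in rx {
        match msg {
            Msg::Draw  => ctx.frames += 1, // all "GL calls" happen here
            Msg::Close => break,
        }
    }
    ctx.frames
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || render_loop(rx));

    // The main thread only sends messages; it never touches the context.
    tx.send(Msg::Draw).unwrap();
    tx.send(Msg::Draw).unwrap();
    tx.send(Msg::Close).unwrap();

    assert_eq!(handle.join().unwrap(), 2);
}
```

Sending Close (or just dropping the sender) ends the loop and drops the context on the render thread, mirroring the Drop impl in the sketch.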
|
GITHUB_ARCHIVE
|
Doctoral Dissertation Defense: Chen Lai
Advisors: Drs. Florian Potra and Susan Minkoff (UT Dallas)
Wave propagation through air and ground is studied for two different phenomena: (i) Nearly Perfectly Matched Layer Boundary Conditions for Operator Upscaling of the Acoustic Wave Equation and (ii) Modeling of Air Platform Detection Methodology Based on the Human Auditory System.
(i) Acoustic imaging and sensor modeling are processes that require repeated solution of the acoustic wave equation. Solution of the wave equation can be computationally expensive and memory intensive for large simulation domains. One scheme for speeding up solution of the wave equation is the operator-based upscaling method. The algorithm proceeds in two steps. First, the wave equation is solved for fine grid unknowns internal to coarse blocks assuming the coarse blocks do not need to communicate with neighboring blocks in parallel. Second, these fine grid solutions are used to form a new problem which is solved on the coarse grid. Accurate and efficient wave propagation schemes also must avoid artificial reflections off of the computational domain edges. One popular method for preventing artificial reflections is the Nearly Perfectly Matched Layer (NPML) method. In this paper we discuss applying NPML to operator upscaling for the wave equation. We show that although we only apply NPML to the first step of this two step algorithm (directly affecting the fine grid unknowns only), we still see a significant reduction of reflections back into the domain. We describe three numerical experiments (one homogeneous medium experiment and two heterogeneous media examples) in which we validate that the solution of the wave equation exponentially decays in the NPML regions. Numerical experiments of acoustic wave propagation in two dimensions with a reasonable absorbing layer thickness resulted in a maximum pressure reflection of 3–8%. While the coarse grid acceleration is not explicitly damped in our algorithm, the tight coupling between the two steps of the algorithm results in only 0.1–1% of acceleration reflecting back into the computational domain.
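The absorbing-layer idea can be illustrated in miniature. The sketch below is deliberately much simpler than the dissertation's method: it uses a plain damping "sponge" layer on a 1-D second-order finite-difference wave equation, not NPML and not operator upscaling, and every parameter value is an illustrative assumption. It measures the same quantity discussed above, namely how much of an outgoing pulse reflects back into the interior of the computational domain.

```python
import numpy as np

def run_wave(sponge_width=0, sigma_max=40.0, n=200, steps=600):
    """1-D wave equation u_tt = u_xx with Dirichlet walls and an optional
    absorbing sponge layer (a crude stand-in for PML-style boundaries)."""
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx                        # CFL-stable time step (c = 1)
    x  = np.linspace(0.0, 1.0, n)

    # damping profile: zero in the interior, quadratic ramp inside the layer
    sigma = np.zeros(n)
    if sponge_width:
        ramp = (np.arange(sponge_width) / sponge_width) ** 2
        sigma[:sponge_width]  = sigma_max * ramp[::-1]
        sigma[-sponge_width:] = sigma_max * ramp

    u  = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian pulse, zero velocity
    up = u.copy()                            # previous time level
    r2 = (dt / dx) ** 2

    for _ in range(steps):
        un = np.zeros(n)                     # walls stay pinned at zero
        un[1:-1] = (2 * u[1:-1] - up[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
                    - sigma[1:-1] * dt * (u[1:-1] - up[1:-1]))
        up, u = u, un

    # largest amplitude left in the interior after the pulse hit the walls
    return float(np.abs(u[n // 4 : 3 * n // 4]).max())
```

Comparing `run_wave(sponge_width=0)` with `run_wave(sponge_width=40)` shows the reflected amplitude dropping substantially once the layer is enabled; the NPML experiments in the dissertation quantify the analogous effect at the 3-8% level for pressure.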
(ii) We develop a general framework for the aural detection of air platforms. We consider air platform noise and ambient sound information in our formulation as well as physiological characteristics of the human auditory system to provide the probability of acquisition with respect to range. The primary focus is sound propagation in a heterogeneous domain with realistic atmospheric conditions, which can be prohibitively expensive in computation. In order to minimize the computational cost, we formulate a mixed numerical method in a modified finite difference scheme with the NPML method for the Helmholtz equation. We adapt the idea of using Green’s function in the Helmholtz equation formulation in order to implement a domain decomposition scheme. We describe the subgrid unknowns only based on the coarse block unknowns by using the Green’s functions. We formulate a linear system of the coarse block unknowns to solve. In the coarse block problem, we incorporate the forcing function or specific air platform signal and solve for the subgrid unknowns. We can easily update the forcing function for various acoustic signatures in the coarse block problem. The numerical case studies and application examples demonstrate the efficient performance and long time stability of our formulation for unbounded domain problems. Numerical experiments resulted in a maximum relative error in amplitude of 2–4%. Ollerhead’s aural detection model serves as the foundation of aural detection in our study. A collection of air platform signatures in various environmental conditions has been produced. This study also provides an algorithm scheme that can be integrated with Defense combat simulations.
|
OPCFW_CODE
|
Platform for Creation, Distribution, Discovery and Management of APIs focused on developers
Yappes Technologies Private Limited announced the release of the latest version of Yappes, its go-to global platform for the creation, distribution and management of Application Program Interfaces (APIs). Yappes is a secure, cloud-based, domain-agnostic Backend as a Service (BaaS) platform which allows users and developers to reach a worldwide ecosystem through an online marketplace. Yappes offers cutting-edge technologies and a suite of tools for developers to create APIs within minutes, saving hours of tedious coding. More importantly, it helps leverage APIs to enhance revenues and supports any business’s digital strategy. In today’s business scenario, APIs are the most important and critical link through which applications and software communicate with one another. However, creating and managing an API is complex, cumbersome and expensive. The Yappes Ver 3.0 platform enables users to manage the entire API lifecycle effectively, efficiently and in a simple, easy-to-use process.
Yappes provides flexibility for developers and providers to write their own business logic and connect with their remote data silos for the APIs. The platform is built by developers for developers, and fosters the creation of robust, complete and comprehensive APIs. Yappes can help design, develop, test and release APIs for distribution, and provides detailed analytics, monitoring and monetization services. It helps users find and use APIs quickly, and helps provider-developers easily create, share and monetize them. It is a revenue-generating prospect for developers, providers and users.
– Cloud Platform providing BaaS for APIs.
– Live APIs are accessible within a matter of minutes across different development phases (dev, sandbox and production).
– Remote connect to MySQL and Mongo database silos over SSH.
– Migrate and save the data within the application as Store/Collections.
– Enhanced security where all transactions are authenticated and authorized.
– Maintain and Transition each Endpoint across the development cycles.
– Monitor through a single window all APIs created and their usage.
– Easy integration through readily accessible multiple Libraries/SDK.
– Try-out or Test the APIs on the platform, prior to integration.
– Negotiate pricing and terms in private deal rooms between the API provider and consumer with dynamic pricing support.
About Yappes Technologies Private Limited:
Yappes Technologies, an Indian company founded by Bhanu K Jain (CEO) and Rajagopal Somasundaram (CTO) is young, promoted by technophiles and seasoned promoters and has exponential growth prospects. It was incubated by TSI in 2016 November. The company is selected as one among the top 10 global startups by Eye for Travel, Las Vegas.
The development was reported by prnewswire.co.in
|
OPCFW_CODE
|
Can't edit or select anything in document after opening it in Microsoft Word unless I reopen it
I've got a client I'm working with that is having an error I've never seen before, and I can't seem to find anything about it. Here's the issue:
Client opens an Office Word file from their server and it opens. However, when they attempt to select anything in said file or edit it in any way, they can't. I tried and verified that the file isn't frozen. You can open up multiple files from the server and all documents have the same problem. So far as I know this is the only computer that is currently having the issue. Other people in the same office haven't complained about this problem.
I tried closing the file and having her re-open it, and the problem went away. This is an intermittent problem for her. Has anyone seen this?
Define "can't". Your title states "error"; what is it, specifically?
Well that's the thing, I don't have an error code or anything like that to go off of. More specifically, I can open the Word document in question and it displays normally, but I can't do anything at all to the file. In fact, I have to close it via task manager, but according to task manager Word isn't frozen. I verified this because you can see the mouse icon blinking in the Word file, but you can't do anything to said file.
Sorry I know that's a bit confusing. Does that make sense?
Edit your question to include this information. Is this the only PC that has issues editing Word files on the server?
Did you try Repair an Office application?
I didn't for Word yet. She seems to be having this problem with Word and perhaps Excel, but not in Outlook. I had removed a problematic update for Outlook that was causing it to open in safe mode, but she doesn't have any other issues with that application.
consider reinstalling word as an option.
Yeah I'll give that a shot and see what happens. Thank you for the suggestion. I take it no one has seen this sort of behavior in Word?
@JoshMcMullin Perhaps I might've seen something like it when it wasn't activated or something but i'm not sure and I don't recall about reopening it, or even if that was a symptom of that. BTW, the word "pc" is old fashioned. And avoid the word "they" if it's ambiguous, it's clearer to say what is meant by "they" So i've edited your question appropriately. Your question would've got a -1 because of the undescriptive title. Imagine if a user said to you "weird ms word problem" that doesn't say much, does it.
I believe the error was specifically tied to corrupted Office files. Thanks to all for the suggestions.
Thanks for closing the loop on your question. It might enhance the usefulness to others if you can expand your answer a little. What makes you think that? What did you do to solve it?
|
STACK_EXCHANGE
|
I've been testing a lot of bridges lately and have to say that most have greatly improved upon the convoluted and often frustrating setup process that has been the hallmark of these devices. Sad to say, the WUMC710's setup is more old school than new if you don't opt to set it up using the WPS pushbutton.
The 710 will grab an IP address if it detects a DHCP server plugged into any of its LAN ports. But since you would normally plug the device you are trying to connect into it (which won't have a DHCP server), it will set itself to a default IP of 10.100.1.1. This means you'll need to set your device IP to something in the same range (like 10.100.1.10) to connect to the admin pages and finish the setup.
Alternatively, you can plug the 710 into your LAN router, let it grab an IP, check your router's DHCP list for the 710's IP address, use that IP to access the server and set a static IP so that you don't lose it.
Compared to the captive-portal techniques I've been seeing on the other more recent-vintage N bridges I've been testing, setting up the 710 was much more difficult. Here's a shot of the login landing screen showing the classic Linksys admin look.
Basic Setup screen
I've put screenshots of the other admin screens in the gallery below.
The WUMC710 functions only as a wireless bridge, not an AP and not a repeater. So the feature set is pretty simple:
- Static and dynamic IP for bridge IP
- WEP, WPA / WPA2 Personal wireless security
- Wi-Fi Protected Setup (WPS) support, pushbutton and PIN
- Wireless Network Site survey
- HTTPS admin access
- Can't dim or shut off LEDs
- No transmit power adjust
I used our new test process, which is also used for testing wireless adapters, to test the WUMC710. The bridge had its original v1.0.01 firmware loaded because there have been no updates. The bridge was placed so that its left side (viewed from the indicator end of the product) faced the test chamber antennas, which were 8" away. The ASUS RT-AC66U reference router was set to Channel 153 and Auto 20/40/80 MHz bandwidth mode.
The Benchmark Summary shows the averages of all the measurements made in both directions. Note the distinct imbalance between up and downlink.
As I mentioned earlier, I have tested only one other dedicated AC1300 bridge, the WD My Net AC Bridge. So both are included in the throughput vs. attenuation plots below.
5 GHz downlink throughput vs. attenuation
The WUMC710 is clearly outperformed by the WD bridge, especially on uplink.
Given its rather disappointing performance, the only thing the WUMC710 has going for it is its price. If $50 is all you have in your AC bridge budget, the Linksys is the only game in town. But if you're willing to goose your budget to $60 or so and don't mind buying a discontinued product, the WD My Net AC Bridge is the better performer of the two.
But given these two uninspiring choices, I suspect most of you will look for sales on your favorite AC router if you need a bridge.
|
OPCFW_CODE
|
- Spaces in directory names -- ugly!!
- Posted on July 31, 2004, 5:25 pm
any similar posting.
Been developing pages for a few months, getting up to speed in
Working on my machine (Win2K, various editors -- not page development
tools, like to get my hands dirty in the code), viewing the pages in
IE6 from the local directory for testing, etc. Working in C:\My
Documents\Dev, already knew and was using "%20" rather than the space
in "My Directory" in the addresses. No problem.
Needed pages -- FAQs, actually -- where the links to each would
include anchor refs, as in www.faqpage.html#ans20. Not complicated,
Could NOT get them to work. Stripped it down to a test page with
everything removed but a bunch of text blocks and the anchors -- no
the IE opening of the local-file URL with the hash anchor value will
NOT scroll the page to that anchor.
Found examples on the Web of working anchor refs. Captured the
source, created one such page locally, in C:\Temp. It works just
Strip it to match my non-working version. Still works fine.
Light starts to dawn. Copy it to my development directory, C:\My
Documents\Dev. IT STOPS WORKING! Page loads, but does not go to the
anchor. Whether I cite it with the space or the %20, it will not
Am I really the first to discover that, when testing pages locally, do
NOT have them sitting in a directory where the complete address string
contains spaces, even if you replace the spaces with %20?
Time to start drinking . . .
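The encoding rules here can be checked mechanically. This small Python sketch (the path is hypothetical, standing in for the poster's C:\My Documents\Dev) shows how a Windows path containing spaces maps to a proper file: URI, and that the fragment is appended after the encoded path rather than being part of it:

```python
from pathlib import PureWindowsPath

# A hypothetical local page in a directory whose name contains a space.
page = PureWindowsPath(r"C:\My Documents\Dev\faqpage.html")

# as_uri() converts backslashes to slashes and percent-encodes the space.
uri = page.as_uri()

# The fragment identifier is appended to the encoded URI; it never
# belongs to the file path itself.
uri_with_anchor = uri + "#ans20"
```

Whether a given browser then honors the fragment on a file: URI is a separate question (and, per the thread, IE6 apparently did not when the path contained spaces), but getting the URI itself right is the necessary first step.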
Re: Spaces in directory names -- ugly!!
> [...] as in www.faqpage.html#ans20. Not complicated, right?
I'm not sure; even if we wildly guess that this is supposed to be some
kind of host name, it looks complicated enough to me to resolve the TLD
'html' (and a fragment identifier usually addresses a file, not a host).
> Am I really the first to discover that, when testing pages locally, do
> NOT have them sitting in a directory where the complete address string
> contains spaces, even if you replace the spaces with %20?
Who knows (or even cares)? *Testing* webpages locally requires running
a webserver locally (this is not the place to learn how to do that, BTW,
or for inquiries about file system conventions of one or the other
Re: Spaces in directory names -- ugly!! -- correction to typos
"www.somedomain.com/faqpage.html#ans20". And my testing technique
starts with a simple double-click on the file name in the directory
(Windows Explorer) which opens it in my browser, then I add the
"#ans20" and Go; from then on, just keep refreshing as I make
changes. (Except that, once in a while, changing the base file results
in the IE6 window that cites it closing -- but not always. Gave up
trying to figure out why it sometimes closes, sometimes not -- on the
same page/file name. And sometimes the refresh doesn't take unless I
erase the Internet cache . . .) Once it's working that way, then it
gets uploaded to my server, checked out in various browsers.
Oh, yes, that should read replacing the space in "My Documents", not
"My Directory", with "%20".
Too much frustration venting while typing . . .
|
OPCFW_CODE
|
What are the options for audio equalizers in Ubuntu 15.10?
I have looked around but only found discontinued projects. What audio equalizers can be used in 15.10?
The PulseAudio-Equalizer is the best for the Unity and Gnome desktop environments. It has a 15-band graphic equalizer with many presets available, and it's completely integrated with the audio indicator in the system tray of Ubuntu 15.10 Wily Werewolf, as well as with VLC media player and Popcorn Time.
To install, just follow the steps below:
sudo add-apt-repository ppa:nilarimogard/webupd8
sudo apt-get update
sudo apt-get install pulseaudio-equalizer
Just as a clarification about using the PulseAudio-Equalizer from ppa:nilarimogard/webupd8 I spent a while trying to figure out how to open the equalizer after installing it. There's an included GUI, not sure if it's accessible from the terminal, but you can definitely get to it by searching for "PulseAudio Equalizer" (press the super key to open the search, and type "PulseAudio Equalizer").
Don't worry. The installation automatically creates a menu item in the Launcher for you, at /usr/share/applications/pulseaudio-equalizer.desktop. It works for both the Unity and Gnome Shell desktop environments. You can drag the icon from the Launcher and drop it on the Dash for easier access.
Too bad I can't use this. I am not allowed to add repositories because security audits would then be required to audit the foreign repositories, too. Too many computers involved to cause one to take up so much of their time, so that's that.
There are 2 PulseAudio EQs I know of: qpaeq and pulseaudio-equalizer (ladspa-sink).
The issue I ran into with PulseAudio EQs is that they tend to introduce audio latency and crackling/popping sounds when starting/quitting applications on my hardware (Xonar DX).
The best solution I came up with is to use the JACK audio server that is used for professional audio production on Linux and put that between PulseAudio and ALSA (the hardware connection). This allows for various EQ modules to be applied on a low-latency basis while still keeping the PulseAudio interface for your applications, so you don't have to adjust them in any way.
It is a rather non-invasive approach; you can give it a try using my guide here: https://github.com/M4he/Linux/tree/master/JACK/PA_through_JACK
This is 300% what I was looking for. Worked on the 1st try on elementary OS (Ubuntu 16.04). Any catch to using an EQ with more bands?
@RuiMarques I don't think more bands would give any real benefit on a non-pure JACK setup. At most you are getting increased latency or higher load I'd guess. You can choose between an 8, 12 or even 30 band EQ with calf though, so feel free to play around.
Interesting, but again I run into the security audit problem. Oh, well.
This sounds exactly the correct way forward for Ubuntu audio. Is there any reason Ubuntu doesn't configure things like this by default?
|
STACK_EXCHANGE
|
Trusting the user
Friends trust each other. If a site is going to be friendly, it's good
to trust the user.
But where do you draw the line of letting them take responsibility for
their own actions?
Sometimes a particular user action
might have serious results, e.g. deleting data.
How do you decide when to step in and
help the user be sure they know what they're doing?
How much help is too much?
Example of too little trust
I'm using a work laptop running Windows 2000.
If I create a folder in CAPS, it
'helps' me by changing my folder name to initial capitals e.g. "NAFTA"
to "Nafta". What could it do better?
- It could leave me the hell alone.
I'm the one naming the folder. If I want a folder name in caps, it's my
choice and my responsibility.
- It could change it to initial
capital - ONCE. But if I rename it back to "NAFTA", it should respect
my wishes. (Win2000 fails to do this. It thinks that, even if I
actively rename a folder to a term in full caps, I'm probably mistaken!)
- It could use some intelligence and
look the word up in a dictionary. If it doesn't recognise the word,
leave it alone. If it does recognise it, change it for me, but only
Too little trust 2
I used to go on a networking website.
When I posted a blog on this site,
there was only a Preview button. What you have to do is preview first,
to check the layout, then you get (re)Preview and Submit options. The
blog is only posted when you submit.
I have lost messages on at least 3
occasions because of this over-cautious behaviour! The problem would be
lessened if the result page after clicking 'Preview' had a bright red
banner saying "STOP, you haven't submitted your message yet!!".
What should they do? Users have the
ability to edit their blog messages, so why not trust them to check it themselves?
Example of enough trust
In this screenform, the user can
delete records from a database by selecting one or more checkboxes, and
clicking the button.
In this case, because the user has to
do two separate actions, there is no 'Are you sure..?' prompt. That
would be too much 'help'.
It can sometimes be appropriate to
present an 'Are you sure..?'. The decision comes down to a combination
of: likelihood of triggering the action in error, and severity of the consequences.
The Percentage Game
If in doubt, play a percentage game:
Estimate the chance that a user triggering an action (e.g. delete) is
doing it in error, and multiply that by the pain caused (the severity
of the consequences, out of 100).
e.g. Taking the form above, there's a
probability of 1% that someone clicking the button doesn't mean to
delete the records. Multiplied by a pain of 60/100, .01 x 60 = 0.6
Compare that with: the probability
that the user isn't making a mistake, multiplied by the pain of having
to click the confirmation. In this case, it might be 99% probability x
5/100 likely pain, gives .99 x 5 = 4.95 likely pain.
The prompt is therefore about 8x more
inconvenient than having the chance to make a mistake. That's why, in
that case, it's better to trust the user.
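The percentage game above is easy to mechanize. A minimal sketch (the function name and 0–100 pain scale are taken from the text's own example):

```python
def expected_pain(p_error, pain_of_error, pain_of_prompt):
    """Return (cost of trusting the user, cost of prompting), following
    the percentage game: probability of a mistake times its pain,
    versus probability of no mistake times the prompt's nuisance."""
    trust_cost = p_error * pain_of_error          # pain if we don't prompt
    prompt_cost = (1 - p_error) * pain_of_prompt  # pain if we do
    return trust_cost, prompt_cost

# The delete-records form above: 1% error chance, pain 60/100,
# confirmation nuisance 5/100.
trust, prompt = expected_pain(0.01, 60, 5)
# trust ~ 0.6, prompt ~ 4.95 -> prompting is roughly 8x worse,
# so in this case it's better to trust the user.
```

Running the same function with a higher error probability or a more painful consequence (say, irreversibly deleting an account) tips the comparison the other way, which matches the article's point that the decision is a trade-off rather than a rule.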
|
OPCFW_CODE
|
unable to run mfe app on windows: module not found MFClient.js
Noticed something when trying to run the mfe app on Windows. On Mac, I had no issues starting up my apps. Was wondering if there is a solution for this. Checking the modules folder, I noticed we have a patchNextClientPageLoader.js file that was meant to resolve this issue; however, it doesn't seem to be working.
customer: error - ../../node_modules/next/dist/client/page-loader.js:111:23
customer: Module not found: Can't resolve<EMAIL_ADDRESS>customer: Did you mean<EMAIL_ADDRESS>customer: Requests that should resolve in the current directory need to start with './'.
customer: Requests that start with a name are treated as module requests and resolve within module directories (node_modules, C:\Users\Raymond\next-nx-mfe).
customer: If changing the source code is not an option there is also a resolve options called 'preferRelative' which tries to resolve these kind of requests in the current directory too.
customer: Import trace for requested module:
customer: ../../node_modules/next/dist/client/index.js
customer: ../../node_modules/next/dist/client/next-dev.js
customer: https://nextjs.org/docs/messages/module-not-found
my dependencies:
@module-federation/nextjs-mf: "^5.11.0"
next: 12.3.1
I've been battling with the same issue on Windows. When I add automaticAsyncBoundary: true to the config I'm getting different errors, but they all seem to point to how paths are resolved on different OSs.
extraOptions: {
exposePages: true,
enableImageLoaderFix: true,
enableUrlLoaderFix: true,
skipSharingNextInternals: false,
automaticAsyncBoundary: true,
automaticPageStitching: true
}
This seems to strip the / chars from module paths.
error - C:UsersHarles-HermanPilterDocumentsProjectsLearningext-nx-mfeappshostpages_app.tsx?hasBoundary Module build failed: UnhandledSchemeError: Reading from "C:UsersHarles-HermanPilterDocumentsProjectsLearningext-nxmfeappshostpages_app.tsx?hasBoundary" is not handled by plugins (Unhandled scheme).
Just putting a link to the repo here https://github.com/HarlesPilter/next-nx-mfe where this can be reproduced. @rayng86 Its a fork of the same repo you used to demonstrate an issue in https://github.com/module-federation/nextjs-mf/issues/360 Hope you don't mind.
hey there weren't any additional commits or changes in your forked version aside from our initial commit but I did try the automaticAsyncBoundary: true and can confirm the same Unhandled scheme error you were getting.
customer: info - automatically enabled Fast Refresh for 1 custom loader
representative: info - automatically enabled Fast Refresh for 1 custom loader
host: info - automatically enabled Fast Refresh for 1 custom loader
customer: error - C:UsersRaymond
customer: ext-nx-mfeappscustomerpages_app.tsx?hasBoundary
customer: Module build failed: UnhandledSchemeError: Reading from "C:UsersRaymond
customer: ext-nx-mfeappscustomerpages_app.tsx?hasBoundary" is not handled by plugins (Unhandled scheme).
customer: Webpack supports "data:" and "file:" URIs by default.
customer: You may need an additional plugin to handle "c:" URIs.
customer: [ ready ] on http://localhost:4201
host: error - C:UsersRaymond
host: ext-nx-mfeappshostpages_app.tsx?hasBoundary
host: Module build failed: UnhandledSchemeError: Reading from "C:UsersRaymond
host: ext-nx-mfeappshostpages_app.tsx?hasBoundary" is not handled by plugins (Unhandled scheme).
host: Webpack supports "data:" and "file:" URIs by default.
host: You may need an additional plugin to handle "c:" URIs.
host: [ ready ] on http://localhost:4200
representative: error - C:UsersRaymond
What happens if you use this
const pathMFClient = require.resolve('@module-federation/nextjs-mf/client/MFClient.js');
Ill get a windows VM tomorrow and try running one of these setups in there and see if i can repro it
@ScriptedAlchemy
We found someone to test this on Windows. He's receiving the same error.
Okay let me get a windows box up and ill check it
I just tested changing const pathMFClient = require.resolve('@module-federation/nextjs-mf/client/MFClient.js'); and was able to get the mfe app to start on Windows, so the absolute path does work.
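The mangled C:UsersRaymond… paths in the logs above are the classic backslash-as-escape problem. A small sketch (illustrative only; this is Python, not the nextjs-mf code, just the same class of bug) shows how separators disappear when a later step consumes '\' as an escape character, and how emitting forward slashes sidesteps it:

```python
from pathlib import PureWindowsPath

# A hypothetical project path like the one in the error output.
raw = r"C:\Users\Raymond\next-nx-mfe\apps\host\pages\_app.tsx"

# If a later processing step consumes '\' as an escape character,
# the separators vanish -- producing exactly the "C:UsersRaymond..."
# shape seen in the logs.
mangled = raw.replace("\\", "")

# Emitting forward slashes avoids the problem entirely; Windows
# file APIs accept them.
portable = PureWindowsPath(raw).as_posix()
```

This is why absolute paths produced by a resolver (which normalizes separators) work where string-interpolated Windows paths fail.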
|
GITHUB_ARCHIVE
|
SaaS Solutions for eCommerce
Spend your budget on delivering unique business features rather than reinventing an expensive technological foundation.
SaaS Solutions on Top of the Virto Commerce Platform
A disruptive startup owned by Accelya Group creates innovative ways to define, bundle and personalize offerings in the travel industry.
Fortune 500 Brewery Company
The company developed a SaaS B2B Portal on top of the Virto Commerce Platform, used by distributors and retailers around the world.
An online real estate solution to serve an unmet need in providing digital commerce to a specific audience.
Using the Virto Commerce Platform for Building a SaaS Solution
Everything you need to run a successful and scalable SaaS solution with a minimum of time, cost and risk.
Designed for Fast MVP Delivery
Thanks to its well-designed architecture, the Virto Commerce platform is adaptive and flexible. We provide enablers for building a solution of any complexity on top of the platform with minimal time, cost and risk.
Special License Model for Startups
We realize the license-related challenges which startups face. Being a market disruptor ourselves, we respect and facilitate market disruptors. That is why we developed specific licensing models for startups.
Reliable SaaS Solutions Expertise
The Virto Commerce team has extensive successful experience in facilitating startup MVPs and we share our expertise with our partners and customers. We help our clients to select the right implementation partner and engage our experts when needed.
Virto Commerce Platform for Building SaaS Solutions
- Everything for eCommerce
Digital catalog, cart, orders, accounts, pricing, inventory, marketing, promotions, subscriptions, quotes and everything you need to build a unique e-commerce solution on top of it.
- Modular Architecture
Virto Commerce has a flexible modular architecture that allows developers to select the required granularity of services (micro-service or service-oriented), add new functionality, and discard obsolete modules.
- Open Source
The entire code is on GitHub. No hidden problems, no hidden technical debt; we are open. Developers can read the code and make their decision based on what they see. Extend the system and adapt it to your business needs.
- API-Based and Multi-Channel
Each and every function that you can use in Virto Commerce is available via API. It is easy to connect Virto Commerce with any software or any touch point to provide a multi-channel user experience.
- Customizable & Extensible
You can extend the Virto Commerce Platform with your own code, and it will be still possible to receive updates from the vendor. Design and develop the functionality on top of the Virto Commerce Platform which is 100% suitable for your business requirements.
The Virto Commerce Platform has been designed as cloud-first software for Azure. Since the release of version 3 in 2020, it is possible to run the platform on Linux as well.
Want to learn more about SaaS solutions on top of the Virto Commerce platform?
|
OPCFW_CODE
|
Awesome Thanks! What script has the bless command? Or if you used a different command what is it?
In CloneDeploy 1.2.0 there was a section for editing core scripts in the Global > Imaging Scripts setting. I cannot seem to find that in my deployment of 1.3.3. I just want to make sure I'm not missing some crucial piece of the server setup. There have been a few quirks in upgrading as you know. So if there is a different location or specialized account (although I'm logged in as an admin). Thanks
Of course I tested again today with two devices that imaged just fine from 10.12 to 10.13 with no issue. Here's the log for the first device I tested where on reboot I had a ? folder however once choosing the volume Macintosh HD it booted just fine.
Update on imaging from Sierra to High Sierra. After upgrading to CloneDeploy 1.3.3 and working through a couple of hiccups getting the NetBoot service working, I have been able to image Sierra to High Sierra successfully. I did not have to use the extracted firmware package as chronicled in the link you provided above. Initially when imaging I received a warning from CloneDeploy (thank you for putting that bit of code in) that imaging HFS+ to APFS could result in an unbootable drive. The result, however, has been a perfectly bootable drive. The only issue is that the volume does not get blessed at the end of imaging, so the initial boot will show a question mark folder. This is reminiscent of when you first added Mac support and PXE booting obviously did not have access to the Mac command line and could not bless the drive. So my question is this: Is it possible to add a post script to be run at the end of the Mac imaging to bless the drive? If so, where would I add this in CloneDeploy? Thanks for all the hard work, and hanging in there bucking the Apple Juggernaut.
That worked adding that to the config file. I can now boot into apple netboot, however after the upgrade to 1.3.3 I cannot upload an image. I get an error that mounting the smb share failed. After changing the user passwords in both the web interface and on the server I went through your SMB troubleshooting documentation. Everything looks good until I get to step 6 booting into the console. When I attempt to mount the share with user cd_share_rw and the password that I reset and verified I receive a message that mounting the share failed and permission was denied. Are there certain characters I should avoid when creating this password?
The nbis are in C:\Program Files (x86)\clonedeploy\application\public\macos_nbis\0001\ per your instructions here: https://clonedeploy.org/docs/using-the-macos-imaging-environment/
So change that in the config.ini file? Also when making the nbi?
Here's the latest log from IIS
It gets past the spinning globe and then the progress bar continues to the end and then it just hangs. That error is the only thing I see in the logs for today and the only thing that I've done today is to try and netboot a client. I made the nbi with autodmg and yesterday's download of the 10.13.2 installer.
As I mentioned earlier I uninstalled 1.3.0 beta, deleted the left over clonedeploy folder and then installed 1.3.0 and copied the patch folders for 1.3.3 over. I had to recreate the base url from http://[serverip]/clonedeploy to http://[serverip]/clonedeploy/service/client.asmx/ and looking at that error it looks like some api files may need to be rebuilt.
I finally found this in the logs:
2017-12-21 10:55:48,851 ERROR CloneDeploy_ApiCalls.ApiRequest Response Data Was Null For Resource: api/ActiveImagingTask/RecreatePermanentTasks/
Sorry, I forgot to change .log to .txt. Hopefully this time it comes through.
|
OPCFW_CODE
|
JasonS last edited by JasonS
I’m looking for some help with the project that I am working on. A high level demo project can be found at
I will be running this program on a Raspberry Pi and will be controlling a couple of different devices. I’m new to QML/Python/C++ so I may not use the correct terminology, but hopefully I will make some sense in explaining things. I’ve done various research to get to the point that I am at now, but am struggling with putting everything together so that it works the way I envision. My example is built to control 3 motors and 1 light, read various sensors, and display some results from the sensors on the screen.
Main.qml: This is the main application window that builds the framework of the UI. It includes a menu and a drawer to load various pages.
Page1.qml: This is what I am considering the “home” page; it will be the default page. In my example it shows the temperature that I will be reading from a sensor (as of now a DHT11).
Page2.qml: This page is used to set up some of the controlling settings for my motors. I have 3 motors in my example, but in the end this may change. There is a start time that will be used to turn the motors on, and other values that dictate how long the motors will run.
Page3.qml: This page is used to setup the light. Like Page2, it has a start time and a duration of how long the light will stay on.
For various reasons I’ve chosen to use Python as the framework to interact with the Raspberry PI GPIO and this is where I need the help. From what I’ve read the Python application needs to control the system. By this I mean the Python will load the QML app and be used to interact with the GPIO pins.
I want my program to:
- Run and load the UI
- Get the temperature every X min/sec and update the screen
- Turn on the motors and run for the specified time
- Turn on the light for the specified time.
These activities may or may not happen at the same time, meaning the light could be on at the same time the motors are running and the UI could also be updating the temperature. They could happen in any order after the UI is loaded and it’s possible that while any of these events are running the user is navigating to the different pages in the UI.
My first question is how do I control the loops that update the screen and turn the devices on? Do I put a timer in the QML, in Python, or a mixture? I’m assuming I have to make these calls asynchronously to avoid blocking, so I was thinking about using some code I found that queues callbacks. The article can be found at: https://wiki.python.org/moin/PyQt/QML callback function
Is this the correct (or a possible) way of running multiple threads, so to speak? If using the callback function is a viable option, how is this done? I understand the concept of a callback and I partially understand the sample that is provided, but what I am missing is how this is put into practice. The sample adds 3 callbacks to a queue and then processes them one right after another. Is the only option for this code to run the functions sequentially, or can they be processed as they complete? Assuming in a real-world situation it’s possible that you would want to process one function while another one is doing its thing. If possible with the sample code, how is this code used to process the callbacks as they complete? A small snippet of code showing how to do this would be appreciated, or even a more detailed explanation of what this code is doing and when would be great too.
Second question is what is the best way to run the events based on the schedule that is created? Can this be done in a loop fashion from either QML or Python or should I have my program create and maintain CRON jobs? If the Python script is the one in charge of running the loops, how do you initiate sending data from Python to QML without doing a callback from QML?
Also, I have a couple comments in my QML files that I haven't figured out yet. Specifically in Page1.qml regarding importing a JS file. If anyone has suggestions on that, please share.
I know I’m asking a lot of questions, but it’s what I came up with when trying to wrap everything together. Thank you for any help that is provided.
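As a sketch of the scheduling part of the question (stdlib only, no GPIO or QML; read_temperature and the motor/light handlers are hypothetical stand-ins for the real device code), one event loop can interleave periodic sensor polls with one-shot timed device events without any of them blocking the others:

```python
import sched
import time

events = []  # log of what fired, in order

s = sched.scheduler(time.monotonic, time.sleep)

def read_temperature():
    # Hypothetical stand-in: here you would read the DHT11 and
    # push the new value to a QML-exposed property.
    events.append("temp")

def poll_temperature(interval, remaining):
    read_temperature()
    if remaining > 1:  # reschedule ourselves for the next poll
        s.enter(interval, 1, poll_temperature, (interval, remaining - 1))

def motor_run(duration):
    events.append("motor_on")   # hypothetical GPIO call
    s.enter(duration, 1, lambda: events.append("motor_off"))

def light_run(duration):
    events.append("light_on")   # hypothetical GPIO call
    s.enter(duration, 1, lambda: events.append("light_off"))

# Schedule: two temperature polls 0.4 s apart, plus a motor burst and
# a light burst that overlap each other and the polling.
s.enter(0.0, 1, poll_temperature, (0.4, 2))
s.enter(0.1, 1, motor_run, (0.2,))
s.enter(0.2, 1, light_run, (0.4,))
s.run()  # blocks until every queued event has fired
```

In a real PyQt/PySide application the same pattern maps naturally onto QTimer objects running on the Qt event loop (a repeating timer for the sensor poll, single-shot timers for the motor/light off events), which is generally preferable to a separate scheduler thread because the handlers can then touch QML-exposed objects safely.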
|
OPCFW_CODE
|
by Rishidev Chaudhuri
It is an oddly well-kept secret that mathematical learning is a very active process, and almost always involves a struggle with ideas. To a large extent, this is due to the nature of mathematical intuition: grasping a mathematical idea involves seeing it from multiple angles, understanding why it's true in a broader context and understanding its connections with neighboring ideas. And so, when you sit down to read through a proof or the description of an idea, you rarely do just that. Instead, digestion more often involves settling down with a pen and a piece of paper and interrogating the concept in front of you: “What is this statement saying? Can I translate it into something else? Can I find a simpler case that will help me gain insight into this general context? What about this makes it true? What would be the consequences if this statement were false? What contradictions would I encounter if I tried to disprove it? How does this concept reflect those that have gone before? How do the various assumptions used to prove this statement factor in? Are all of them necessary? Are there other ways to frame this fact that seem fundamentally different?” And so on. And this interrogation often involves taking your pencil and paper on long digressions, slow rambling explorations of ideas that help clarify the one you're trying to understand.
Similarly, proving a mathematical statement or solving a problem is an unfolding of false sallies and blind alleys, of ideas that seem to work but fail in very particular ways, of realizing that you don't understand a problem or a concept as well as you thought. And again, these are not wasted. In almost every case, if someone were to just give you a proof or a solution and you didn't either try to come up with it first or actively interrogate it once you had it (which is almost the same thing), you'd learn that the statement was true, but learn very little about why it was true or what it meant for that statement to be true. And much of the learning in a math class happens not in the lectures but afterwards, in the time spent on problem sets (and, if you had a choice between attending the lectures and doing the problem sets, you should always pick the latter).
Unfortunately, most people make it through a high school mathematical education without being taught this. This has unfortunate consequences and makes mathematical learning exceedingly vulnerable to expectation and self-belief, so that it is often seen as something you either can or can't do, and many people see the struggle as a sign of a lack of ability rather than as an intrinsic part of the learning. There are certainly children who, for whatever accident of genetics, upbringing or attentional prowess start out by being quicker at math. But this seems swamped by differences in temperament and confidence, or by the effect that initial quickness has on confidence. How you engage with the setbacks of learning seems more important than how quick you are1.
This was strongly brought home to me when running math classes. There would inevitably be two groups of people who could take the same amount of time to solve or almost solve a problem but be quite differently convinced about their mathematical ability (which, over a semester, ends up being self-fulfilling). Some students, ten minutes into wrestling with a problem, would find progress difficult and take this as a sign that they were learning what didn't work, were spending time understanding the problem, were edging towards a solution, were exercising their reasoning ability and so on. Others would start off anxious and ten minutes in, at about the same stage of reasoning, would come to me convinced that they were never going to figure it out, and that they were dumb or not good at math2. And yet the two groups didn't seem to have wildly differing levels of intuition and for the second group reassurance that they were participating in the right process or helping them follow the path they were on, even if it was headed in the initially wrong direction, would often lead them to the same solution. Strangely, while some of the job of a math teacher seems to be to help with mathematical intuition, a large part of the job seems to be palliative, compensating for something that they should have been told or learned but hadn't: be patient with yourself.
One of the inevitable tragedies of specialization is that most people don't take classes in most areas after college or high school. For some this is compensated by an amateur interest in history, say, or philosophy. But for the variety of reasons I mentioned, the reasons that make students think that mathematics proficiency is an extreme example of a natural talent and that it is hopeless to do math without this essential ability, few people seem to maintain an amateur interest in mathematics or study mathematics recreationally.
If it isn't clear already, I think this is a huge pity, especially because it is often motivated by a false assessment of one's mathematical ability. And it is also a pity because most people stop doing math just at the point when the fun stuff starts, just when they've worked through most of the tedious arithmetic and are finally ready to embark on sweeping journeys of abstraction. It's like taking dance classes but never going dancing.
And, as almost anyone with a sustained interest in mathematics will tell you, math contains some of the loveliest conceptual and aesthetic pleasures available to us. It is the locus of some of the grandest and most elegant ideas we know, the site of struggles to explore the nature of infinity, to abstractly describe form and space and to reason about the nature of logic. There is a profound sculptural beauty to the edifices of the great mathematical theories. Mathematical thought is as much a part of our intellectual and aesthetic heritage as the arts and philosophy. It would be a shame to miss this grandeur if you didn't have to.
Mathematics is also a very pure example of the pleasures of intellectual play. Large branches of math emerge from someone writing down a few rules and seeing what they can construct within those rules, asking what manner of objects a set of rules gives rise to, what conceptual universe they call into being, and how the objects interact within that universe. It feels like frolicking in some fantastical Borgesian garden.
There are many other pleasures, of course. There is the pleasure of learning the mathematical language, with its conceptual precision and logical power, and the pleasure of translation, as you begin to see what is general and abstract in patterns in order to mathematize them. And there are the smaller material pleasures of doing mathematics, like the satisfying tactility of scribbling over sheets and sheets of paper as you explore an idea. And mathematical notation is charming, with the Greek letters and the multiple squiggles and the host of symbols, each with its own history. It is the same charm I imagine for alchemy or highly symbolic esoteric teachings.
Learning math and working through theorems or problems takes time, but so what? There is no hurry, and you'll be exploring some of the deepest ideas we have. And the pleasure of investigating an idea, exploring it from every angle and then the thrilling leap (or slow clamber) of finally having it reach your intuition is unparalleled.
So where to start? Conveniently, many mathematics textbooks have no prerequisites (at least in the formal sense, in that they define everything they need but a lack of experience might make the logical steps harder to follow). For recreational study, with an eye towards aesthetics, either real analysis (which studies the intricate structure of the number line) or abstract algebra (which attempts to abstract out and study the structure of relations like addition or multiplication) are good places to begin. The Wikipedia articles on both have links at the end to online textbooks, and enough universities put coursework up online that it's pretty easy to track down resources for study.
1This is all anecdotal, of course.
2Unsurprisingly, these groups tended to be somewhat gendered.
|
OPCFW_CODE
|
Oh god not another Fedora user...
Originally Posted by gilboa
The reason for the "premature exposure" is because Phoronix was told by Valve that they were switching to Webkit and releasing Linux, Mac, and Windows versions of their new Steam client simultaneously. Well, hopefully it will be simultaneously, otherwise they at least told them they would release a Linux client, but that's the reason behind all their close tracking of Steam issues is to drum up hype. That's why Phoronix has always just said "see! see!" and everyone went "wut?...pfff whatever" because Phoronix new for sure, but couldn't say it due to their contract with Valve. Everyone here has read several articles about it, and Slashdot and other sites have linked to them, so that's a lot better than a SINGLE article about it. Quite simply, "hype" has the potential to reach a larger audience, because when it is finally released, it will probably reverberate more loudly among the "primed" audience.
You all just watch, I'll be proven correct when the new Steam is finally released. :P
"new"..."knew"...*sigh* I love you 1 minute edit rule! ^^
Yeah because graphics on linux suck because Xorg, mesa, whatever sucks. Then along comes fedora that employs people to work on Xorg, and turn it into a decent display system.
Originally Posted by Joe Sixpack
Then proprietary graphics drivers don't work on this new system, and fedora sucks. Damn you fedora fanboys...
As for me, still waiting for KMS on proprietary display drivers. And xrandr 1.2.
Last edited by [Knuckles]; 04-26-2010 at 04:10 AM.
Out of which KMS will likely never happen, xrandr maybe some day. Closed drivers afaik have something somewhat similar to KMS (though focus in functionality is more on X, not consoles).
Originally Posted by [Knuckles]
I really wish that was true, but I don't see that happening. Here's my thinking/observations:
Originally Posted by Yfrwlf
1) The Windows client update is scheduled to be released specifically today, whereas the Mac version will be released "by the end of the month".
2) It makes more sense to do these on separate days so that you don't have your technical resources split if you run into multiple issues.
3) They've hyped the Mac version for two months. Why would they do that, and keep the Linux version "secret"?
4) It makes sense to release the Mac and Linux client in two different marketing blitzkriegs - so that you get free advertising from the tech sites for Steam on two separate occasions.
Actually, it's a policy of supporting workstation users first and foremost- gaming use and general use by Linux users doesn't largely register on their radar (at least it didn't some four years or so ago...that's somewhat changed with them giving the xorg community unfettered access to most of the secrets to driving their hardware...).
Originally Posted by barbarbaron
As an observation, much of the CAD, etc. stuff that works under Linux uses immediate-mode operations (yes...) and makes much less use of the "fastpath" stuff that games tend to rely on, or things like shaders- mainly because they're using fairly mature codebases that the companies are loath to mess with.
This has the result that if it worked well, it stays working reasonably well. That's why NVidia's seems to be better- they got more of it "right" out of the gate. AMD's drivers started off with less robust answers for things until recently, and unless it's a workstation vendor or someone like iD or Epic that complains about something busted, it may take a bit to get someone on it, because resources on the Linux side of things are somewhat limited, even in the NVidia camp. This is why, while it's still not fully showing fruit, I'm glad that AMD had the wisdom to allow us a shot at making credible FOSS drivers for their parts.
Michal... While it's nice that you get scoops like this- it's probably "better" if you wait a smidge before disrupting things for the vendors like this. If I got my planned rollout leaked like this, I'd have at least fleeting second thoughts on the matter as a result.
Originally Posted by phoronix
Which is unfortunate, because the modern API is so much faster, and geometry shaders are especially helpful for subdivision.
Originally Posted by Svartalf
|
OPCFW_CODE
|
Novel – Chaotic Sword God
Chapter 2912: The Azure Ink Grandmaster’s Condition
“You can’t tell me?” The Azure Ink Grandmaster closed his eyes slowly and murmured to himself, “I do know some clues leading to the Sacred Blood Fruit of Ways, but I have never told anyone about this secret, yet you’ve come all the way from the Cloud Plane to specially ask me about any clues regarding the Sacred Blood Fruit of Ways. You must have obtained some very precise information.”
“Leader of the Tian Yuan clan, where did you learn of this? How do you know that I have clues leading to the Sacred Blood Fruit of Ways?” the Azure Ink Grandmaster asked with an ugly expression.
However, when the Azure Ink Grandmaster heard of the Sacred Blood Fruit of Ways, his expression immediately changed. Even his bearing became rather unstable, his thoughts thrown into chaos.
“Please forgive me, but I can’t answer you about this, grandmaster.”
Jian Chen saw his reaction and immediately rejoiced inside, as if he had grasped a sliver of hope amidst despair. He became hopeful once more.
However, Jian Chen was unable to hear the conversation between the Infinite Prime elders. He had already entered the depths of the Pill King clan under the great elder’s lead, arriving before a tower in the end.
After giving Jian Chen a simple greeting, he directly invited Jian Chen into the Pill King clan.
But Jian Chen’s silence seemed to become a form of admission in the Azure Ink Grandmaster’s eyes. The Azure Ink Grandmaster let out a long sigh. “Forget it. Since you’ve been told by the first majesty of the Heavenly Palace of Bisheng, I can forget about getting my hands on this Sacred Blood Fruit of Ways.” After a slight pause, the Azure Ink Grandmaster continued, “Leader of the Tian Yuan clan, I do know where a Sacred Blood Fruit of Ways resides, but I won’t tell you the location for nothing. You must offer something in exchange.”
This wait lasted for four hours. The ancestor of the Pill King clan, the Azure Ink Grandmaster, finally appeared.
Hearing that, the great elder smiled bitterly. “That’s a funny joke, leader of the Tian Yuan clan. When you passed by planet Tianming back then, you kicked up quite a storm here, so basically any important figure on planet Tianming will know about you.”
“Hehehehe, the leader of the Tian Yuan clan has journeyed such a great distance from the Cloud Plane to planet Tianming just to visit this old man. I truly feel honoured.” The Azure Ink Grandmaster sat in the main seat and was extremely amicable.
However, never did he think the Azure Ink Grandmaster would actually have some clues regarding the Sacred Blood Fruit of Ways.
Jian Chen did not set foot inside immediately. Instead, he stopped before the tower, furrowing his brows slightly and studying the place. He said, “This should be a high quality god artifact.”
“It’s already been a very long time since we left the clan, remaining here to study the Way of Alchemy. Our understanding of the Cloud Plane is still limited to a century ago. Perhaps some new organisation has appeared on the Cloud Plane in this century…”
The Azure Ink Grandmaster was a First Heavenly Layer Grand Prime. He was extremely old in appearance, and he was extremely thin as well, basically just skin and bones. He looked like an old man who already had a foot in the grave.
The Azure Ink Grandmaster must have known something regarding the Sacred Blood Fruit of Ways. It was even possible that he was in possession of it right now.
“Please tell me about any clues leading to the Sacred Blood Fruit of Ways, grandmaster. You will have my great gratitude,” Jian Chen said eagerly. He was extremely ecstatic. The lord of the Heaven’s Link Peak had only told him to come here and try his luck. He had not guaranteed that the Azure Ink Grandmaster would definitely have clues leading to the Sacred Blood Fruit of Ways.
“Don’t tell me a supreme expert peered through the heavenly secrets and deduced the matter we tried so desperately to hide?” The Azure Ink Grandmaster slowly opened his eyes and stared straight at Jian Chen. He said, “Across the Saints’ World, there aren’t many people capable of something like that. I can even name each of them. Leader of the Tian Yuan clan, did the first majesty of the Heavenly Palace of Bisheng tell you about this?”
|
OPCFW_CODE
|
In the past few years, I've heard a lot about Rust.
As someone that hacks on computer graphics and low-level infrastructure libraries, it seems relevant to my interests. I decided to make a small demo – of a procedural planet generator – and see how it went.
The planet starts as an icosphere:
I use a 3D Perlin noise field to offset points, producing terrain:
A little bit of random jitter stops it looking too clean:
Biomes are assigned based on distance from the center of the sphere:
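The displacement and biome steps can be sketched roughly like this (a hypothetical sketch with a cheap stand-in noise function; the demo itself uses the `noise` crate's Perlin implementation, and the biome thresholds here are invented):

```rust
// Stand-in for 3D Perlin noise, returning a value in roughly [-1, 1).
fn pseudo_noise(p: [f32; 3]) -> f32 {
    let v = (p[0] * 12.9898 + p[1] * 78.233 + p[2] * 37.719).sin() * 43758.5453;
    (v - v.floor()) * 2.0 - 1.0
}

// Push each icosphere vertex along its normal by the noise field,
// then pick a biome from the resulting distance to the center.
fn displace_and_classify(vertices: &mut [[f32; 3]], amplitude: f32) -> Vec<&'static str> {
    vertices
        .iter_mut()
        .map(|v| {
            let len = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
            let n = [v[0] / len, v[1] / len, v[2] / len]; // unit-sphere normal
            let h = 1.0 + amplitude * pseudo_noise(n);    // displaced radius
            *v = [n[0] * h, n[1] * h, n[2] * h];
            // illustrative thresholds, not the demo's actual values
            if h > 1.05 { "snow" } else if h > 1.0 { "grass" } else { "sand" }
        })
        .collect()
}
```

A real implementation would sample the noise at several octaves and add the random jitter described above before classifying.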
The ocean starts out as a big blue icosphere:
I add two levels of specular highlights:
Then, I use a per-vertex noise value to jitter the edge of the highlights:
Finally, this noise also tweaks the baseline blue, to give it more texture:
Here's what it looks like when we first combine the terrain and the ocean:
The atmosphere is a subtle effect: a translucent sphere that's slightly larger than the planet itself.
Some of the mountains poke above it – better bring your oxygen tanks!
The clouds are my favorite effect in this demo! They start as clusters of quads, jittered around random points:
Fragments outside of a circular region are discarded, giving more rounded silhouettes:
We apply transparency to feather the edges of each circle:
This creates an interesting effect where clouds at the edge of the planet stack up, producing an undesirable white wall.
To fix this, I add a special-case to the fragment shader that causes the clouds to fade to transparency as they cross around to the back side of the sphere.
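Sketched in Rust rather than GLSL, the per-fragment logic for these cloud steps might look like the following (a hypothetical simplification, not the demo's actual shader):

```rust
// Per-fragment cloud alpha: discard outside the circle, feather the
// edge, and fade clouds that wrap around to the planet's back side.
// `facing` = dot(cloud position normal, view direction): 1 = front, -1 = back.
fn cloud_alpha(dist_from_center: f32, quad_radius: f32, facing: f32) -> Option<f32> {
    if dist_from_center > quad_radius {
        return None; // discarded fragment: outside the circular silhouette
    }
    // transparency feathering toward the circle's edge
    let edge = 1.0 - (dist_from_center / quad_radius).powi(2);
    // special-case fade to transparency on the far side of the sphere
    let back_fade = facing.max(0.0);
    Some(edge * back_fade)
}
```

In the actual shader this would of course run on the GPU, with `None` corresponding to a `discard` statement.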
Finally, each cloud quad draws a randomly selected subset of a Perlin noise field, producing a nice, fluffy effect:
Here's the full planet, with terrain, oceans, atmosphere, and clouds:
The stars are a Perlin noise field, blurred and level-adjusted to create a field of the desired density:
Nothing too fancy, but they do the job:
Enough about the graphic effects – how was Rust?
It was a mixed experience, but I can see the potential. Notes are below, and reflect my own ignorance as much as limitations in the language and libraries.
I also believe many of these issues are under active development
(e.g. incremental compilation, the
and the ergonomics improvements in Rust 2018).
This hurdle was somewhat self-inflicted:
I wanted to have a live-coding setup,
where I could edit a source file and see the changes
immediately. I implemented a custom system to do this,
with shared libraries and
unsafe calls to swap them in and out.
This worked fine, but the compiler was still quite slow (about 5 seconds for a debug build), so it didn't feel like a real-time development environment.
For graphics, I used
glium, since I'm comfortable with OpenGL
and wanted to test out the Rust-flavored bindings.
It worked okay, but was generally high-impedance,
and I had trouble finding effective documentation.
There were a few things that I could never get working – in particular
high-DPI rendering. The screenshots above were captured by
only rendering the lower-left quarter of the window;
the screenshot function doesn't realize that my screen
has a higher pixel density than usual, so if I rendered
an entire window, it would only capture a quarter of it.
Error handling was another rough patch.
When doing a series of OpenGL operations, I wanted
to return an error if any of them failed. However, all of the different
functions had their own error types, so I had to modify the top-level
function signature to return a
Box<Error>. The only way I figured this out was educated guessing,
since I'm coming from a C++ background,
plus reading the docs about trait objects.
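The pattern in question can be sketched like this (illustrative error and function names, not the demo's actual code; in current Rust the return type is spelled `Box<dyn Error>`):

```rust
use std::error::Error;
use std::fmt;

// Two unrelated error types, standing in for the distinct error
// enums that different OpenGL wrapper calls return.
#[derive(Debug)]
struct ShaderError(String);
impl fmt::Display for ShaderError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "shader error: {}", self.0)
    }
}
impl Error for ShaderError {}

#[derive(Debug)]
struct BufferError(String);
impl fmt::Display for BufferError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "buffer error: {}", self.0)
    }
}
impl Error for BufferError {}

fn compile_shader(ok: bool) -> Result<(), ShaderError> {
    if ok { Ok(()) } else { Err(ShaderError("bad GLSL".into())) }
}
fn upload_buffer(ok: bool) -> Result<(), BufferError> {
    if ok { Ok(()) } else { Err(BufferError("out of memory".into())) }
}

// `?` converts each concrete error into Box<dyn Error> via the
// blanket From<E: Error> impl, so one signature covers both.
fn setup(shader_ok: bool, buffer_ok: bool) -> Result<(), Box<dyn Error>> {
    compile_shader(shader_ok)?;
    upload_buffer(buffer_ok)?;
    Ok(())
}
```

The trait-object box erases the concrete type, which is exactly why a single top-level signature can absorb every callee's error type.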
I still don't understand how Rust modules work, despite having read the docs.
On an organizational level:
- Why do they need to be in subfolders sometimes?
- Is mod.rs a special magic name, or just a convention?
I basically moved files at random until the compiler stopped complaining at me.
Even within a single file, I find the logic confusing. Look at this block of imports:
- Why do I need to import
- Why must I import
- Why didn't I need to declare extern crate glium?
Lifetimes and borrowing
No issues here, surprisingly enough.
The only pattern that got annoying was calling a function on an
Option<T>, and having to type
I wished there was a way to automatically handle the conversion to a
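The annoyance described above looks roughly like this (illustrative names, not the demo's actual code):

```rust
// Calling a method on the value inside an Option<T> means spelling
// out the as_ref()/map dance by hand every time.
fn planet_name_len(name: &Option<String>) -> usize {
    name.as_ref().map(|s| s.len()).unwrap_or(0)
}
```

(Since this post was written, `Option::as_deref` and similar helpers have reduced, though not eliminated, this boilerplate.)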
It's very nice being able to drop crates into your project with a single line. I used a handful of external crates (noise, random number generation, image processing, etc), and found them to be high-quality.
I'm still a bit uncomfortable with the uncurated, flat namespace, which – in conjunction with a small standard library – puts the onus on the user to do all of the research to find which crates are de-facto ecosystem standards.
The code is here, and is very unpolished (lots of compiler warnings, etc).
At this point, I'll be doing more experiments in Rust, but am not ready to switch from my usual C++ / Qt stack (and didn't expect to be!).
As someone that writes bare-metal and embedded Linux software at work, I'm also excited for the future of embedded development in Rust!
|
OPCFW_CODE
|
This is a guest post from Accela, a platinum sponsor of the 2015 Code for America Summit.
At Accela, we’re lucky to be able to work with innovative public officials from across the country to find ways to make government work more efficiently.
Increasingly, at the heart of our work with governments is data – finding ways for governments to make better use of their own data to make more informed decisions and to share their data with external partners like CfA Brigades.
Over the past year, our work in this area has surfaced a new imperative – finding ways for governments to collaborate more effectively. As with much of the innovative work being done in the civic tech space, data lies at the heart of our efforts.
We believe that developing and implementing new shared standards for open data will help usher in the next phase of open government and civic innovation, and we’ve been working with cities and counties from across the country and other civic tech companies to help make this happen.
But if we’re going to be successful in these efforts, we’ll need your help.
Why Data Standards Are Important
Data standardization across governments is a critical milestone that must be realized to advance the open data movement, to fully realize all of the potential benefits of openly publishing government data. More and more people in the civic technology community are starting to realize the importance of this milestone and more and more energy will be devoted to creating new standards for open data in the months and years ahead.
The best example of what is possible when governments publish open data that conforms to a specific standard is the General Transit Feed Specification (GTFS). Developed by Google in partnership with the Tri-County Metropolitan Transportation District of Oregon (TriMet), GTFS is a data specification that is used by dozens of transit and transportation agencies across the country, and it has all of the qualities that open data advocates hope to replicate in other data standards for cities.
Transit authorities that publish GTFS data see an immediate tangible benefit because their transit information is available in Google Transit. Making this information more widely available benefits both transit agencies and transit riders, but the immediacy with which transit agencies can see this benefit makes GTFS particularly valuable.
The GTFS standard is relatively easy to use and presents a low barrier to entry for transit agencies being asked to produce open data. In addition, it’s an inherently usable format for consumers of GTFS data. In fact, the ease of use of GTFS has spawned a cottage industry of transit applications in cities across the country and continues to be used as the bedrock set of information for transit app developers.
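To make that low barrier concrete: a GTFS feed is just a zip archive of plain CSV text files with agreed-upon column names. A heavily trimmed, illustrative fragment (not a complete, valid feed; the `#` lines are annotations, not part of the format) might look like:

```
# stops.txt — where vehicles stop
stop_id,stop_name,stop_lat,stop_lon
S1,Main St & 1st Ave,45.5231,-122.6765

# stop_times.txt — when each trip reaches each stop
trip_id,arrival_time,departure_time,stop_id,stop_sequence
T1,08:00:00,08:00:00,S1,1
```

Because the format is flat CSV, a transit agency can produce it from a spreadsheet export, and an app developer can consume it with any CSV parser.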
How You Can Get Involved
Accela is leading a new initiative to develop a data standard for building permits – data that is of great importance to cities and counties across the country.
Building permit data can provide huge insights to those working to improve communities. Permit data can be used as a proxy for economic activity and allow for insights into how an upswing (or downturn) in the economy plays out at the community level. It might show the changing character of neighborhoods, and how gentrification is playing out in cities.
The Building and Land Development Specification (BLDS, pronounced “builds”) is being developed collaboratively by a consortium of civic tech companies using a process that is transparent and open to the public. Using tools like GitHub, waffle.io, and Slack, we’re hoping to not only build a new data standard, but also to help develop a blueprint for building data standards that can be used over and over again in the future.
Anyone who is interested in helping develop the BLDS standard, who would like to start using building permit data, or who wants to view the work of the consortium to date can go to the GitHub repo being used for this work. Anyone can open an issue, ask a question or make a suggestion. Governments that want to start publishing building permit data in the BLDS format can ask questions or obtain assistance by emailing email@example.com.
At Accela, we believe that shared standards for open data will help usher in a new era of innovation in civic technology.
We hope you’ll join us in helping to build this future.
|
OPCFW_CODE
|
Vehicle & new item storage
This past year we started the transition to our new storage technology, which is going to be the foundation for the offline application support we are adding in the next releases. It is a continuous process that grows our application and matures it into a full Progressive Web Application. The transition was necessary to dramatically improve the load time of our web pages.
We are proud to announce that 95% of the application now uses this new storage mechanism exclusively and that finally, both Items and Vehicle modules are compliant.
Brand new inventory system
After an in-depth analysis that ended around December 2019, we realized that the inventory was not living up to our quality expectations. Sometimes, to take a step forward, you have to take one backward. Hence we decided to completely restructure our inventory system. We started by differentiating between items and elements of the stock. One can now easily create Items, Warehouses, and Vendors from their own windows. Once the setup is ready, the user can access the Stock page and start creating transactions (adding, moving, and removing items from the warehouses).
This screen allows you to move all your goods between warehouses, and to load and unload material from your stock. From here, one can access the Item Detail page, which summarises all the information about a specific item. From the detail page, one can see all the stock available for a specific item, see each item's status (assigned, available, out of order, in maintenance, needs maintenance), update its status, assign or return it to and from a user or group, and create one or multiple alerts (today you can create time-based alerts; the next release will allow alerts based on quantity). Overall we believe this new version of the inventory is much easier to navigate, as it properly divides the different logical units within a warehouse.
This release is just the first version, and we are already working to extend the functionality of the warehouse. In particular, we want to add more ways to create an alert (quantity in a single warehouse, quantity across all warehouses), we want to add easier options to load items into the warehouse (loading from a file, and from a scanner through the mobile apps), and we want to improve all the vendor management.
New detail page
Orchestra now has a new design for the detail page. The detail page is not an entirely new concept in the application, but we finalized it into a component so that we can easily reuse it across our application. The idea is that modules that need deeper or extended functionality needed a standardized interface to control it. The list and panel are excellent for providing basic functionality on content, but for more complex operations, a new control area was needed. This is why we created the detail page, a new area where users can access specific functionality.
Up till today, most of the interaction with the application has relied on side panels; now that we are adding secondary and tertiary functionality, we needed a different approach. This is the reason that brought us to the overlay, a small window that opens up where the user clicks. The overlay allows specific functionality to occupy a limited amount of space within its context. The result improves navigation clarity and speed; it enables interactions to be streamlined around their presentation, helping users stay in context while they access even the most remote application functionality.
The inventory and vehicle forms of the previous version sounded a wake-up call in the very early stages after release. The use of this system was confusing: there were too many parameters available, and the separation between the mandatory and optional parameters was unclear. After some prototyping, we came up with a simple and elegant solution that allows users to customize their modules according to their needs. Initially, the form shows only a tiny portion of the data, the mandatory and most important fields, usually from 2 to 4. The user is now able to add and remove sections and fields at will. We believe this is another step forward in making Orchestra easier to understand, faster to use, and more beautiful to see.
It's fundamental for the application to communicate to its users when something needs their undivided attention. Different situations, like an item's expiry date approaching or a warehouse item nearing the end of its availability, needed a clear way to alert the institution that action is required in a specific area of the application. To this end, we supercharged our navigation component, the side menu, to host a new area. The Alert Display summarises what is pending in the application and clearly distinguishes what is about to expire from what has already expired. Clicking on an alert takes the user to the relevant page, making it easy to understand what needs attention. On top of the alerts, we extended our tables to highlight rows that are expired or about to expire, and we added the possibility to filter a table down to only what has an active alert. We believe these three mechanisms together enable your organization to always stay on top of your deadlines.
Application tags (objectives, situations, groups, and events)
Our application provides a broad set of functionality, resulting in a large amount of data to navigate. Talking with you, we understood that your institution is divided into logical groups/services, so most of the time a user only needs to focus on content that relates to their division. That is the reason we added the possibility to group Events, Situations, Groups, Objectives, or Tasks within their logical structure. Thanks to application tags, this is very easy today: one can define application tags for a specific event and efficiently use the filtering options to display only what is relevant.
Events and situation archive filtering
We know that data is critical, but too much data is overwhelming and too little is useless. Following the same reasoning as the application tags, we realized that the event list was starting to grow out of control; it was full of archived events and had become difficult to navigate. We added a default filtering option that ensures one sees only current events. From the table filtering, when necessary, one can show or hide the archived events, as one would expect. The same concept has been applied to the situation list.
We have been working very hard on our mobile applications lately; performance and quality were at the center of our efforts.
These last months have seen significant advancement in the quality of our mobile applications. We have been focusing on creating a structure that enables our engineers to create automated tests every time we add some functionality. This approach is a huge step forward for our organization, and it’s going to result in a more stable product, fewer regression problems, and finally, happier customers!
On top of the quality, we have been finalizing our release strategy. After using the business program created by Apple for our particular situation, we decided that we are going to move Orchestra from the private to the regular public store. The complicated and user-unfriendly process Apple provided us with did not meet our quality standards.
While all these structural changes were being discussed and carried out, we finally enabled our mobile applications to create private and group channels. It's a straightforward feature, but one that turns our mobile application around, allowing institutions that trust their users to let them start communications proactively. Channel creation is limited by the internal communication role, so at any time one can decide whether or not users are allowed to create channels.
Our mobile applications already allowed users to read and download attachments from both the task and communication modules. Many users requested that both modules also allow creating attachments. We are proud to announce that our users can finally take a picture and share it on their favorite communication channel, so that the organization is always aware of critical situations.
This last release was massive: the transition to the new storage technology was very demanding, and the iteration on the new inventory system asked the best of our professional selves. We are proud to deliver version 1.7 to you, and we are thrilled to hear your feedback. As you know, when we go through these massive transitions, something can always fall off our radar, so we thank you in advance for notifying us as soon as possible of any quirks you might encounter. As always, it is our priority to ensure the best quality in the shortest amount of time.
Internally, we continue to assess and iterate on our development processes. We are excited to share that, starting next week, we are moving to a new iterative development process that enables our team to release much more frequently, in smaller chunks. We believe this process will allow you to see the continuous improvement of our application, to provide feedback quickly, and to stay closer to the latest developed features. We think this is a win-win for everybody, and we can't wait to get it all going.
|
OPCFW_CODE
|
You can watch Aaj Tak live on Youtube. Aaj Tak — Sabse Tez. Tez News. Read the latest and breaking Hindi news on amarujala. UC News India provides latest news, breaking news, popular videos, top headlines and news related to politics, bollywood, entertainment, movies, cricket etc.
Toggle navigation pip install pycricbuzz Features. This video is unavailable. I am writing a program that recognizes speech. The light that comes from computer and mobile phone screens has a real effect on the human circadian system, especially at night. Well, there is something Udacity to continue to Microsoft Azure.
From google search result , it will check each and very URL 3. Conceived by the founders and key advisors of the company, the ability to get the latest cricket scores is today limited to those with deep pockets.
Added links to our open-source Github repository. Functional Interfaces and Lambda Expressions.
CricClubs offers match schedule creation tool, points table, comprehensive player ranking, batting, bowling, Deprecation Notice: GitHub will discontinue the OAuth Authorizations API, which is used by integrations to create personal access tokens and OAuth tokens, and you must now create these tokens using our web application flow.
Open the settings 4. Note that we will be requiring requests, Beautifulsoup and twilio modules to run the script. Using the last known timezone for the user. Cricbuzz and Cricinfo have a software that each of them has developed separately to do the live scores.
Each time the snake eats an apple its body grows. You can go as far back as 30 days ago, and can ask for up to a 48 hour window of time in a single request. The Facebook API is a platform for building applications that are available to the members of the social network of Facebook.
Cricbuzz is a team of and based out of Bangalore. That one is perfect enough to develop the Applications by their partners for themselves. Listen now. Warning: The API may change without advance notice during the preview period. For detailed explaination with output, visit pycricbuzz blog.
A Java interface to cricbuzz, with options to get live scores and live commentary. Scopes provide limits to API tokens.
WhatsApp's Click to Chat feature allows you to begin a chat with someone without having their phone number saved in your phone's address book.
Google last year started making it easier to access some of its websites such as Google Docs, Slides, Sheets and more with the docs. Hello World.
PDF - Complete Book Concurrency API improvements. It makes it easy to pipeline multiple asynchronous operations and merge them into a single asynchronous computation. Select and enable the Packagizer in the left menu 4.
The snake must avoid the walls and its own body. Freaking fast Everything is asynchronously cached for a super-fast response.
The client makes it easy to browse, install, and keep track of updates on your device. Contact sales accuweather. Trucks strike to get bigger as their talks fail with insurance regulator The truckers strike against hike in third party insurance premium is set to escalate as talks between the insurance regulator and the truckers brokered by the road transport ministry failed on Monday.
In this game the player controls a snake. Contact The Globe - The world's most visited web pages!
To lookup from latitude-longitude to location name or from location name to latitude-longitude conversion, use Google Maps Geocoding API.
It is where a model is able to identify the objects in images. My Using the last known timezone for the user. The main goal of the project is to provide programming practice and knowledge sharing. Display top 3 results from all CrossFit regionals in current year.
The last known timezone is updated whenever you browse the GitHub website. List pending team invitations. Collection API improvements. Installing pycricbuzz. If you experience any issues, contact GitHub Support. Spring JDBC. Should I remove it from my code manually.
Find more data about mylivecricket. But the input into that software is done ball-by-ball, manually.
The objective is to eat as many apples as possible. Update existing code to Kotlin. Start your free trial today! Comprehensive up-to-date news coverage, aggregated from sources all over the world by Google News. My problem is I want to start recording audio only when som Contributed to an open source python module that fetched live cricket sport scores from a cricbuzz.
Some of those codes contain some private API key that I don't want to publish. Pretty URLs Apache. Get information upcoming, live and recently concluded matches series name; match status; venue; toss; match official; squads Cric API is a product of Wherrelz Corporation.
Click on Import 4.
|
OPCFW_CODE
|
So, I'm guessing that most of you have heard of the 3DS... it's the successor to the DS, with true 3D effects and has some DSi features like a camera. If you don't know what it is, then look it up on the Internets.
Now, I'll introduce this topic in ways that most people don't... am I the only person who has no interest in this thing? It just seems so overhyped. For starters, the lineup seems weak (then again, that happens with a lot of systems), and as of now, out of all these games, only a few even remotely interest me, like Mario Kart 3DS (let's hope that they fix the messed up item balance from MKWii).
And it's also overpriced. It's supposed to be US $250, which, although less than the Japanese price (equivalent to US $300), is still high, and it's more than any other handheld that I've seen (at least Nintendo ones from what I can remember). Sadly, it seems that the trend with handhelds is to get more expensive over time, and the price is also the same as the Wii's when that first came out.
Next, the 3D effect. Since I'm sensitive and get dizzy easily, I'm probably not going to enjoy the 3D effect at all. I saw Avatar sometime last year, and while it was a good movie, the one thing that it didn't need was the 3D stuff. I almost got sick from that. Even worse, I've heard that Nintendo warned that young children shouldn't play it, which is not a good sign. Let's hope that this isn't the next Virtual Boy.
And yes, I'm aware that you can turn off the 3D. So maybe I shouldn't be complaining as much, but the hype around this thing is the 3D effects. Even the name -- 3DS -- emphasizes this, so basically I won't enjoy the main point of the handheld in the first place.
Not to mention -- the battery life. Oh boy, just 3-5 hours for 3D games? Part of the reason why Nintendo has been dominant was because the various Game Boy systems as well as the DS had good battery life. It wasn't something that you had to think about all the time. I mean, for example, the Game Boy crushed the Game Gear partly due to this. Now it's hardly portable if I have to keep recharging it at home all the time. Oh, and for regular DS games, it isn't much better -- 5-8 hours. No thanks, I'll stick to my DS Lite with 15-19 hours (more than double the life!).
And like the DSi, which I never saw the point of getting, there's no GBA slot. So I'd be paying more for a system that plays less games? Ugh. I'd rather keep my DS Lite and be able to play GBA games on it. I don't care if GBA games are "outdated", games don't suddenly get worse just because they're old.
Oh, and seeing in the past that Nintendo makes countless upgrades to their handhelds (Game Boy Pocket, Game Boy Advance SP, Game Boy Micro, DS Lite, DSi, DSi XL), I'm guaranteeing that Nintendo will make an upgrade to the 3DS too. So even if I want a 3DS, it makes more sense to just wait for a redesign instead of getting two handheld systems and have the first one collect dust. I got the original DS in December 2005 (yes I'm aware that I got it a year late, because initially there were no games that interested me), then was disgusted to hear that the then-new DS Lite was going to come out. I wouldn't have even gotten the DS Lite, but my original DS started screwing up (the L/R buttons both stopped working), so I just got a new one. I like the bigger stylus anyway. So I'm not going to let that happen to me again...
Oh wait, it has a camera? Well, I already have a standalone one. It takes high quality pictures, has 20x zoom, and can take them in high resolution. I doubt the 3DS will have any of that, except that it's a 3D camera, but that isn't enough to change my mind.
Oh wait, it has a Virtual Console for the Game Boy? Well, the Game Boy was my first handheld ever, so naturally I have a lot of games for it. There's little point to getting them again, barring battery issues (which actually makes a difference, since a lot of Nintendo R&D1 games that I have, like various Wario and Metroid games, have erased on me several times for no reason, so the VC would actually be useful here in that respect). But other than the battery thing, it's not too special to me.
So... do I have any reason to get this thing? As of now, no. Unless there's some game awesome enough to interest me (not that this will happen anytime soon, as Pokémon Black & White are compatible for the regular DS), I won't get it. Sorry. I know that a lot of people will disagree with me, but I personally don't care, and nobody can force me to get a 3DS.
Although the red one looks way cool, because red is my favorite color.
|
OPCFW_CODE
|
Evidence shows that the entrance to Camelot was by way of a cobbled roadway, ten feet across, which passed through a timber-lined passage beneath a gate tower raised on posts and tied in with the rampart and sentry walkway on either side.
The only notable exception to this is the inclusion of the Saxons as Arthur's adversaries and the.
Tests prove Edward III constructed the table, probably in 1344, when he conceived the notion of an order of chivalry based on the knights of the Round Table, as depicted in the popular romances. An alternative theory, which has gained only limited acceptance among professional scholars, derives the name Arthur from Arcturus, the brightest star in the constellation Boötes, near Ursa Major or the Great Bear. The historian John Morris made the putative reign of Arthur the organising principle of his history of sub-Roman Britain and Ireland, The Age of Arthur (1973).
Although the Saxons finally conquered Britain, the Celts remained strong in Cornwall, Cumberland, and Wales.
Merlin's cave supposedly lies directly below the ruins, piercing the great cliff, cutting through to a rocky beach on the other side of the headland. Arthur flees and is raised in a brothel, knowing very little of his birthright.
Those who claim to have witnessed this fearsome sight talk of seeing lances that glow in the dark and hearing the spine-tingling baying of hounds.
These works were the Estoire del Saint Grail, the Estoire de Merlin, the Lancelot propre or Prose Lancelot (which made up half the entire Vulgate Cycle on its own), the Queste del Saint Graal and the Mort Artu, which combine to form the first coherent version of the entire Arthurian legend. A new code of ethics for 19th-century gentlemen was shaped around the ideals embodied in the "Arthur of romance".
According to the Life of Saint Gildas, written in the early 12th century by Caradoc of Llancarfan, Arthur is said to have killed Gildas's brother Hueil and to have rescued his wife from Glastonbury.
However, by AD 500, such titles had become vague and 'King' was the customary designation of Celtic leaders. Other early Welsh Arthurian texts include a poem found in the Black Book of Carmarthen, "Pa gur yv y porthaur?" Y Gododdin cannot be dated precisely: it describes 6th-century events and contains 9th- or 10th-century spelling, but the surviving copy is 13th-century.
Even the humorous tale of Tom Thumb, which had been the primary manifestation of Arthur's legend in the 18th century, was rewritten after the publication of Idylls.
Two shrines, a metalworkers' area, furnaces, smiths' tools, and finished weapons were also unearthed. Robbed of his birthright, Arthur comes up the hard way in the back alleys of the city.
Clive Owen's 'Arthur' was a little internalised and predictable.
Le Morte d'Arthur opens with Arthur conceived as the illegitimate son of Uther Pendragon (literally 'the Head Dragon'), King of Britain.
John Leland, an antiquarian during Henry VIII's reign, wrote that local people often referred to the remains of this fortified hill as 'Camalat--King Arthur's Palace'.
|
OPCFW_CODE
|
Find files with multiline strings matched
I have a folder with many files of such blocks of pattern:
115,55
,175:500
,123:400
,[blahblah]
,[blahblah]
...
,[blahblah]
200,*
,[blahblah]
,[blahblah]
,[blahblah]
...
Each block starts at a line starting with a number and ends before the next line starting with a number.
I need to find files containing "115,55" and ",123:400" in the same block. There could be any number of lines between the two like:
115,55
,[blahblah]
...
,[blahblah]
,123:400
Summary: find the names of files containing a line "115,55" that is followed by a line ",123:400" before hitting the next line starting with a number.
Note:This is a UDR (Usage Data Record) file if it may help.
Python, Perl, sed or awk would help.
Thanks in advance!
Do you just want to print the filenames or the entire block or both? Can you provide your expected output?
I just need to print the filenames containing the block.
perl -lne '/^115,55/ ... /^\d/ and /^,123:400/ or next;print $ARGV;close ARGV' *udr
(/^115,55/ .. /^,123:400/) =~ /E/ or next doesn't work because it will match across blocks. I believe I've created a fix for that though. However, can perl work on globs like *udr? I don't know how to do that properly, so could use your advice.
@Miller shell does the globbing and perl works only with @ARGV. Btw, your range match starts and stops on the same line.
That's why I used the ... range instead of .., so it would not stop and start on the same line.
I'm so used to working on strawberry perl, I sometimes forget how superior unix environments are. Dunno if there's a way to get cmd to interpret globs. hrm :/
@Miller perhaps perl -e "BEGIN{ @ARGV=glob pop} " *txt
@Miller, the script failed on my system.
"Unrecognized switch: -E (-h will show valid options)."
My system is SunOS and Perl version is: "This is perl, v5.8.4 built for sun4-solaris-64int"
@mpapec Thanks for letting me collaborate with you a bit. Tiz kinda a fun way to solve a problem. Btw, can still remove 7 more characters if one cares. Also, I had also thought of the BEGIN block solution, and didn't think it would work. However, just confirmed that it does.
@Miller never thought of using $1 when matched but undef, tnx. Although .. and something or next feels strange somehow.
It does feel a little strange to trust the left to right nature of logical operators to build an expanded shortcut op. Kind of makes me want to throw in some () or change it to a next unless, as I'd never do such a thing in anything but a one liner. However, I can see using the .. && $1 idiom in the future. Only just thought of it for this problem.
Just realized that the $1 was actually unnecessary. Reads more logically now. if in block && inner test.
@Miller perl -lne 'print($ARGV),close ARGV if /^115,55/ ... /^\d/ and /^,123:400/'
Nice one, I think that's as tight as we can make the code. Now, if we were good experts, we'd point out that the regexes should be $ bounded. And to avoid the edge case of two blocks in sequence (which MIGHT be possible since Yigit was reporting repeated matches to jaypal), we should add a negative lookahead assertion to the end of the range /^(?!115,55$)\d/.
Using awk:
awk '/^115,55/{f=1;next}!/^,/{f=0;next}/^,123:400/&&f{print FILENAME;nextfile}' /path/to/files/*
Oh, I have just discovered that it prints the filename for every hit on the file. Any way to break and continue with the next file as it finds one match? This would improve performance and shrink the output.
@Yiğit The perl solution only prints the filenames once.
@Yiğit Yes, check the updated solution. nextfile breaks out of the current file and moves on the next file.
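Since the asker listed Python as an acceptable tool, here is a hedged sketch of the same block-scoped search in plain Python. The markers "115,55" and ",123:400" and the *udr glob follow the examples above; line-by-line iteration keeps memory use constant even for large UDR files, and the function stops at the first matching block, mirroring awk's nextfile behavior.

```python
import glob

def file_matches(path, start="115,55", needle=",123:400"):
    """Return True if the file has a block starting with `start`
    that contains `needle`.

    A block begins at a line whose first character is a digit and
    ends just before the next such line (the UDR layout above).
    """
    in_block = False
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line and line[0].isdigit():
                # A new block header: only track the "115,55" block.
                in_block = (line == start)
            elif in_block and line == needle:
                # Found ",123:400" inside the tracked block.
                return True
    return False

if __name__ == "__main__":
    # Print only the filenames, once per file, like the one-liners.
    for name in glob.glob("*udr"):
        if file_matches(name):
            print(name)
```

Exact string comparison (`line == start`) sidesteps the anchoring pitfalls discussed above for the regex versions; swap in `line.startswith(...)` if the real records carry trailing fields.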
|
STACK_EXCHANGE
|
It is common to compare actual values and targets to know if we are doing well or not. With Power BI, you can accomplish that in several different ways.
For example, you can use conditional formatting in a table or a matrix to color the background or the font or add icons next to the data to quickly check if you are on target.
If we want to visualize this comparison more graphically, we could create a chart instead, having, for example, a combo chart with the actual value on the columns and the target on a line or even a simple line chart with multiple lines representing the value and the target.
But sometimes, the charts can look cluttered with multiple lines or columns, especially if you have more than one reference value to compare your data with.
An alternative way to make the comparison is by coloring the chart’s background based on the defined targets or reference values. To accomplish that, you may use Error bars.
First things first, let’s take a look at our example’s model:
It is a simple sales model, containing the Orders as the fact table and four dimension tables: Calendar, Product, Customer, and Location.
For this example, we have the measures Profit %, which calculates the profit over the Gross Sales Amount, and Profit Target %, which I just set to 15%.
Profit % = DIVIDE( [Profit amount], [Gross Sales amount] )
Profit target % = 0.15
Now that we know our model and main measures, the first step is creating the chart.
Let’s analyze the evolution of our product’s category and subcategory profit margin.
For that, we create a line chart with the Category and Subcategory of our product on the X-axis and add our Profit % to the Y-axis.
I will make it look more beautiful by removing the axis titles and the Y-axis and changing the title. Let’s also change the line’s color to black, add markers, and finally set the data labels to on. Now it looks a little better 🙂
The next step is to add the error bars to match our target. For that, we go to the Analytics area of the Visualization pane and enable the Error Bar option.
In the error bar section, we are asked to add a lower and an upper bound for the error bar. As we only have one target, we want to divide the chart into two sections: above and under target.
So imagine our lowest possible profit is 0% and our highest potential target is 100%. We need to create two measures containing these boundaries. I called these measures _Lower profit and _Higher profit and formatted both as percentages.
_Lower profit = 0
_Higher profit = 1
Now, we first want to create a region below the target, and for that, our lower bound is the _Lower profit, while the upper bound is the target itself (Profit target %).
When we add the boundaries, a bar from zero to the target will be created, which is not exactly what we want.
So to get rid of the bars and color the background instead, we need to disable the bars and enable the Error band.
When we do that, the bars are automatically replaced by a grey band because it matches the series color, but we can format it and color it with any other color. Let’s paint it red, for example:
You could also play around with the transparency and add markers to the error band. But we are good without it.
Now, we want to color the above target part of the chart.
That part is not as straightforward as we would like. That is because we can only add one error bar per measure used in the chart.
That said, we can work around that by creating a dummy measure that has precisely the same value as the profit. I called this measure _Profit % and added it to the chart.
_Profit % = [Profit %]
Now we add the dummy measure to the chart Y-axis.
Adding the second measure to the chart will create a legend and overlap the existing line with another line with the same values. We can format the chart to disable the legend and remove the line, the marker, and the label for the dummy measure.
Now, we need to configure the error bars to the dummy measure, similar to what we did for the actual Profit %. The difference is that we need to specify the _Profit % series on the Apply setting to option.
As we now want the upper band, the error bar goes from the profit up to the highest profit, which is 100%. So for that, we will add the Profit % as the lower bound and the _Higher profit as the upper bound.
Next, we will disable the Bar, enable the Error band, and change its color to green:
Good, now the background is colored and matches the target!
Another thing that may be interesting here is to color the marker green when the data point is above the target and red otherwise. First, add a new measure that checks whether the profit is under or over the target and returns the color for each case. I called the measure _Profit color.
_Profit color = IF( [Profit %] >= [Profit target %], "#C3ECBA", "#FB5D49" )
Notice that there are no conditional formatting options on the marker color. To achieve that, we will need another workaround.
We can copy and paste the chart we created, change it to a column chart, and delete the dummy measure from it.
In the Format options, go to Columns and select the conditional formatting option to the color. Then, we choose the Format style as the Field value and place our _Profit color measure in the field option.
Now with the formatted bar chart, we can select it, click on format painter on our ribbon and click on our line chart with the colorful background. It should work like a charm.
The last thing to do is to disable the tooltip option so it doesn’t show the error bands values when we hover over the chart.
And now we have our final chart, with the background and the markers colored according to the target.
If you want to know more about our services and how we can help grow your business, click here to learn more about us. Also, don’t miss the many tips for Power BI and other tools we regularly share on our YouTube channel.
Finally, stay tuned for DevScope’s free events, such as “Dashboard in a Day”, to further expand your knowledge and experience with Power BI.
|
OPCFW_CODE
|
Import containers when accessing dependencies via import_module
I found this bug when working on icelab/alpinist#2 – while the app would work properly when run via the web server, my unit specs were failing because the "core" dependencies (i.e. the ones imported into the Main::Container from Alpinist::Container) were not being loaded properly. This was because the unit specs were (rightly) not finalizing Main::Container, in order to test things in isolation and to avoid spuriously loading the whole application environment.
To address this, I've made it so that any imported containers are imported when auto-imported dependencies are accessed from a container's .import_module.
(I'm not 100% sure if this is the approach you'd like to take, @solnic, so I was going to push this up into a branch directly on the dryrb repo so you could potentially make adjustments, but dry-container isn't included in the list of repos I can write to, so I've put it up on a personal fork instead. Let me know if you'd like me to push this anywhere else).
There's one tweak I thought about - skip imported containers' finalization when deps are being dynamically resolved w/o complete finalization. Otherwise loading a component might cause other things to be booted etc.
Thanks for the feedback! I'll go about separating the loading and finalizing of the imported containers, and improving overall naming.
So I've just tweaked the method names and added support for not finalizing the imported containers if we're just resolving individual deps. I haven't committed yet (I'll explain in a moment), but for reference, the code looks like this:
def self.load_imported_containers(finalize: false)
imports.each { |ns, container| load_imported_container(ns, container, finalize: finalize) }
end
def self.load_imported_container(ns, container, finalize: false)
return if imported?(ns)
container.finalize! if finalize
items = container._container.each_with_object({}) { |(key, item), res|
res[[ns, key].join(config.namespace_separator)] = item
}
_container.update(items)
imported[ns] = true
end
Code-wise, I thought the named finalize kwarg made it clear at the call-site what we'd be expecting the method to do re: finalizing or not, and it seemed like it'd be a worse solution to have a separate methods for loading vs loading+finalizing.
Anyway, here's the problem I hit: in the unit test (which hasn't changed) my imported container has auto-registration enabled, and the files to be loaded via auto-registration don't get loaded until that container is actually finalized. So when I have this inside .import_module...
load_imported_containers(finalize: false)
...it doesn't actually make my auto-registered deps available from that imported module, which means the test fails.
This makes me think that we might actually be better served by a different approach.
Perhaps instead we have .require_component do this:
Try to require the component from the current container (i.e. the current behaviour)
If this finds nothing (i.e. before we fail ArgumentError atm), loop through any yet-to-be-imported containers for the component (taking away the imported container's local name prefix) and try and .require_component the dep there.
This way we could load the dep from the other container and avoid finalizing it, which is our preferred behaviour, and we can get rid of the "brute-force" bulk import of all the containers in .import_module. Seems like it'd be cleaner and more direct.
Does that seems like a reasonable approach? If you think you'd be happy with this, I'll put it together for you.
@timriley this sounds good, in fact that's what I wanted to do initially but then, being tired and sleepy (or maybe just hungry?), I ended up doing it differently. But what you describe makes sense. I struggled with the test suite (the loaded_features issue) so I rushed it too quickly, I suppose :/
Closing this since #3 is implementing the alternative approach I suggested in the final comment (I hope to loop back to #3 and get it merged at some point soon).
|
GITHUB_ARCHIVE
|
This is the best they've come out with so far from the search engine, to the image to the commercials on TV to promote it. They blow those old Ask.com commercials out of the water.
The message from MSFT for this seems to be you can use this engine to DO x,y,z. Google is a habit at this point which will be very difficult to overcome, Yahoo! stands for what?
The message, anyway, is action or task oriented, not just a Yahoo! yodel or "Google it." I think it will do much more damage to Yahoo!, and given that MSFT was ready to fork over so many billions for Yahoo! not so long ago, it makes one wonder how long this thing has been in serious development. If they really thought this would fly, or it has been in development for a while, then why the serious offers to buy Yahoo! for so much $$$?
Google might have to step up the promotional efforts in traditional media to fend this off. Thing is, what message in traditional media can either Yahoo! or Google convey to say why you should use them instead of the new MSFT engine?
Does anyone know why they are displaying the information in the pop-out (next to the serp listing) the way they are displaying it? Any idea how to edit it?
Some interesting Bing stuff:
|<meta name="robots" content="nopreview"> |
Apparently the 'adCenter' is something worth looking at - (which I will later.)
I am pleased to say that Bing must be catching on. I am seeing an increase in referrals vs. Live. Results look real good compared to Google right now.
I can confirm, as others did here, that referrals increased slightly; I'm referring to several sites we manage.
Referrals from image searches increased a lot.
It's not only a "redesign": search result quality is more similar to Google's quality as well.
Definitely a good improvement!
Little news today confirms the reason behind the increased referrals
Bing has for now overtaken Yahoo from this "one" source.
I use StatCounter and confirmed that Bing in my stats (7 years) has for the first time ever overtaken Yahoo.
It is interesting to see the history of "bing.com" domain; hopefully this time the domain will be owned by the same entity for the years to come :) - [web.archive.org...]
Btw. I do think that "bing" is more marketable domain; sounds and types better/easier than "google". Good move on MS.
No sign of any advertising or promotional campaigns for Bing in my area of the country. Microsoft will have to spend major money to take market share from Google.
any word on this?
As reported earlier on the p--- issue, I felt that this was going to be a big problem for MSN.
Some news today on it.
The novelty will wear off.
The novelty will wear off.
This is exactly what I was thinking, but my stats aren't reporting that; quite the contrary, actually.
I like bing. I'm seeing a lot of traffic from bing. Bingo! (duh)
But there does seem to be a lack of bottom, but I suspect, given the more recent aggressive behavior of msnbot(x) these days that might soon change. Y! has dropped to third in recent hours...days... and it looks like that trend might continue. Not ruling Y! out, or their multi-years database/history. MS, however, seems a little more serious this time around...might say they were a little ticked about a past deal falling through. :)
Dammit, the traffic doesn't count until Google adds it as a "search engine" rather than a referral in Analytics... what's wrong with them? 10 days now, it's starting to look like it's deliberate LOL
I must say they have done a fantastic job with Bing image search: good results, a great way to present the images and, most important, your images are not filtered like on Google. Here you really get all the images they have, and with the filter on there are no glitches letting nude images through. It's perfect.
As a webmaster for 11 years running, I am just happy that somebody - Bing - is at least trying to usurp the monopoly that is Google. Even if Bing makes incremental gains in searches/referrals, it's better than what existed before.
My initial impressions are good: my site has #1 ranking for my keywords, plus better sub-links (more focused) than on Google and I really like the ad campaign so far.
As an Apple guy, having never had much love for MSFT, I now feel compelled to dance with the devil. Bing it!
Is it just me, or have the search results also got better? I would even say very close to Google quality.
I have made even more searches now in all topics, and I must say I'm impressed. Personally I'm not placed as well as on Google, but close.
I see nice quality results for my bag of searches--not necessarily my sites well ranked. The results seem to be improving.
As far as referrals go, I've seen not only continuous flow of traffic but nice conversions ratios.
Bing traffic is converting much better for my site than Google. It also likes my site a lot, both in regular search and image.
I like it, it looks nice and modern and clean.
It makes Google look old fashioned.
As for search results, I don't think they are as complete as Google.
BUT I DO like the fact that from here in New Zealand (or anywhere I suppose) I can click on Advanced search and then Country and pretend I am searching from another country. That is great for me and my company.
I don't think you can do that in Google.
Does that mean we can safely block Live referrer spam with anything coming from live.com? This stuff sure got old fast, and is still going on.
I really think with the normal results and image results Bing could be a good competition now for Google.
I have been using Blind Search, which gives you Google, Bing and Yahoo results side by side, and only reveals which is which after you pick which you prefer.
I mostly prefer Google, but I do pick Bing sometimes. The surprise is that I pick Yahoo nearly as often as Google - my initial impression (from using it directly) was that Bing was better than Yahoo.
Google remains the best at long phrases and complex or ambiguous searches.
I also noticed that Yahoo and Bing love Wikipedia even more than Google does.
I only get 0.01% of search engine traffic from Bing. I'm not sure if it's my sites' ranking or lack of traffic from Bing's side.
Regardless, I don't think they can compete with Google.
The search nude may return sexually explicit content.
To get results, change your search terms.
Nah... Most would like google more.
|
OPCFW_CODE
|
Are you an aspiring computer games developer with games development knowledge, and a passion for all things gaming in general? Are you looking for a unique opportunity to work part-time and be paid to teach young children how to create their own computer games?
We are based in South-East London and specialise in teaching children how to write computer code through creating their own computer games. We are looking for a passionate games developer to help our young students learn to code!
If all of the following applies to you:
- Are an aspiring games developer (e.g., pursuing a university degree in computer games development)
- Have solid knowledge of various game creation tools and platforms
- Are actively creating your own games
- Are looking for a unique opportunity to be paid to teach children how to create their own games
- Are able to work weekend mornings (Saturday or Sunday);
Please get in touch, making sure to include your CV in your application.
What we offer
Working for Spark4Kids, you will be:
- Working in a fun, vibrant environment where playing games (testing the children's creations) is part of your job
- Building up experience for your future career
- Able to work hours that suit you/your study
- Paid to do something you already enjoy doing - be part of designing/creating games, and helping children design and create their own!
- GCSE/A Level/Undergraduate
- Work experience not required, however technical knowledge is (you will be tested)
Professional experience is not a pre-requisite; however, you will need to demonstrate (during interview and/or with actual games you have created or published) a deep knowledge and understanding of games development.
Of interest are as many of the following game development skills as possible:
- Stencyl, Game Maker Studio, GameSalad, Unity3D, Unreal Engine, Construct 2 etc.
- ability to demonstrate coding ability (with games in development, published games etc.)
- general coding ability, knowledge of languages such as C# or Java and other languages would be a bonus
- eager to learn new skills, and to do so quickly
Inter-personal Skills
- patient with a pleasant demeanour, dependable, approachable and polite
- punctual, good at time-keeping and time management
- enjoys teaching and interacting with young children, liaising with parents
Tutoring Skills
- happy to participate in developing courseware (in conjunction with other Spark4Kids staff)
- goal-setting, guidance and technical support
- providing help to and mentoring children
Please note - this role is based in West Dulwich, south-east London (10 minutes by train from London Victoria). You will be required to take a technical test as part of the interview process. Applicants will be DBS-checked prior to final job offer being made.
- Education Level
- Secondary School
- SOUTH-EAST LONDON
- Working hours per week
- 2 - 4
- Type of Contract
- Casual / Part Time Jobs, Summer / Holiday Jobs
- Salary indication
Between £7.00 and £12.00 per hour
- Responsible for
- Tutoring children in games design and coding
- Type of Job
- ICT, Teaching / Instructors / Guides
- Full UK/EU driving license preferred
- Car Preferred
- Must be eligible to work in the EU
- Cover Letter Required
|
OPCFW_CODE
|
I'm developing real time 3D graphics and video software to run on tablets and low power ultra book type devices.
When decoding h.264 video and also running pixel shaders that perform a fair amount of work (for the platform), I see the video codec struggles to keep up.
My question is, how does the video decoding on a device like the Microsoft Surface 3 (64bit Atom + Intel HD Graphics) share CPU and GPU resources?
I'm trying to understand performance expectations and implications.
Your issue could be the result of power constraints: on low-power devices the CPU and GPU have less power to work with, and thus lower performance in demanding situations. If your pixel shaders are doing a lot of work, then the GPU is using a lot of the available power, leaving little for the video codec to do its work. Resource sharing is dynamic; it depends on the workload.
You should download Intel INDE: https://software.intel.com/intel-inde
It is a comprehensive set of tools that would allow you to analyze the workload and see where the bottleneck is. Using GPA you can get a system trace that shows the work being done on the CPU and GPU, and a frame capture will allow you to dig into the shaders and the work they are doing.
Thanks for responding Michael.
I'll try out GPA and see what it can tell me. The systems I'm developing for are Windows based; however, some have Core M and Atom CPUs, so INDE as a whole is probably less useful.
I was hoping to find out how much video decoding is done using fixed function hardware and how much CPU and GPU resources are leveraged to perform this process on different Intel architectures.
Hard to say, media is not my area. This forum is for game developers working with Intel graphics (which is my area). It will depend on the workload, the video codec in use, etc. So a lot of factors here to consider. You can check our developer guides:
Media SDK developer guide: https://software.intel.com/sites/default/files/Intel_Media_Developers_Guide_0.pdf
Intel graphics developer guides: https://software.intel.com/en-us/articles/intel-graphics-developers-guides
I will also talk to a couple of engineers who work on media and see if there is any guidance.
Thanks for the information Mike. I will read those guides. I also appreciate you asking your colleagues.
While this project is not a game, it is a real-time interactive 3D graphical application using video game technology. It also happens to encode and decode video from disk and network sources. It is quite impressive what Intel graphics, core-m and atom processors can achieve on these low power devices. I hope I'm not pushing them too far.
|
OPCFW_CODE
|
1) What is the correct way to create a new class/ new animation without overwriting an existing one?
Isn't it because you haven't changed the battle animation pointer?
When a new entry is added, the newly allocated part is filled with the first character's data.
This is because, if it were set to null, the end of the data could not be determined.
Therefore, the first entry's data is used as placeholder data.
At this point, the value of the pointer remains as it was.
You need to change that pointer.
You need to reserve an appropriate area in free space (usually the end of the file) and write the address there.
For things that are commonly extended, such as events, the tool supports allocating new areas, but it does not for class extension.
(Class extensions are rarely done.)
As of now, if you extend the class, you will need to allocate and assign a new battle animation area yourself.
However, as this is quite troublesome, the next version might include a support tool that makes the setup a bit easier...
2) How do you change which units can seize the thrones? Ticking the ‘Can Seize’ check boxes under the class/ character tabs doesn’t do anything. Is this something that must be addressed through the events?
It seems that which classes can seize is decided in the source code.
This flag seems to have been meaningful in the old FE games, but it appears to be meaningless in FE8.
Like the transporter flag, it may be a remnant of the past.
I tried a little analysis.
There is an asm function in the effect pointer of the menu.
Since it is an asm function, the function at the stored value minus 1 is executed,
and when it returns r0 == 1, the menu item is enabled.
The function for seize is as follows.
A subroutine called from within these seems to check whether the unit standing there is the lord.
Main character judgment function
Finally, it is compared with the following cmp:
r0 == unit ID of the acting character
r2 == unit ID of a unit that can seize
FE8J 08037c36 4290 cmp r0, r2
FE8U 08037B9E 9042 CMP r0 ,r2
What should change is the place where 0x01 or 0x0F is assigned to r2.
If only one character needs to be able to seize, I think you can do it by changing this parameter loaded into r2 in this routine.
For multiple characters, you would have to write asm.
...It is troublesome in various ways, so I think it is better not to change it.
I think it is easiest to just reuse units 0x01 and 0x0F.
|
OPCFW_CODE
|
Group Members: Daniel Mehdi, Ryan Ostrander, Justin Hein, Mahkambet Buzurmankulov
We initially had the idea of data-mining Facebook to see what a person liked, and then using that data to do something. We then thought of determining how "mainstream" a person was, and from that came up with the idea of a pun on "mainstream": have an actual stream, filled with fish. This eventually evolved into having a fish tank in which each fish had its own level of "mainstreamness." The viewer could then go fishing by using movies, songs, etc. as lures. Each lure would, like the fish, have its own mainstream value, and fish would only be attracted to lures with a value similar to theirs. Choosing the lure would therefore be a game in itself; for example, one would have to determine the type of movie most likely to catch a hipster fish.
We began by taking an open source algorithm for fish graphics (created by Nicolas Tang). This displayed rather pretty fish animations, but the fish were only programmed to swim around randomly and to chase bits of food the user could add via mouse clicks. To get them to behave as we wanted, we had to program them to detect edges (so that they would stay on screen), to follow the position of a lure onscreen, to only chase a lure if it had a similar mainstream value, and to be attracted to or scared away from a lure based on how much that lure was moving. Although the algorithms for this behavior wound up being quite complicated, it was possible to do all this using only simple position vectors and angles. We also had to have some way of telling one fish from another. To do this, we set up a database from which a user could pick two colors for their fish. These colors would then be sent to our fishtank program, along with the user's mainstream value, enabling us to create a new fish.
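The lure-chasing rule described above (chase only lures with a similar mainstream value; flee lures that move too much) can be sketched in a few lines. The original was written in Processing; this is a hypothetical Ruby sketch of the same logic, with the threshold values invented:

```ruby
# Hypothetical sketch of the fish/lure attraction rule described above.
# Thresholds and field names are made up for illustration.
Fish = Struct.new(:x, :y, :mainstream)
Lure = Struct.new(:x, :y, :mainstream, :speed)

MATCH_THRESHOLD = 0.2 # how close the mainstream values must be
SCARE_SPEED     = 5.0 # lures moving faster than this scare fish away

# Returns a unit step vector [dx, dy]: toward a matching slow lure,
# away from a fast-moving one, or [0, 0] when the fish ignores the lure.
def steer(fish, lure)
  return [0.0, 0.0] if (fish.mainstream - lure.mainstream).abs > MATCH_THRESHOLD

  angle = Math.atan2(lure.y - fish.y, lure.x - fish.x)
  sign  = lure.speed > SCARE_SPEED ? -1.0 : 1.0 # flee vs chase
  [sign * Math.cos(angle), sign * Math.sin(angle)]
end
```

Everything reduces to one position angle plus two scalar comparisons, which is why simple vectors and angles were enough.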
When a fish was caught, we used networking to tell another program to add that fish to a bucket, and to display the information associated with that fish. The bucket program also contained a “release” button, which allowed the caught fish (which were being stored offscreen) to swim back into the viewable area.
We then use ajax to send the user's Facebook uid, their name, mainstream value, and the fish color they've selected on the login page to a database which can be queried by the Processing application. The Processing application sends the server the id of the last fish it received, so the server can send back only new fish that have been added since that id was seen. The aquarium application sends this request for new fish every second or so, concurrently with drawing the fish, so there is little delay between the user's Facebook submission and the fish being added.
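The id-based polling described above can be sketched as follows. This is a hypothetical Ruby stand-in for the real ajax/Processing setup; the class and field names are made up:

```ruby
# Hypothetical sketch of the "send me everything after the last id I saw"
# sync described above.
class FishServer
  def initialize
    @fish = []      # each entry: { id:, name:, mainstream:, color: }
    @next_id = 0
  end

  # Registering a fish assigns it a monotonically increasing id.
  def add_fish(attrs)
    @next_id += 1
    @fish << attrs.merge(id: @next_id)
    @next_id
  end

  # The aquarium polls with the last id it has seen and gets back only
  # fish added since then, so repeated polls stay cheap.
  def fish_after(last_seen_id)
    @fish.select { |f| f[:id] > last_seen_id }
  end
end
```

Because ids only grow, the client never needs to diff the whole fish list; it just remembers one integer between polls.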
When the user is logged in, a tacklebox comes up where you can select your likes or the likes of anyone who has logged in before you. This selection acts as your lure, attracting fish with a mainstream value similar to the lure's. The aquarium application uses the same method it uses to get new fish to pick up a newly attached lure.
Processing's network library is being used for communication between the bucket application and the aquarium. The bucket is the server, and the aquarium sends the bucket the Facebook information about a caught fish when it is dragged to the edge of the tank. The bucket application also has a "Release" button that releases all the fish from the bucket back into the aquarium by simply sending the string "releasefish" through the network.
In the end we were very satisfied with our results. The physical setup, with the television monitor placed under the plexiglass aquarium, made for a very beautiful aesthetic and helped create a more visceral fishing experience for the user. Our calculations of "mainstream" were generally accurate, but not always, so that aspect of the project could be improved. Overall we were happy with the final exhibition.
|
OPCFW_CODE
|
Other product variants may be available; please contact us or request a call back if you cannot see what you are looking for.
NVivo 11 Starter for Windows is simpler software for qualitative research, so you can spend your time finding insights, not learning software. Whether you're new to qualitative research or you need to analyze and understand text like interview transcripts, documents or articles, you'll be up and running quickly with NVivo Starter.
- Import and analyze text based data.
- Work with data in virtually any language.
- Organize information using theme, case and in-vivo coding.
- Review your coding with coding stripes and highlighting.
- Keep track of your thoughts and ideas with memos and annotations.
- Ask questions of your data using text search, word frequency and coding queries.
- Visualize your data with word clouds, word trees, explore and comparison diagrams.
- Import articles from reference management software like EndNote, Zotero, Refworks and Mendeley.
- Import from note-taking software OneNote and Evernote.
- Export and share items to easily share your data, analysis and findings.
- Connect to NVivo for Teams for real time collaboration and secure teamwork.
- Work with a user interface in English, Chinese, German, French, Japanese, Spanish and Portuguese.
- Qualitative Analysis Software - Convert and work with projects created in software by QSR, Atlas.ti, MaxQDA and Framework.
- Reference Management Software - Import bibliographical data, notes and article attachments from reference management software: EndNote, Mendeley, RefWorks, Zotero.
- Statistical Analysis Software - Import and export delimited text files and spreadsheets to and from applications like Microsoft Excel, Microsoft Access and IBM SPSS Statistics.
- Note-taking Software - Export notes directly from Evernote and OneNote and bring them into NVivo with the same structure set up in your note-taking software. Collect notes on-the-go using devices like tablets and phones, and then easily import data directly into NVivo via API.
- Generic Formats - Collect, import and export data from web browsers, Microsoft Office, Microsoft Excel and text files in HTML, XML, XLS, XLSX and TXT formats.
- Organization - Create a paperless filing system allowing you to easily search, sort and access project items using Folders and Sets.
- Reliability - Store your project data and material in a single file, making your project completely portable.
- Security - Enhance security by protecting access to projects with User Profiles, User Passwords, User Permissions and Encoded Storage.
- Scalability - Work with larger amounts of data in a single project (up to 10GB) or remove limits with NVivo for Teams.
- Traceability - Keep track of what team members are doing by recording changes to a project with an audit log of user actions.
- Recoverability - Explore your data with confidence knowing you can retrace your steps with multi-level undo and automatic back up and recovery of your data. Includes multiple levels of Undo, Project Repair, Project Restore, Automatic Backup (with NVivo for Teams).
- Coding - Categorize and classify data by theme or topic and analyze how items are connected using In-Vivo Coding, Thematic Hierarchical Coding.
- Case Coding - Gather references to People, Places, Organizations and other entities and categorize and classify data to analyze the who, what and where questions.
- Classifications - Case Classifications with Attributes, Assign Colors to Project Items
- Annotations - Create editable notes to comment on selected content
- Memos - Record and store your insights, observations and interpretations and link them to the material you are analyzing using Memos for Project, Sources and Nodes.
- Links - Use Hyperlinks to link to web pages and files outside of your project.
- Text Search Query - Querying Words or Phrases
- Coding Query - Explore and ask questions about your coding to find overlaps and intersections using AND, OR operators.
- Word Frequency Query - See a list of the words that appear most often within your materials by querying frequently occurring words.
- Find - Locate project items by Find by Name and Find in Content.
- Coding Stripes - Use colored stripes to view and compare coding or demographic information in your data. View and print Coding Stripes for Nodes, Attributes and Users.
- Charts - Create and explore Charts including Column, Pie and Bar charts. Range of Customizable Charts for Project Items and their Associations.
- Word Clouds - A customizable visual representation of Word Frequency Queries that displays the most frequently appearing words in selected materials or nodes.
- Word Trees - See the most frequently appearing words in selected materials and nodes, and explore the context surrounding the words.
- Explore Diagrams - Visually explore project data through a dynamic diagram that shows connections between a central project item and its related project content. Step through the project items to reveal further connections.
- Comparison Diagrams - Visually compare two Sources, Nodes or Cases to see what they have in common and where they differ.
- Spelling Dictionary - Check spelling as you edit new and existing sources in NVivo in English (US, UK), French, German, Japanese, Portuguese (Brazil), Simplified Chinese and Spanish (Mexico).
- Data Language - Work with data in virtually any language including character based languages such as Japanese and Mandarin
- Query Dictionary Language - Run Text Search and Word Frequency Queries in 7 languages: Chinese, English (US, UK), French, German, Japanese, Portuguese and Spanish.
- User Interface Language - Work with a user interface in English, French, German, Japanese, Portuguese (Brazil), Simplified Chinese and Spanish (Mexico)
- Multi-User Projects - Allow team members to work in the same project at the same time and view each other's changes immediately with NVivo for Teams.
- Getting Started Guide - Get up and running fast with an introductory guide to learning fundamental tasks.
- Online Help - Online Help that provides step-by-step instructions for working with every feature of NVivo.
- Online Tutorials - Step-by-step animated online video tutorials that demonstrate how to use NVivo.
- Sample Project - Provides a 'real-life' sample project that can be used as an example to explore how to organize data, and experiment with queries, visualizations and other analysis tools.
- System Administrator Help - In-depth technical resources for System Administrators.
- Community Resources - Learn and communicate with peers and our Customer Support team in online social communities: User Forum, Facebook, LinkedIn User Group, Twitter, YouTube, Blog.
- Update Notifications - Receive automatic notifications when new updates are available to download and install
Minimum system requirements:
- Processor - 1.2 GHz single-core processor (32-bit) / 1.4 GHz single-core processor (64-bit)
- Memory - 2 GB RAM or more
- Display - 1024 x 768 screen resolution
- Operating system - Microsoft Windows 7
- Hard disk - Approximately 5 GB of available hard-disk space (additional hard-disk space may be required for NVivo project data)
Recommended system requirements:
- Processor - 2.0 GHz dual-core processor or faster
- Memory - 4 GB RAM or more
- Display - 1680 x 1050 screen resolution or higher
- Operating system - Microsoft Windows 7 or later
- Hard disk - Approximately 8 GB of available hard-disk space (additional hard-disk space may be required for NVivo project data)
- Browser - Internet Explorer 11 (or later) or Google Chrome 44 (or later)
- Other - Internet connection
NVivo 11 for Windows is designed to operate natively on Microsoft Windows. If you're running the software on a virtual platform on a Mac, these system requirements may not apply.
|
OPCFW_CODE
|
package org.usfirst.frc.team5830.robot.commands;
import org.usfirst.frc.team5830.robot.Robot;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.buttons.JoystickButton;
import edu.wpi.first.wpilibj.command.InstantCommand;
/**
*
* @author Hunter P.
* DEPRECATED! To avoid nullPointerExceptions, this code was moved directly into teleopInit. Still have no idea why that worked.
* Gets the status of the SmartDashboard chooser for Joystick Input,
* then creates and maps joystick buttons. Axes definition in Robot.teleopPeriodic
*/
public class JoystickMappingInit extends InstantCommand {
public JoystickMappingInit() {}
protected void execute() {
//There is no isFinished defined because this is an InstantCommand.
//An InstantCommand is just shorthand for returning true in isFinished, meaning execute will only run once.
//Initiates command to call buttons according to the option selected on SmartDashboard (Command name = ChooseButtonLayout)
switch (Robot.controlType.getSelected()) {
case 0: //General Flightsticks (Default)
case 3: //Hannah (same layout as the default flightsticks)
case 4: //Hunter (same layout as the default flightsticks)
Robot.leftJoy = new Joystick(0);
Robot.rightJoy = new Joystick(1);
Robot.button1 = new JoystickButton(Robot.leftJoy, 1); //Trigger
Robot.button2 = new JoystickButton(Robot.rightJoy, 1); //Trigger
Robot.buttonCubeToScale = new JoystickButton(Robot.rightJoy,6); //(Top) Upper Right
Robot.buttonCubeToSwitch = new JoystickButton(Robot.rightJoy,5); //(Top) Upper Left
Robot.buttonCubeToGround1 = new JoystickButton(Robot.rightJoy,3); //(Top) Lower Left
Robot.buttonCubeToGround2 = new JoystickButton(Robot.rightJoy,4); //(Top) Lower Right
Robot.buttonWinchRelease = new JoystickButton(Robot.rightJoy,11);
Robot.buttonCubeToGround1.whenPressed(new CubeToGround());
Robot.buttonCubeToSwitch.whenPressed(new CubeToSwitch());
Robot.buttonCubeToScale.whenPressed(new CubeToScale());
Robot.buttonWinchRelease.whenPressed(new WinchRelease());
break;
case 1: //General Xbox
Robot.xbox = new Joystick(2);
Robot.buttonPortalL = new JoystickButton(Robot.xbox,5); //LB
Robot.buttonPortalR = new JoystickButton(Robot.xbox,6); //RB
Robot.buttonCubeToScale = new JoystickButton(Robot.xbox,4); //Y
Robot.buttonCubeToSwitch = new JoystickButton(Robot.xbox,2); //B
Robot.buttonCubeToGround1 = new JoystickButton(Robot.xbox,1); //A
Robot.buttonWinchRelease = new JoystickButton(Robot.xbox,3); //X
Robot.buttonPortalL.whenPressed(new CubeToPortalL());
Robot.buttonPortalR.whenPressed(new CubeToPortalR());
Robot.buttonCubeToGround1.whenPressed(new CubeToGround());
Robot.buttonCubeToSwitch.whenPressed(new CubeToSwitch());
Robot.buttonCubeToScale.whenPressed(new CubeToScale());
Robot.buttonWinchRelease.whenPressed(new WinchRelease());
break;
case 2: //Daniel (Xbox layout, portal bindings disabled)
Robot.xbox = new Joystick(2);
Robot.buttonPortalL = new JoystickButton(Robot.xbox,5); //LB
Robot.buttonPortalR = new JoystickButton(Robot.xbox,6); //RB
Robot.buttonCubeToScale = new JoystickButton(Robot.xbox,4); //Y
Robot.buttonCubeToSwitch = new JoystickButton(Robot.xbox,2); //B
Robot.buttonCubeToGround1 = new JoystickButton(Robot.xbox,1); //A
Robot.buttonWinchRelease = new JoystickButton(Robot.xbox,3); //X
//Robot.buttonPortalL.whenPressed(new CubeToPortalL());
//Robot.buttonPortalR.whenPressed(new CubeToPortalR());
Robot.buttonCubeToGround1.whenPressed(new CubeToGround());
Robot.buttonCubeToSwitch.whenPressed(new CubeToSwitch());
Robot.buttonCubeToScale.whenPressed(new CubeToScale());
Robot.buttonWinchRelease.whenPressed(new WinchRelease());
break;
case 5:
Robot.leftJoy = new Joystick(0);
Robot.rightJoy = new Joystick(1);
Robot.button1 = new JoystickButton(Robot.rightJoy, 2); //Thumb button?
Robot.button2 = new JoystickButton(Robot.rightJoy, 1); //Trigger
Robot.buttonCubeToScale = new JoystickButton(Robot.rightJoy,6); //(Top) Upper Right
Robot.buttonCubeToSwitch = new JoystickButton(Robot.rightJoy,5); //(Top) Upper Left
Robot.buttonCubeToGround1 = new JoystickButton(Robot.rightJoy,3); //(Top) Lower Left
Robot.buttonCubeToGround2 = new JoystickButton(Robot.rightJoy,4); //(Top) Lower Right
Robot.buttonWinchRelease = new JoystickButton(Robot.rightJoy,11);
Robot.buttonCubeToGround1.whenPressed(new CubeToGround());
Robot.buttonCubeToSwitch.whenPressed(new CubeToSwitch());
Robot.buttonCubeToScale.whenPressed(new CubeToScale());
Robot.buttonWinchRelease.whenPressed(new WinchRelease());
break;
}
}
}
How do I change the color of the top bar in SharePoint?
You can change the header color of your SharePoint Online sites by using the following steps:
- Go to the Office 365 admin center.
- Go to Settings > Org. Settings.
- Select the Organization Profile tab.
- Select Custom themes.
- Under Navigation bar color settings, select your preferred background color.
How do I change the Quick Launch bar in SharePoint 2010?
- SharePoint 2010.
- Quick Launch: Editing the Left Navigation.
- Add New Heading: Click New Heading. Type the URL and a description for the heading, and then click OK.
- Add New Navigation Link: Click New Navigation Link. Type the URL and a description for the link.
- Change Order: Click Change Order.
How do I change the background in SharePoint?
Change the Wallpaper on SharePoint:
- Click the COG (top right corner of the screen)
- Click SITE SETTINGS.
- Click CHANGE THE LOOK (under LOOK AND FEEL)
- Click the first theme on the page (named CURRENT)
- Drag a graphic into the box in the top left corner of the screen.
- Click TRY IT OUT (top right)
- Click YES, KEEP IT.
How do I change the top banner in SharePoint?
You can change the theme and header from Settings in the top right corner of your SharePoint site. To change your navigation style:
- On your site, click Settings. and then click Change the look > Navigation.
- Select one of the following options:
- Click Apply to save your changes.
What is the SharePoint Quick Launch bar?
What is the Quick Launch? The Quick Launch menu is displayed on the homepage of a SharePoint site and contains links to featured lists and libraries on the site, sub-sites of the current site, and People and Groups. You can even add links to pages outside of your SharePoint site.
Can you change the background color on SharePoint?
To change the background, click Change and browse your computer or SharePoint site for the image you want to use. Or, to remove the background image, click Remove. To change the colors used in the design, click the color menu, scroll through the color schemes, and select the one you want to use.
How to change the color of SharePoint Online Bar?
To change the blue color of the SharePoint Online bar, the simplest way is to change the look of the site. Or you can make a composed look. Note: this will change the look of the site instead of just the color of the SharePoint Online bar. Please remember to mark the replies as answers if they help.
How do I change the name on the top link bar?
To change the name that appears on the top link bar, you must edit the top link bar. When you create a subsite, it appears by default on the top link bar of the parent site and has a unique top link bar. You can change these settings at any time.
How do I change the links in the site navigation bar?
To change the links in the Top link bar, click EDIT LINKS to the right of the menu. 2. To change links in the Left-hand menu (also known as Quick launch bar), click EDIT LINKS beneath the menu. Note: If you don’t see EDIT LINKS, you might not have permissions to customize the site navigation.
How do I change the navigation in SharePoint 2010?
On the Site Actions menu, click Site Settings. In the Look and Feel column, click Navigation. Note: the Navigation command appears under Look and Feel only if the publishing features are enabled for your site and you have at least the permissions obtained by being added to the default Designers SharePoint group for the site.
I keep welling up into massive blog entries on this topic. Then I write them. Then I delete them. What I have to say on the topic at the moment is so mired in the organizational dysfunctions of a big client that it's way easier to vent than sift. I'm hating it, because for now I'm at a loss for the long stretch of reflection I need to sort learning points from the environment I have to work in. I'm thinking it makes more sense to start the conversation and balance ready-to-rant me with others who'd like to chat about this stuff.
Those of you that follow the Creating Passionate Users blog see the manifesto behind the Head First series and in general the mission to get people thinking, feeling, relating and therefore learning. That's the game and goal right there.
In the writing job I currently have, there's a disconnect between the 'content development' group (who create/control published course products) -- and the 'content delivery' group (who connect customers, times, places, instructors, and products to create 'events'). Some of the disconnect is territoriality, as it is in many places. But what's more surprising to me is how two sub-organizations who claim to focus on the customer (and indeed go through their idea of the proper motions for that) seem to wear their irreconcilable differences on their sleeves.
The process for producing an instructional book, guide, online tutorial, whatever, is different from getting people to apply that information in a classroom setting. We can help people absorb some number of learning points in five days, yet it requires months of effort to produce the materials that guide the session.
Anyway, I'm expanding when I should be focusing. Here's a question for kicking around the Lounge: how intensely do you teachers scrutinize the materials you're given to teach from? How do you use them? Are your course materials the Bible or a sometimes-useful guideline? Who is the course developer's customer: you or the student? For the sake of contrasts in discussion, assume the course developer will only focus on teacher or student. Let's leave out the "I'll be teaching myself, so, ha!" thing for the moment.
Make visible what, without you, might perhaps never have been seen. - Robert Bresson
When I used to teach, the materials I referred to were more of a 'guideline' thing. I could never stick to one 'material', so to speak, because what I always found was that each material tended to focus on one primary area, leaving others untouched. Like you so rightly said: it is of paramount importance to decide who the end customer is, the teacher or the student. What I have noticed is that tutorials aimed at 'beginners (or students)' normally cover the basics, but not the 'concepts'. For example: a beginner inheritance tutorial would describe what inheritance is (more likely than not, giving the famous 'car' or 'box' example) but would not talk about how it is actually implemented or the practical advantages and pitfalls of using it, whereas an 'advanced' tutorial on the same topic would assume I know all about the 'basics' of inheritance and would delve into more 'complex' issues. So what I have had to do over the years is to refer to all kinds of tutorials before broaching any topic and take students from the 'beginner' ones to the 'advanced' ones.
Again, I am not blaming the content developers or the tutorial authors, because, well, a beginners tutorial is meant for beginners!!! And covering each and every aspect of even one topic would normally require volumes!!!
Also, I am not saying that I do the best job of either finding the best tutorials or getting the 'mix' of beginner and advanced right, but I base my stuff on what I see being implemented in the industry and bridge the gap between the book and actual 'production' code.
I hope this helps, and many, many thanks to the authors whose tutorials/books I have referred to over the years... not only have I learnt and benefited from them, but I am sure many, many others have too.
In school, I always found classes that closely followed one book to be boring. It seems to me that an important part of motivating people to learn is trying to teach them the things they are interested in. For that you probably need a lot of flexibility - closely following one single source probably won't cut it.
Just my 0.02
The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
Neeraj, you might like this blog posting, which says in part what you are saying. I'd have more to say in response, but this article overlaps my thoughts well enough to dissuade me from further comment.
Ilja, you've hit on a point I'd like to develop and say more about. The biggest difference between a book (be it reference or learner) and a course guide, in my view, is the assumption that one does a better job of contributing to the classroom experience, if only because one of them tries to, and the other doesn't.
One might think that some authors with classroom training in mind would factor this in to course guides. I don't see a lot of that, and in my time I've taught from 50+ different course guides, not even counting major revisions among them.
I'm not talking just about dry, tech-transfer type course guides either. It seems to me a good course guide does two things at once. First, it presents a loose but coherent narrative that allows the student to clearly see a sustained build of ideas, a natural flow of topics. From that, a sense of well-being that there's a plan (and a good one) follows, and so the student has every reason to stay engaged.
Second, the guide persuades the reviewing instructor that the plan is good enough to help them do what they do as teachers. That goal is a damn tall order, I'm here to tell ya. Part of my current kicking and screaming stems from trying to fix and update a spaghetti mess of an existing course guide. Given the time I have, there's no way I'll make it into something very good. Better, sure, but not worth the price they charge for it. There isn't enough calendar time to do the work, and I can't work 50 hours every week just to try and cover the difference. (Not on this kind of material anyway...)
And on particularly specialized topics, I suspect instructors want the course guide to stand as completely behind their experience and desired presentation style as possible. So you'll rarely if ever get unanimous acclaim that a course guide is perfect, but done properly you could perhaps satisfy a large number of them.
What are some qualities you expect to see in a book or course guide that makes you think you're in good shape for the duration of the class?
Use-Site variance in Kotlin
open class A
class B: A()
fun <T> copy(src: MutableList<T>, dst: MutableList<T>) {
for (i in 0 until src.size) {
dst.add(i, src[i])
}
}
For the above-mentioned code, I understand that the copy function expects both type parameters to be of exactly the same type. With a slight modification, copy(src: MutableList<T>, dst: MutableList<in T>) (notice the in keyword), I am saying that src must be of exactly type T but the destination can be of type T or any supertype of T.
For the above modified code, I am able to call the method as following,
fun main(args: Array<String>) {
val l1 = mutableListOf(B(), B())
val l2 = mutableListOf<A>()
copy(l1, l2)
} // main
The above copy(l1, l2) does not work if I remove in from the destination (understood).
My question is, I am able to call the function without any error if update the function parameter src to accept out projection of the list. e.g.
fun <T> copy(src: MutableList<out /*notice out here*/ T>, dst: MutableList<T>) {
for (i in 0 until src.size) {
dst.add(i, src[i])
}
}
In this case, I am not able to understand what goes on under the hood. Can anyone explain, please?
Note that this is just an example from the book. I know I could use the read-only List instead of MutableList for src.
out here works symmetrically to in:
in keyword, I am saying that src must be of exactly type T but destination can be of type T or any super type of T
So now you are saying that src must be a MutableList of type T or any subtype of T, while dst must be a MutableList of exactly type T.
So when you have l1: MutableList<B> and l2: MutableList<A>, the compiler infers the type parameter in copy(l1, l2) as copy<A>(l1, l2), and it typechecks: MutableList<B> is a subtype of MutableList<out A>.
Because you are only using out-compatible operations on src, and only in-compatible operations on dst, as @s1m0nw1 says it makes perfect sense to include both modifiers.
Since you're using the function in only one way, you should use the use-site variance modifiers anyway, to make it clear to the caller that you may add to dst and get data from src:
fun <T> copy(src: MutableList<out T>, dst: MutableList<in T>) {
for (i in 0 until src.size) {
dst.add(i, src[i])
}
}
Further, since src is really used as a List rather than a MutableList, you should declare it accordingly. As a result, you won't need the out modifier anymore, since List already defines its type parameter T as out-only:
fun <T> copy(src: List<T>, dst: MutableList<in T>)
In response to your question: The problem actually happens when you call copy with two differently typed lists in your main, once with MutableList<A> and once with MutableList<B>. The compiler cannot infer whether the type of copy shall be A or B. To fix this, you need to give more information:
1) When you set dst to MutableList<in T>, the compiler knows that you will only add T types based on src to it (in your example this is B).
2) When you set src to MutableList<out T>, the compiler knows that it will only read values of type T (or its subtypes) from src, which is fine as well (in this case T will be inferred as A, though).
Agree with everything in @s1m0nw1's answer above. Something that you might find useful is reading the official documentation page about generics, and noticing the distinction between producers and consumers. Basically, something marked out is produced (hence read-only lists use out, as they only produce values, never consume them); something marked in is consumed.
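For readers coming from Java: Kotlin's use-site out/in projections correspond directly to Java's bounded wildcards, and the JDK's own java.util.Collections.copy(List<? super T> dest, List<? extends T> src) is declared with exactly this shape. Here is a minimal, self-contained Java sketch of the same copy function (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class VarianceDemo {
    // Kotlin "MutableList<out T>" maps to Java "List<? extends T>",
    // Kotlin "MutableList<in T>" maps to Java "List<? super T>".
    static <T> void copy(List<? extends T> src, List<? super T> dst) {
        for (T item : src) {
            dst.add(item);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3); // plays the role of MutableList<B>
        List<Number> nums = new ArrayList<>();       // plays the role of MutableList<A>
        copy(ints, nums); // type-checks: the compiler may infer T as Integer or as Number
        System.out.println(nums); // prints [1, 2, 3]
    }
}
```

With both wildcards in place, the call site gains the same flexibility the answers describe for the Kotlin version: the compiler is free to pick any T between the element types of the two lists.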
Thanks @s1m0nw1. The point that "in this case T will be inferred as A" made it click; got it now.
DAppNode is a plug n’ play full-node solution focusing on increased privacy and user control. Currently supporting Ethereum, Bitcoin, Monero, and many other blockchains.
Panvala, a DAO for funding Ethereum, released Batch #5 of its token grants. DAppNode received a 200,000 PAN grant to fund its development of a hardware layer of security for ETH 2.0 validators using SGX.
DAppNode earned over $2984 from 185 contributors and $8583 from CLR matching in the Gitcoin Grants round 4. For more news and product updates, subscribe to DAppNode’s monthly newsletter DAppNodeNOW!
QuikNode provides users with a dedicated node and environment, unlike shared infrastructure and API gateways.
Yesterday, the QuikNode team introduced QuikNode v2.0! v2 offers users both API service and Dedicated Node options for accessing Ethereum full and archive sync data with either JSON-RPC or WebSocket endpoints.
What else is new?
- Users can create a free account and obtain a complimentary API endpoint to either the Ethereum Mainnet, Ropsten, Kovan, or Rinkeby networks.
- Users can manage multiple nodes under a single account, for API service and Dedicated node options.
- Both API and Dedicated Nodes are powered by QuikNode Boost technology, which accelerates Ethereum node requests via an intelligent caching layer.
- Users can configure a Dedicated Node for Ethereum, Ethereum Classic, or Bitcoin to meet their needs and launch it in 8 globally-distributed locations.
- V2 provides request analytics that include call breakdown, response status, and requests over time, with an average response time metric coming soon.
Union is a stake delegation marketplace of validators supporting Tezos, Cosmos, Livepeer, and IRIS.
Announced earlier this month, Union v1 is the first stake delegation marketplace that connects stakeholders with validators. This not only allows users to potentially earn more staking rewards by comparing validators but also increases the network decentralization of various PoS blockchains by letting validators of different sizes participate in the marketplace. You could even argue that Proof-of-Stake is a digital labor market, with validators as the laborers.
Union currently supports Tezos, Cosmos, Livepeer, and Irisnet, with Ethereum 2.0 and other PoS networks coming soon.
Pocket is a trustless API layer for blockchain applications, designed to coordinate API requests across a decentralized network of full-node operators.
In February, Pocket Testnet P1 launched with support for Ethereum, Tezos, Open Application Network, POA, xDAI, and more. Pocket Testnet P1 features the Pocket Core Daemon, Pocket CLI, Pocket-JS, Pocket Web3 Provider, Pocket Node Deployments, and a faucet for Pocket Test Tokens.
Pocket's incentivized testnet is planned to launch soon, too!
According to Pocket's changelog, the engineering team deployed a new release candidate, v0.1.0, of the testnet, as well as updated both the Pocket JS library and the Homebrew deployment.
If you are interested in the monetary policy that influences how much a node operator can earn by running nodes or how much it costs a developer to relay API requests to and from their application, read the Pocket Economic Paper.
Namestack provides the easiest way to buy ENS domains with a one-click registration and multiple payment options.
Namestack shared a new feature release. As of mid-February, users can now configure their ENS domain and sell subdomains with a single click. Beforehand, it took many more steps and involved playing with code.
An ENS domain name owner will be able to sell subdomains by handing over control of the domain to a smart contract called the Subdomain Contract. This happens when a user configures their ENS domain to sell subdomains. Just set the price, and anytime a subdomain is sold that amount directly deposits into your wallet. It’s important to note that once you hand over control of an ENS domain to the Subdomain Contract, you’ll never get it back.
With this small feature update, it’s also possible to delist an ENS domain previously configured and relist it using a different price.
Would you purchase a disdat.eth subdomain? NameStack not only enables purchase via ETH but also through credit card, apple pay, or google wallet.
A CLI tool to create Ethereum-powered React apps with one command.
Paul Berg announced the release of Create Eth App v1.1.0. It was inspired by the existing related projects Create React App and Create Next App. Create Eth App was first introduced around the time the ETHDenver hackathon was happening.
Create ETH App comes with Yarn Workspaces, everything included in Create React App, Graph protocol’s Subgraph templates, and minimal structure for managing the smart contract ABIs and addresses. This makes for a fantastic tool for those learning how to write Ethereum-powered apps, starting new Ethereum-powered React applications, and building examples.
This version contains four DeFi templates with pre-filled contract ABIs, addresses, and subgraphs:
- create-eth-app --template aave
- create-eth-app --template compound
- create-eth-app --template sablier
- create-eth-app --template uniswap
Fortmatic is a simple authentication mechanism that lets users access Ethereum DApps from anywhere with just a phone number or email, no more browser extensions nor seed phrases.
A few weeks ago at ETHDenver, Fortmatic launched its new Whitelabel SDK as part of their paid plan. Instead of an email/phone and password authentication process, the Whitelabel SDK adopts a passwordless experience through a “Magic Link” that is sent to a users’ email and users are logged in after clicking the link.
Watch the magic happen:
ICON became the first major partner and client of the Whitelabel SDK! This signifies the start of their multi-blockchain initiative. Who's next?
Argent is an easy-to-use, mobile, smart wallet for Ethereum apps and tokens.
Co-founder of Argent Itamar Lesuisse announced their $12 million Series A round. It is led by Paradigm and joined by Robert Leshner (founder of Compound and general partner of Robot Ventures), Index Ventures, Creandum, and FirstMinute. This will enable Argent to become the easiest and safest way to access all of DeFi, and it is well on its way!
Building on Wallet Connect, Argent introduced a new method for making DApps faster, cheaper, safer, and easier to access from mobile wallets. They added a new approveAndCall() method to the wallet to orchestrate the approval of the exact amount of tokens required and then execute the contract call. This simplifies the user experience by bundling a bunch of transactions into one.
And they’ve open-sourced it!
Status is an open-source, secure, private messenger, crypto wallet, and Web3 browser.
Earlier in February, the Status Network officially announced the release of Status v1.0.
It launched with new SNT utility, such as a sticker marketplace and a decentralized DApp directory. Other features like the fiat-to-crypto Teller Network and incentivized messaging with Tribute to Talk are under development. The v1 changelog reflects they’ve added support for multiple Ethereum wallets within one Status account, improvements to wallet transaction flow UI, and more.
According to the Status Network Town Hall #52, Status shipped v1.0.2 HotFix, which patched a security bug.
Available on the iOS App Store (and soon Google Play), Rainbow is the pocket robot for your Ethereum-based internet money.
Built with the Uniswap Exchange subgraph, the Rainbow Wallet now lets users search for any token, including social money tokens, like the ones minted by Roll.
Acquired by MyCrypto, Ambo is a mobile wallet product focused on the interests of retail consumers buying into speculative assets.
Ambo released version 1.23.
The latest update added new features, like comparing prices across exchanges (Radar Relay, Bancor, Kyber Network, and Uniswap Exchange), a simpler user experience for depositing funds with Wyre, transaction receipts, and more.
Dapper is a smart contract wallet, available as a chrome extension, and on iOS and Android devices.
With Dapper, users can now take photos with their crypto-collectibles, like Cryptokitties, through using the Dapper Lens feature.
Althea is a system that lets routers pay each other for bandwidth, which allows people to set up decentralized Internet Service Providers (ISPs) in their communities.
According to Althea Development Update #82, with the Beta 11 release users can now use the Wyre widget to buy and send funds directly to their router.
In Althea's March Community Update, they announced that a new Altheahood sprouted up in T'adi, Ghana. Althea revealed that subscribers and organizers can now buy equipment with a six-month payment plan at buy.althea.net, including a new mini-mesh package for those looking to set up a small network. They've also shared that over the next couple of months they are working on building a better-developed network organizer dashboard and tools to help operators manage their networks.
Mysterium is an open-source, decentralized, virtual private network allowing anyone to rent their network traffic, providing a secure VPN connection.
The other week, Mysterium Network released version 0.22 of the Mysterium node. This means nodes now only accept paid traffic, with the price calculated from traffic and session time, and any node runner can earn a bounty by being a top-3 performing node/region per month.
Thanks for reading!
💸💸💸 Claim $DISDAT Tokens 💸💸💸
Follow DIS Weekly on Twitter at @DISWeekly for more!
Our planet is home to some magnificent snakes of varying appearances, behaviors, and sizes. Although many people have ophidiophobia (a fear of snakes), snakes are essential to maintaining the equilibrium of Earth’s ecosystems. Encountering snakes anywhere in the world is nothing unusual, particularly in Africa, Asia, or the Americas, with lengths ranging from a few inches to as long as a school bus.
You may be familiar with the most well-known families, such as the pythons and boas, which include the largest snakes. Measuring them by weight or length determines which family has the most enormous reptiles. However, we bet you've never heard of night snakes, which brings us to an incredibly rare question: what is the largest night snake ever recorded?
Background On Night Snakes
The scientific name for the night snake is Hypsiglena torquata; torquata is the Latin word for collar or neck chain. It describes the two broad, dark-brown spots at the base of the snake's head, which give the snake the appearance of wearing a collar. Night snakes are primarily nocturnal (active at night), as their common name suggests.
Night snakes are members of the extremely vast Colubridae family. Within that family, the Hypsiglena genus contains at least 17 different varieties of night snakes, and among these 17 types of night snakes, the spotted night snake is the largest. This article will explore the size of the largest night snake, its habitat, and other fascinating facts.
What Do Night Snakes Look Like?
Often, night snakes are mistaken for baby rattlesnakes, which can be a concern. While the former is not lethal to humans, the latter is exceedingly venomous. The night snake has a thin snout that gradually becomes wider until it reaches the base of the snake’s head, which shapes it into a triangle.
Additionally, the night snake has a white belly, and the pupils of its eyes are elliptical. This snake has light brown or dull gray scales, and on its back, it has a pattern of dark brown spots. It has a dark stripe connecting to each eye’s edge and two sizable dark brown splotches at the base of its head.
What Is The Largest Night Snake Ever Recorded?
The night snake generally has a thick body and measures 12 to 26 inches in length. And although no particular night snake has been recorded as the largest of them all, the only subspecies that can grow to a maximum size of 26 inches is the spotted night snake. The other subspecies, such as the Texas night snake, California night snake, and San Diego night snake, are smaller and can only reach a maximum length of 16 inches.
Night snakes are considered small, commonly no greater than 26 inches, a size generally too small to allow for comfortable handling. While there is no in-depth description of the largest night snake, there have been no reports of this species growing longer than that maximum length.
Where Do Night Snakes Live?
The western and southern regions of the United States are home to night snakes, and their distribution extends from the states of Washington to Idaho and through California to Utah, continuing southward to Texas. Another location where they thrive is in northern Mexico. The last one is Canada’s British Columbia, where night snakes can also be found and are considered the province’s smallest snakes. However, little is known about population numbers and exact ranges due to the night snake’s extremely cryptic nature.
The night snake thrives in various habitats, such as grasslands, deserts, sagebrush flats, chaparral, forests, thorn scrub, and mountain meadows. Night snakes live in rocky and sandy regions and have been seen as high as 8,500 feet (2,600 meters). The night snake may live in either a tropical or temperate climate and is known to live in mammal burrows.
What Do Night Snakes Eat?
The primary food source for night snakes is lizards; other prey includes salamanders, frogs, baby rattlesnakes, blind snakes, and big insects. The night snake's mildly venomous saliva aids in the capture of small prey, such as amphibians and reptiles. They forage from mid-April to mid-September, emerging at dusk to roam around their habitats in search of prey. The night snake uses its sense of smell to find small animals after the sun sets, and it swallows its prey after it dies.
How Do Night Snakes Behave?
Night snakes are both crepuscular (more active at dawn and dusk) and nocturnal. In addition to being seen at night while crossing highways, they can also be discovered during the daytime behind rocks, boards, dead branches, and other surface debris. Night snakes hibernate in the winter and are known to aestivate in the summer. They are typically most active from April through October, with June being the typical month of activity spikes. If a night snake feels threatened, it may coil up and thrust its body toward the threat while flattening its head into a defensive triangle shape.
However, night snakes are easily frightened. They have drawn interest from some reptile enthusiasts as pets due to their calm disposition and small size. Keep in mind that pet snakes require special maintenance. For them to be healthy, their habitat must be temperature-controlled, have the perfect amount of humidity, and serve as a source of proper nutrition.
Are Night Snakes Dangerous?
Although they have a small amount of venom, night snakes are not dangerous to humans. Only their prey, including lizards, frogs, and small snakes, is at risk from the venom that oozes from their rear fangs. In other words, the venom from a night snake bite is not likely to cause any harm to a human. Even so, a snake bite requires attention and care.
The first step is to wash the bite wound with warm water and soap. After that, dab first-aid cream on the wound and wrap a bandage around it. Keep an eye out for excessive redness or inflammation since these symptoms may lead to an infection. Visit a doctor for additional care if either of these symptoms intensifies.
Other Record-Breaking Snakes
The eastern indigo snake (Drymarchon couperi) is a large, nonvenomous species of colubrid snake native to the southeastern United States. It is the longest native snake species in the U.S., reaching up to 8 feet in length and weighing up to 13 pounds. Eastern indigo snakes are typically black or dark blue, with pale yellow scales along their sides that can create interesting patterns, though some may have more vibrant colors such as orange or red mixed into their patterning.
The largest eastern indigo snake ever recorded was found near Ocala National Forest in Florida in 2019 by scientists from the University of North Florida's Department of Biology. It measured an impressive 8 feet 3 inches long and weighed 14 pounds 4 ounces, making it both longer and heavier than any other known individual! This record-breaking discovery demonstrates just how large these majestic creatures can grow when given enough space. Eastern indigos are currently listed as threatened under U.S. law due to habitat loss, so efforts must be made to protect their populations and ensure they have adequate habitat in which to thrive.
Same CSRF token for multi-tab browsing
I have a little problem regarding my CSRF token function (it changes the token on every request). Here is the scenario of my problem:
When I have 2 pages open (both with the same CSRF token) and I submit the form on the first page, the form on the second page stops working (because the CSRF token's value has changed). From the user's point of view this leaves a bad taste, so I need to change it.
My question is: how can I make my CSRF token function work with multi-tab browsing, giving users a better experience without weakening security?
Possible duplicate of Why refresh CSRF token per form request?
I would recommend just using one CSRF token per user instead. I have so far not heard any argument against that that I find compelling.
@SilverlightFox I read that, and the conclusion I drew after reading it is to remove all the tokens in my forms except in the login and signup forms? I cannot comment on that thread, which is why I made a post here.
If that is the case I would suggest you link to that question in your post and explain why that answer doesn't fit with your situation. Click [edit] to do this. The accepted answer was explaining to issue new tokens at the point of login, but not to remove them elsewhere.
Use session CSRF tokens
@SilverlightFox okay, then my question is not answered then, because, my issue is not like that.
Use a single CSRF token per session (rather than per request or per user). For example, the CSRF token can be the plain text session ID, or an encrypted or securely hashed (e.g. HMAC) version of the session key. Or you could store the CSRF token as a session variable that is associated with the session key. These are all common options.
If you're wondering if this makes your site any less secure, read this article for reassurance.
If you happen to be using the ASP.NET framework, the way to do this is simply set the Page.ViewStateUserKey to the session ID.
What if an attacker gets the CSRF token? What should I do?
A typical CSRF mitigation assumes that the attacker cannot obtain the CSRF token. If you are concerned about how to guard the token, please start another question.
Don't use the session ID as the CSRF token, ever. It's a lot more likely that a CSRF token will leak than that a session ID (properly protected) will leak, and increasing the ways a session token could be leaked is a very bad idea. HMACing the session token with a server-side secret is pretty safe (using a completely random CSRF token is better but requires storing additional per-session state).
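Following the advice above — HMAC the session token with a server-side secret rather than exposing the session ID itself — here is a minimal sketch in Python (the thread's examples lean toward PHP, but the idea is language-agnostic; the secret value and function names are illustrative, not from any particular framework):

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice load it from config, never hardcode it.
SERVER_SECRET = b"replace-with-a-long-random-secret"

def csrf_token_for(session_id: str) -> str:
    """Derive a stable per-session CSRF token by HMACing the session ID.

    Because the token is a pure function of the session, every open tab in the
    same session gets the same token, so multi-tab browsing keeps working."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid_token(session_id: str, submitted_token: str) -> bool:
    # compare_digest does a constant-time comparison, avoiding timing leaks.
    return hmac.compare_digest(csrf_token_for(session_id), submitted_token)
```

Note that even if the token leaks, the session ID itself cannot be recovered from it without the server-side secret.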
We need more details. Are you using a framework?
Furthermore, if you are creating a token for each request, how come two tabs have the same one? It kinda contradicts what you said.
Or maybe I misunderstood and you mean that you have only ONE csrf token "active" at a time. In that case the fix would be easy, allow for multiple CSRF tokens to be valid at the same time (by holding a list of sent ones server side). That way two different tabs have two different tokens, both of which are valid.
Hello, thanks for your response, how can I make a list? What do I need?
Again, this would depend on the technology/framework you are using. Since you are faced with this issue I am guessing you are not using a webapp framework like Django or Rails, as they would take care of this issue. In PHP I would hold the pending tokens in a table together with an expiration date. Every time you get a POST request, you check in that table; if you find the token and it is not expired, you process the request. Otherwise you reject it. I would also make sure that expired tokens get purged once in a while. But I would suggest you use an existing framework!
This is mostly a request for more information and not an answer. Please put this in a comment.
I was torn, but as the "answer part" was longer than the request for more information I opted for an answer.
then split this up as comments and an answer
maybe I can use the password_hash() function? Then I would set a secret value, so that even if the token is different, password_verify() will still return true. Is that okay?
@googol8080 what would you hash? How often would you change the hashed value? To be honest I think the standard way is better here. Generate a random token, save it serverside, send it to the client. When a post request arrives, check that there is a token and that this token is present in your list of active ones
But to make a list, I need to make a table in my database, fetch it every time users make a $_POST request, and check it, right?
@googol8080 yes
Then it will add more time before the <form> submission is processed, right?
@googol8080 it should be negligible. It would really surprise me if you were to notice any measurable delays
so, every time a page with a form is visited or refreshed, I will insert a token value in the database (this is the token-per-user method), then when the <form> is submitted, it will check the token and if it is verified, the token value will be deleted right after that. What do you think?
@googol8080 I'd say it is a token-per-page, but yes, that sounds like the general workflow. I would give the tokens an expiration date tho, so that you can regularly clean that table by removing all expired tokens
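The per-page workflow agreed on above can be sketched in Python (the class and method names are illustrative; a real deployment would back this with a database table, as suggested in the thread, rather than an in-memory dict):

```python
import secrets
import time

class CsrfTokenStore:
    """Server-side store of pending per-page CSRF tokens with expiration.

    Several tokens are valid at once (one per open tab); each is single-use
    and expired tokens can be purged periodically."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._tokens = {}  # token -> expiry timestamp

    def issue(self):
        """Generate a fresh token for a newly rendered form."""
        token = secrets.token_urlsafe(32)
        self._tokens[token] = time.time() + self.ttl
        return token

    def consume(self, token):
        """Validate and invalidate a token in one step (single use)."""
        expiry = self._tokens.pop(token, None)
        return expiry is not None and expiry >= time.time()

    def purge_expired(self):
        """Drop all tokens past their expiration date."""
        now = time.time()
        self._tokens = {t: e for t, e in self._tokens.items() if e >= now}
```

The key property is that two tabs hold two different tokens, both valid at the same time, so submitting one form does not break the other.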
|
STACK_EXCHANGE
|
#pragma once
#include <array>
#include <vector>
#include <memory>
#include <glm/vec4.hpp>
#include "Modules/Math/Rect.h"
#include "Utility/Bitmask.h"
#include "GLTexture.h"
enum class FramebufferCopyFilter {
    Nearest, Linear
};

class GLGraphicsContext;

class GLFramebuffer {
public:
    GLFramebuffer(int width, int height, size_t colorComponentsCount,
                  GLTextureInternalFormat colorComponentsFormat,
                  bool createDepthStencilComponent);
    GLFramebuffer(int width, int height, const std::vector<std::shared_ptr<GLTexture>>& colorComponents,
                  std::shared_ptr<GLTexture> depthStencilComponent);
    GLFramebuffer(const GLFramebuffer& framebuffer) = delete;
    ~GLFramebuffer();

    [[nodiscard]] RectI getBounds() const;
    [[nodiscard]] float getAspectRatio() const;
    [[nodiscard]] int getWidth() const;
    [[nodiscard]] int getHeight() const;

    void clearColor(const glm::vec4& color, size_t componentIndex = 0);
    void clearDepthStencil(float depthValue, int stencilValue);

    void copyColor(GLFramebuffer& target, size_t sourceComponentIndex = 0, size_t targetComponentIndex = 0);
    void copyColor(GLFramebuffer& target, const RectI& sourceRect, const RectI& targetRect,
                   FramebufferCopyFilter filter, size_t sourceComponentIndex = 0, size_t targetComponentIndex = 0);
    void copyDepthStencil(GLFramebuffer& target);
    void copyDepthStencil(GLFramebuffer& target, const RectI& sourceRect, const RectI& targetRect,
                          FramebufferCopyFilter filter);

    [[nodiscard]] GLuint getGLHandle() const;
    [[nodiscard]] std::shared_ptr<GLTexture> getDepthComponent() const;
    [[nodiscard]] std::shared_ptr<GLTexture> getColorComponent(size_t index) const;

private:
    /*!
     * \brief GLFramebuffer Handles the GL default framebuffer
     * \param width
     * \param height
     */
    GLFramebuffer(int width, int height);

    void copyTo(GLFramebuffer& target, const RectI& sourceRect, const RectI& targetRect,
                GLbitfield copyMask, FramebufferCopyFilter filter,
                GLenum sourceAttachment, GLenum targetAttachment);
    void performInternalInitialization(const std::vector<std::shared_ptr<GLTexture>>& colorAttachments,
                                       std::shared_ptr<GLTexture> depthAttachment);
    void enableWritingToAllBuffers();

private:
    GLuint m_framebuffer;
    int m_width;
    int m_height;
    size_t m_colorComponentsCount;
    std::array<std::shared_ptr<GLTexture>, 4> m_colorComponents;
    std::shared_ptr<GLTexture> m_depthComponent;

private:
    friend class GLGraphicsContext;
};
|
STACK_EDU
|
I have created a hotspot following a previous post. In this post I explain how I avoided the errors and successfully created the hotspot on Kali Linux.
- If you have a fresh installation of Kali Linux, first ensure the internet connection. If you are connected to Wi-Fi and still having no internet, then use the following command.
sudo dhclient eth0 or sudo dhclient wlan0
- Next, install hostapd (the hotspot server) and dnsmasq (a DNS/DHCP server).
sudo apt-get install hostapd dnsmasq
- Prevent the installed services from starting at boot.
sudo service hostapd stop
sudo service dnsmasq stop
sudo update-rc.d hostapd disable
sudo update-rc.d dnsmasq disable
- Setup the configuration file of dnsmasq.
# Bind to only one interface
bind-interfaces
interface=wlan0
dhcp-range=192.168.150.2,192.168.150.10
- Setup the configuration file of hostapd.
interface=wlan0
driver=nl80211
ssid=myhotspot
# Set access point hardware mode to 802.11n
hw_mode=g
ieee80211n=1
channel=6
- Create hotspot.sh.
#!/bin/bash
# Start
sudo ifconfig wlan0 192.168.150.1
sudo service dnsmasq restart
sudo sysctl net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo hostapd /etc/hostapd.conf
# Cleanup (these run after hostapd exits)
sudo iptables -D POSTROUTING -t nat -o eth0 -j MASQUERADE
sudo sysctl net.ipv4.ip_forward=0
sudo service dnsmasq stop
sudo service hostapd stop
- Now, execute the shell script
sh <path to hotspot.sh>
However, hostapd failed with the following error:
nl80211: Could not configure driver mode
nl80211: deinit ifname=wlan0 disabled_11b_rates=0...
This happens when another process (such as wpa_supplicant) is still managing the wireless interface. Kill the interfering processes and run the script again:
airmon-ng check
killall wpa_supplicant
sh <path to hotspot.sh>
A device is connected to the hotspot!!
Cheers !! 🙂
Thanks, this is very helpful. I followed all this and was able to connect to the hotspot from another PC but the internet connection was not shared through the hotspot. Internet is working on Kali PC on eth0.
Turned out to be a problem on Windows PC connecting to hotspot – it had a fixed IP, worked fine when I changed it to DHCP.
can't create hotspot on parrot
New to linux 😦 However, I followed the instructions, as well as the referenced earlier guide. I get no errors when I enter ‘# ./hotspot.sh’, but I get no messages at all. I also get no hot spot. Any suggestions? Thank you.
|
OPCFW_CODE
|
How do you extract an integer from a list in Python?
Use a list comprehension for a more compact implementation.
- a_string = "0abc 1 def 23"
- numbers = [int(word) for word in a_string.split() if word.isdigit()]
How do I convert a list into string to int?
To convert a string to an integer in Python, use the int() function. This function takes two parameters: the initial string and an optional base in which the data is represented. Use int("STR") to return the str as an int, or integer.
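A quick illustration of the optional base parameter mentioned above:

```python
# int() accepts an optional base (2-36); base 0 infers it from the prefix.
print(int("42"))      # 42  (decimal, the default)
print(int("ff", 16))  # 255 (hexadecimal)
print(int("101", 2))  # 5   (binary)
print(int("0x1A", 0)) # 26  (base 0: inferred from the "0x" prefix)
```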
How do I convert a list to numbers?
Use int() to convert a list of integers into a single integer
- integers = [1, 2, 3]
- strings = [str(integer) for integer in integers]
- a_string = "".join(strings)
- an_integer = int(a_string)
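The steps above, condensed into one runnable snippet:

```python
integers = [1, 2, 3]
# Stringify each integer, concatenate, then parse the result back as one int.
an_integer = int("".join(str(n) for n in integers))
print(an_integer)  # 123
```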
How do I extract numbers from a list?
“how to extract numbers from a list in python” Code Answer
- a = ['1 2 3', '4 5 6', 'invalid']
- numbers = []
- for item in a:
-     for subitem in item.split():
-         if subitem.isdigit():
-             numbers.append(subitem)
Is a digit Python?
Python String isdigit() Method Python's isdigit() method returns True if all the characters in the string are digits. It returns False if any character in the string is not a digit, or if the string is empty.
How do I convert a string to a list in Python 3?
To convert a string to a list in Python, use the string split() method. The split() method splits the string and stores the pieces in a list. It returns a list of the words in the string, using the given delimiter as the separator.
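A short illustration of split() with and without an explicit delimiter:

```python
# With no argument, split() breaks on runs of whitespace;
# with an explicit delimiter it splits on every single occurrence.
print("a b  c".split())           # ['a', 'b', 'c']
print("red,green,blue".split(",")) # ['red', 'green', 'blue']
```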
How do you convert data to float in Python?
Converting Number Types
- Python’s method float() will convert integers to floats. To use this function, add an integer inside of the parentheses:
- In this case, 57 will be converted to 57.0 .
- You can also use this with a variable.
- By using the float() function, we can convert integers to floats.
Why is float used in Python?
The Python float() method converts a number stored in a string or integer into a floating point number, or a number with a decimal point. Python floats are useful for any function that requires precision, like scientific notation. Programming languages use various data types to store values.
How do you extract a list from a string in Python?
“python extract list from string” Code Answer
- import ast
- input = "[[1,2,3],['c',4,'r']]"
- output = ast.literal_eval(input)
- => [[1, 2, 3], ['c', 4, 'r']]
How do I convert a string into an integer in Python?
int() is the Python standard built-in function to convert a string into an integer value. You call it with a string containing a number as the argument, and it returns the number converted to an integer: print(int("1") + 1)
How do you convert an integer into a string?
Converting an integer to a string is a common practice when programming. Declare the integer variable: int myInteger = 1; Declare the string variable: String myString = ""; Convert the integer to a string: myString = Integer.toString(myInteger); Print the variable to the console: System.out.println(myString);
How do you float in Python?
An example is when you need more accuracy in performing a calculation than is provided by the integer data type. Use the float() function to convert a number to floating point format. Open your Python interpreter, type numberInt = 123.45678 followed by float(numberInt), and press "Enter."
What is a long integer in Python?
Python 2 supports four different numerical types − int (signed integers) − They are often called just integers or ints, and are positive or negative whole numbers with no decimal point. long (long integers) − Also called longs, they are integers of unlimited size, written like integers and followed by an uppercase or lowercase L. (In Python 3, int itself has unlimited size and the separate long type was removed.)
|
OPCFW_CODE
|
from astropy.time import Time
import numpy as np
import erfa
def test_julian_century():
from approximate_coords.time import delta_julian_century
assert delta_julian_century(Time('J2000.0', scale='tt')) == 0
assert delta_julian_century(Time('2100-01-01T12:00:00', scale='tt')) == 1
def test_era():
from approximate_coords.time import earth_rotation_angle
from astropy.coordinates.builtin_frames.utils import get_jd12
time = Time(['2010-01-01', '2020-01-01'])
assert np.allclose(earth_rotation_angle(time), erfa.era00(*get_jd12(time, scale='ut1')))
|
STACK_EDU
|
Is a black hole a 5 dimensional vortex?
We know that a black hole behaves like a whirlpool, a tornado, or any of the other rotating phenomena we experience on Earth. But the thing is, all these phenomena, except the black hole, are 2-dimensional (plus 1 dimension of time) rotating vortices: they move objects from one 2D plane to another instantly, but through a 3D medium. My question is this: since a black hole is a rotating sphere (3D, plus 1D of time), does it mean that its endpoint in the vortex is 5D?
Related: http://astronomy.stackexchange.com/questions/1451/spacetime-curvature-illustration-accuracy, http://astronomy.stackexchange.com/questions/7879/how-deep-and-shaped-is-the-depth-of-a-black-hole and links therein.
Not bad, but I have a hunch there lie deeper explanations for the simple relationship between 3D rotating vortices and black holes. Thanks though.
The short answer is no, with some caveats to the effect of sort of, depending on how loose an analogy you want to make.
Sound propagation in a fluid is limited by the speed of sound, which can be used to define a "sound cone" structure analogous to the causal light cone structure in spacetime. This is described by an acoustic metric, which can have an acoustic horizon where the speed of the fluid exceeds the speed of sound, and even an analogue of Hawking radiation.
A general acoustic metric for a perfect fluid has the form
$$g_{\mu\nu}=\alpha^2\begin{bmatrix}-(c^2-v^2)&-\vec{v}\\-\vec{v}&\mathbf{1}\end{bmatrix}\text{,}$$
where $\alpha$ is a conformal factor. The Schwarzschild black hole can be put in this form: the Gullstrand–Painlevé chart is not just spatially conformally flat, but exactly Euclidean at every instant of time. One can imagine that a Schwarzschild black hole is like a drain sucking space down to the singularity at the local escape velocity.
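Concretely, with the sign conventions of the acoustic metric above, the Gullstrand–Painlevé line element of Schwarzschild spacetime (a standard result, quoted here for illustration) is

$$ds^2 = -\left(c^2 - v^2\right)dt^2 - 2\,\vec{v}\cdot d\vec{x}\,dt + d\vec{x}^2\text{,}\qquad |\vec{v}| = \sqrt{\frac{2GM}{r}}\text{,}$$

with $\vec{v}$ pointing radially inward: space "flows" toward the singularity at the Newtonian escape velocity, and the horizon sits exactly where $|\vec{v}| = c$.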
The rotating Kerr black hole cannot be put in this form, although its equatorial slice can be. See Visser and Weinfurtner (2005) for details, as well as for a discussion of why interpreting the metric as corresponding to a physical fluid is problematic even in the Schwarzschild case, due to the conformal factor.
However, Hamilton and Lisle (2008) found that the Doran chart of Kerr spacetime can be interpreted as a six-dimensional "Lorentz river" characterized not only by a velocity but also a twist. This is really quite different from the acoustic fluid analogy, as the 'river' does not spiral inwards, but rather has an intrinsic twist that rotates infalling objects. Still, it is interesting in its own right.
Image by Andrew Hamilton. (Twist not shown.)
References:
Visser, M., Weinfurtner, S. "Vortex analogue for the equatorial geometry of the Kerr black hole", Class. Quant. Grav. 22:2493-2510 (2005) [arXiv:gr-qc/0409014]
Hamilton, A. J. S., Lisle, J. P., "The river model of black holes", Am. J. Phys. 76:519-532 (2008) [arXiv:gr-qc/0411060]
|
STACK_EXCHANGE
|
usage: binom [-s STRIKE] [-p] [-e] [-x] [--plot] [-v VOLATILITY] [-h]
Shows the value of an option using binomial option pricing. Can also show raw data and provide a graph with predicted underlying asset ending values. The binomial options model calculates how big an up step or down step in the next time period is likely to be, then creates a tree by doing this at each period. The end result is a tree of possible asset values at each "step"; for our calculations we use a day as our "step" time period. We then take all of the expected values at the expiration date and use them to build a tree of option values at each step, working backwards. The final result is the value of the option today.
The up step is calculated by taking e to the power of volatility times the square root of the time elapsed during the step. This is the percentage we expect the stock to increase on an upward movement. The down step is the inverse of the up step. The probability of the up step is calculated by taking e to the power of the risk-free rate minus the dividend yield, multiplied by the step's time interval; the down step is then subtracted from this quantity, and the result is divided by the up step minus the down step. The probability of a downward step is one minus the probability of an upward step.
Formulas:
up_step = e ^ (volatility * (delta_t ^ (1 / 2)))
down_step = 1 / up_step
prob_up = (e ^ ((risk_free - div_yield) * delta_t) - down_step) / (up_step - down_step)
prob_down = 1 - prob_up
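The formulas above translate directly into code; a minimal Python sketch (the function name and sample inputs are illustrative, not taken from the tool itself):

```python
import math

def binomial_step_params(volatility, risk_free, div_yield, delta_t):
    """Per-step up/down factors and risk-neutral probabilities,
    following the formulas quoted above."""
    up_step = math.exp(volatility * math.sqrt(delta_t))
    down_step = 1 / up_step
    prob_up = (math.exp((risk_free - div_yield) * delta_t) - down_step) / (up_step - down_step)
    return up_step, down_step, prob_up, 1 - prob_up

# Illustrative values (not from the session below): 30% annualized volatility,
# 2% risk-free rate, no dividend, one trading day per step.
u, d, pu, pd = binomial_step_params(0.30, 0.02, 0.0, 1 / 252)
```

By construction u * d == 1 and the two probabilities sum to 1, which makes good sanity checks for any implementation.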
optional arguments:
  -s STRIKE, --strike STRIKE
                        Strike price for option shown (default: 0)
  -p, --put             Value a put instead of a call (default: False)
  -e, --european        Value a European option instead of an American one (default: False)
  -x, --xlsx            Export an excel spreadsheet with binomial pricing data (default: False)
  --plot                Plot expected ending values (default: False)
  -v VOLATILITY, --volatility VOLATILITY
                        Underlying asset annualized volatility. (None indicates that the historical volatility is being used) (default: None)
  -h, --help            show this help message (default: False)
2022 Feb 16, 08:40 (✨) /stocks/options/ $ binom -s 3100 -e --plot
AMZN call at $3100.00 expiring on 2022-03-25 is worth $136.85
2022 Feb 16, 08:41 (✨) /stocks/options/ $ binom -s 3500 -p --plot
AMZN put at $3500.00 expiring on 2022-03-25 is worth $389.72
|
OPCFW_CODE
|
Working With Databases In JetBrains Rider
All of our IDEs are built on the same IntelliJ IDEA Community Edition core. This means that when we improve one IDE, our other IDEs usually also benefit from our improvements. Rider is no exception! I recently reached for DataGrip, our amazing database administration tool, only to remember that Rider has many of the same features.
Rider 2019.3.1 Hotfix Is Out!
We've just published the Rider 2019.3.1 hotfix, which includes the following fixes:
Rider 2019.3 Release Is Out!
Hello everyone, We published Rider 2019.3 just moments ago. Let's walk through the key improvements in this update, which the Rider team has focused on for the last four months. Let's highlight the most important features in this release:
Rider 2019.3 Early Access Program is Open!
Hello everyone, Are you looking for something new to try in Rider to help you become a more productive developer? Then we have something great for you! We’ve just started the Early Access Program for Rider 2019.3! In the first EAP build, you will find a highly requested feature in the debugger, lots of improvements in performance, initial support for MongoDB, and a timeline for GitHub Pu
Find your perfect database development style with Rider
Rider contains many features for styling code so that it’s easy to read, and therefore easy to understand and maintain that code. This post is part of a series around finding a coding style that fits you like a glove, and how Rider can help: Find your perfect coding style using Rider Find your perfect C# style using Rider Find your perfect database development style using Rider Find y
Rider 2019.2.1 is released!
Here is a hotfix for the latest and greatest Rider 2019.2. Rider 2019.2.1 got the following fixes:
Rider 2019.2 is released!
Hello everyone, We have good news for you today – Rider 2019.2 is released and ready for you to download! Here are the biggest and best things about Rider 2019.2:
Building Azure Functions, SQL database improvements and more – Azure Toolkit for Rider 2019.1
Over the past few weeks, we have been busy making a number of improvements to the Azure Toolkit for Rider release. Hard work pays off! Rider 2019.1 introduces support for Azure Functions (V2), with project and item templates for C# and F#, debugging and deployment right from within the IDE. We've also improved SQL Database functionality with support for adding a firewall rule, creating new data
Work with databases and the Azure Cloud Shell – Azure Toolkit for Rider 2018.3 EAP
Many .NET developers work with Microsoft Azure to develop and deploy their applications, which we support in the Azure Toolkit for Rider. We are happy to announce the new release of the Azure Toolkit for Rider 2018.3 EAP! With this release, it is possible to work with Azure SQL Database, and we have also added Cloud Shell support. And for those using web apps with different deployment slots, th
Entity Framework support in Rider 2018.1
A fresh build of Rider 2018.1 EAP just landed, adding Entity Framework support! Rider adds functionality to enable migrations, add a migration, get migrations, update the database and more! On Windows, Linux and macOS! Let's check this out, shall we?
Initializing Entity Framework and enabling migrations
After installing the EntityFramework NuGet package, we can initialize Entity Framework in our
Working with Data in Rider
In previous blog posts in this series, we looked at connecting to a Microsoft SQL Server and getting familiar with the features in Rider 2017.3 to work with SQL databases. In this final post in our series, we will look at how to query and work with the data in the tables in our database. This post is part of a series around working with databases and data in Rider: Configuring SQL Server 20
Working with Tables and Indexes in Rider
In previous blog posts in this series, we looked at connecting to a Microsoft SQL Server and getting familiar with the features in Rider to work with SQL databases. In this blog post, we will look at how to work with a database to create tables and indexes. This post is part of a series around working with databases and data in Rider: Configuring SQL Server 2017 for Rider Getting started
Getting started with database support in Rider
If you are new to using Rider, it’s worth mentioning that you get all the .NET features of ReSharper, along with the web development features of WebStorm, which provides the developer with great overall experience. What’s more, you also get all the data features of DataGrip as well, so you have a single IDE to work with for all facets of your project work! In this blog post, we will look at set
Configuring SQL Server 2017 for Rider
Rider is not only a great .NET IDE, it also is a wonderful tool for working with databases that are associated with .NET Core solutions. Many .NET Core developers use Microsoft SQL Server on their laptops; more specifically, the Developer edition of the database server tool. Let's start a series of blog posts about Rider's database tools (powered by DataGrip)! Before you can start using Rider w
Rider EAP update: Version control and database editor improvements
We already mentioned numerous times that Rider is built on top of ReSharper, analyzing our code in the background, and the IntelliJ platform, providing the front end and editor capabilities for our cross-platform IDE for .NET. Doing so lets us ship the best of both worlds: both products have been evolving over years, and Rider profits. So when IntelliJ IDEA 2016.3 was released, we merged all of
|
OPCFW_CODE
|
What is the difference between CI (Control Interval) and CA (Control Area)?
What is an alternate index and path?
What is a Base Cluster?
How to view the VSAM file?
By seeing a program, how do we find out that it is a VSAM program?
In a file (PS), we don't know how many records there are. The requirement is to divide the records half and half and insert them into 2 other files (PS).
In VSAM, at the creation of a cluster, what is the use of the RECSZ parameter?
How can you create a VSAM dataset? Can you write a JCL for it?
write a program that withdrawals,deposits,balance check,shows mini statement. (using functions,pointers and arrays)
What is "predefined characteristics"?
speak a minute about where you reside
how to get input credit in vat & service tax
how many times we can deposit the old currency in one a/c in the bank
What are the fundamental ways to estimate visual motion in 1-D?
I want to design a heating coil of 80/20 nicrome for 3.5KW please give me the calculations choosing for physical dimensions assume coil voltage will be 110V AC assume necessary conditions
why is copper a bad conductor at low temperature?
Calculate sum of salaries department wise. Then the sum will be repeat for all columns in each department. Develop a mapping for this.
What goals do you have in your career?
how lookup transformation is made active in new versions... When to use connected and when to use unconnected lookup and why? which is good for session performance. How to make lookup persistent and how to remove stale data from that lookup. how commit works - when we stop or abort data. Explain in both cases. What is factless fact table and have you ever used it in real time scenarios.
In which tables receopt application form Appliy to field Value will come. I know one table i.e, ra_customer_trx.trx_number. Could U please any one tell me other than this except(ra_customer_trx and ar_payment_schedules_all tables). plz mentioned tables_name.Column_name.
Can 33kva/415v transformer be run underground? If the answer is yes why and if the answer is no why?
how to convert 120 into "one hundred twenty rupees" and vice versa
Would the copy turn anyone off? Does it appropriately reflect our company?
|
OPCFW_CODE
|
Premiere Pro CC 2014 renders incredibly slow / Cineform
I created a movie with Adobe Premiere Pro CC 2014 and I have serious performance issues when it comes to export the movie.
The first 10% renders pretty fast, taking 1 hour. After that, every additional 1% takes about 4 hours! (I have never let a render finish; 14% is the furthest so far.)
Here are some facts that might help to isolate the problem:
Project:
Length: 24 minutes in total, split into several sequences (up to 4 minutes each) and arranged on a "main sequence"
Input: Cineform AVI files, 1080p, 59fps (I converted GoPro 3+ footage (3D!) using GoPro Studio to the Cineform format for further processing)
Desired Output: Same as input, preserving 3D (Render 3D Intermediate)
Effects used: Warp-Stabilizer (default settings) and Auto-Color (smoothing 1-2 sec) on almost every clip.
Hardware / Software:
i7-3770K @3.5 GHZ (Quadcore)
16 GB RAM
NVIDIA Geforce GTX 970 (also tried with a ATI Radeon R9 290X)
Windows 8.1
SSD for OS and programs, RAID 0 for media files and project, separate drive for rendering output.
Then I tried to render using AME with the same settings. The whole video rendered in about 14 hours, which is acceptable in my case. The problem here is that the 3D gets lost, even if I select "Render 3D Intermediate": the picture for the left eye is the same as the one for the right eye.
This is not happening when I do a direct export. But the direct export gets almost stuck at around 10%.
There is almost no difference in choosing Software-Rendering and GPU. Also, the GPU has almost no load.
Premiere Pro uses the CPU for about 45% (total usage 65%) and takes up 5-9 GB of memory (total usage 65%). Disk usage at around 14b/s.
I think there are two problems. One is that AME renders differently (no 3D) and the other is that the direct export takes way too long.
Can someone help me with any of these problems?
Thank you,
Martin
Update 18.11.2014
I did some additional research and found out that the slow rendering happens with this combination:
Auto-Color using Temporal Smoothing
Render 3D Intermediate
Without rendering the 3D intermediate, Temporal Smoothing is not a problem. And without Temporal Smoothing, rendering the 3D intermediate is not a problem.
Might this be a problem in the codec or in Premiere?
It may very well be. Probably best to bring it to Adobe's support and see what they say.
|
STACK_EXCHANGE
|
This XP640 was obtained from eBay and, once again, was working fine until it died without warning and the fluorescent display went blank. After opening the case I found that the PCB-mounted sealed transformer was very hot, which led me to believe that this was yet another power supply failure caused by old capacitors. After removing the capacitors, I found they had been manufactured in 1984, were very leaky, and their capacitance, when measured on a meter, was changing all the time. The marked values were as follows
with some changes required because of component availability.
After replacing all the capacitors and powering on, the voltages on the PCB seemed correct but the display was still blank. The voltages are brought out to a male 6 pin Molex connector near to the VFD
and, numbering from left to right, are
The filament voltage, measured across the end pins of the VFD, was less than 1V AC, which was not high enough to create a display. There is a TDK CD-4002 voltage regulator on the PCB to generate the filament voltage and -26V, -19V & -10V for the anodes and grids, and I removed it to test. It uses a +5V supply to generate these outputs, and when driving a resistance similar to the filament the output was much higher; very strange.
I vaguely remembered that it should beep on power up, which it now did not, so I turned my attention to the Z80A CPU and, using a logic analyser, tried to check that it was running correctly. However, it seemed to be stuck in a loop and did not execute the program as it should. I had a dump of the EPROM contents and used this to check the instructions executed after a reset. The first instruction at address 0000 is "C3 4D 00", a jump to 004D, but I was surprised to see that execution jumped to 004A! The instruction at address 004A was "C3 99 21", a jump to 2199, but it jumped to a different address instead, which proved that the CPU was faulty. Unfortunately, the Z80A was soldered directly to the PCB, so to minimise damage to the board I cut the legs and desoldered them individually afterwards. After fitting an IC socket and a replacement CPU, I switched on the XP640; it beeped and the display was working again. But why was the filament voltage so low before?
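The expected jump targets can be checked mechanically from the opcode bytes. A small Python sketch (illustration only, not part of the repair): a Z80 `JP nn` instruction, opcode `C3`, stores its 16-bit target little-endian, low byte first.

```python
# Purely illustrative: decode a Z80 absolute jump (JP nn, opcode 0xC3).
# The 16-bit target is stored little-endian, low byte first.
def jp_target(op):
    assert op[0] == 0xC3, "not a JP nn instruction"
    return op[1] | (op[2] << 8)

# The two instructions observed in the EPROM dump:
print(hex(jp_target([0xC3, 0x4D, 0x00])))  # 0x4d   (should jump to 004D)
print(hex(jp_target([0xC3, 0x99, 0x21])))  # 0x2199 (should jump to 2199)
```

Any deviation from these decoded targets on the logic analyser points at the CPU, exactly as observed.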
By default at power on, all outputs of the 10937P VFD driver IC are driven to -10V, and I believe that, in these circumstances, the CD-4002 is unable to provide enough power for the driver IC and create the normal filament voltage of 6.36V AC. Also, in this state the CD-4002 must present a considerable load on the +5V supply. My theory is that the CPU failed, and the display driver IC was left in a state where its outputs were fixed, which loaded the +5V supply and caused the transformer to overheat. The age of the capacitors fitted made the situation worse.
It should be noted that there is no fan in the XP640 meaning it is very easy for it to overheat. It is also quite difficult to desolder components on the PCB without pads coming loose so only remove components if you have to.
As a final note I feel that I must draw attention to a couple of interesting points in the design of the XP640.
The XP640 started to enter the device menu automatically at power on. Realising that the "Menu" keyswitch was to blame, I desoldered it and it measured 1300 ohms across the contacts, which was low enough to be seen as a key-press. There was some contamination at the base of the switch, possibly caused by a liquid spillage, coffee probably :-).
I could find no way to disassemble it without causing damage, so I left it in a solution of propanol overnight. By the next day the switch was working again but, as the "Menu" key is used more than the others, I swapped it with the "Emulator" key and the XP640 was functioning correctly again, see below.
|
OPCFW_CODE
|
Here’s my next billion-dollar idea:
I don’t have a heck of a lot of time lately, but if you have any questions or ideas I’m happy to chat here.
Edit: I think I tried to reply to a direct message but I’m on mobile. Lol.
Didn’t get a chance to watch your video yet, but I thought this would be a cool idea for learning languages or chatting with a bot to practice. It would correct your grammar and let you practice speaking at the appropriate level for the learner.
If you add the prompt “Hello, my name is Ali. (Merhaba, benim adım Ali)”, the response will be in the same format, that is, with the translation attached in parentheses.
That’s a great idea. I’ll add that as a prompt when I expand on this idea. I think I’m onto something really valuable here.
I’m going to add a few cases:
the user is distracted for various reasons (sleep deprived, drama at home, hungry, etc). Teachers often have to engage with students who can’t show up for one reason or another. In this, I’ll add compassionate listening back to the toolkit
the user is reluctant to engage because they are shy, insecure, or detached from their passions. In this case I’ll use the reference interview to investigate what the user really wants and needs
the user is mischievous and keeps trying to talk about unrelated things like video games and gossip, but the chatbot uses it as a teachable moment by subtly redirecting the conversation. For instance, if the user wants to gossip, the chatbot might discuss healthy communication techniques and boundaries. If the user wants to talk about video games, the chatbot might discuss the art of storytelling. The idea is to meet the user where they are.
Let me know if there’s any way I can help. I’m working on developing business ideas and building working prototypes full-time.
I’ve gauged interest in the language learning idea on Reddit and it was received well, so I think there’s something there if you’re serious about commercialization.
That’s really great! I encourage you to borrow my code and join a startup. I am not presently interested in startups for a number of reasons, but primarily I will do more good for the world by focusing all my energy on research. However, if you have any ideas, proposals, or research problems I’m happy to make a YouTube about them!
Hi, I’m new to OpenAI. I’m working on a medical QA project and have a problem: our model has 100 examples and we did fine-tuning, but the last sentence in the picture exceeds the maximum length. How can I fix this problem?
Increase your token limit with the slider
Hi Dave, This is great! Would it be possible to include a textbook chapter in the prompt so the student can ask questions about it? I wrote an OER textbook at howargumentswork.org and want to create followup noncommercial interactive opportunities for students.
I doubt it would be necessary to include a chapter. GPT-3 was trained on hundreds of gigabytes of text.
Follow-up video is done now that finetuning is working. This video covers adding edge cases and adversarial behavior. It’s pretty solid. It can’t yet handle everything a real teacher would be confronted with, but it handled inappropriate sexual topics, anger, and frustration with flying colors.
Okay, I just want it to respond in ways that are consistent with the textbook’s approach to the material. I’m testing it.
Hi @daveshapautomator ,
A few months ago I was playing with a similar idea. I coded a script to convert audio (microphone) to text, translate it to English, send the question to GPT-3, translate the answer back to the original language, and speak it as audio again.
My prompt was engineered to answer questions to children on safe topics (it would redirect children to talk with their parents on more sensitive topics like sex, violence or religion), and using simple language.
My problem was, how can I check that the information being said by GPT3 is in fact correct? There were times when even if the prompt focused on factual info and the temperature was not too high, the information was still a bit controversial. How would you approach that in your billion-dollar education idea?
This is a nontrivial problem! Maintaining ground truth and “knowing what you know” requires theory of mind as well as a repository of facts. This was one of my earliest experiments in NLP and GPT-3 where I tried to create an offline repository of knowledge by downloading and registering Wikipedia in SOLR.
However, I found that GPT-3 is pretty well versed on a lot of facts and ideas. I will have to think about how to get GPT-3 to be more reliable on facts, and to verify. For instance you can just ask if a statement is true or false.
It even corrected me on the population of Toronto:
This is similar to my video on reducing confabulation.
Basically, what you do is split a task up into several parts. You can ask for facts, figures, ideas, and data. Then you can ask about its veracity. These are distinct cognitive tasks for humans, so it should be no surprise that they are separate tasks for GPT-3.
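As a hedged illustration of that split (the `complete` callable below is a stand-in for whatever completion API you use, not a real client), the generate-then-verify pattern might look like:

```python
def answer_with_check(question, complete):
    """Two-pass scheme: generate an answer, then ask the model to judge it.
    `complete` is any prompt -> text callable (an LLM API in practice)."""
    answer = complete(f"Answer factually and concisely:\n{question}")
    verdict = complete(f"Statement: {answer}\nTrue or false? Answer one word.")
    return answer, verdict.strip().lower()

# Demo with a canned stand-in model (a dict lookup, purely for illustration):
fake = {
    "Answer factually and concisely:\nWhat is 2+2?": "4",
    "Statement: 4\nTrue or false? Answer one word.": "True",
}
print(answer_with_check("What is 2+2?", fake.get))  # ('4', 'true')
```

The point is only the structure: the fact-producing call and the veracity-judging call are separate prompts, mirroring how they are separate cognitive tasks.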
|
OPCFW_CODE
|
15 Types of Atomic Essays You Should Write
Today I want to share with you the 15 types of atomic essays you should try writing for #Ship30For30. I highly recommend writing at least one essay of each type - the more formats you experiment with, the more you'll learn!
Start with your 3 main Content Buckets, and pick a topic you want to write about. For example:
Digital Writing > How to create an online course.
Mental Models > How to get better at original thinking.
Startups > How indie hackers create profitable businesses without funding.
Then, pick one format from the list below that you haven't tried yet, and challenge yourself to write a post using this format:
- Organize your best notes
Go through your personal notes, find and organize the most useful ideas you have written about the topic, and summarize them in your atomic essay.
- Share the things you're learning
Take a course, book, podcast episode, talk, or an interview about the topic, and write about the 3-5 most useful things you have learned from it.
- Write a tutorial
Create a step-by-step guide on how to do something related to this topic.
- Copy what already works
Find several of the most popular social media posts about the topic, and combine the most valuable ideas from them. Combine ideas from at least a couple of posts in each category:
• Most upvoted subreddit posts
(or Indie Hackers, Hacker News, and LessWrong, if it applies to your niche).
• Most viewed YouTube videos.
• Most retweeted tweets.
- Apply ideas across fields
Take a valuable idea from a different field, and apply it to the topic you're writing about. For example, I would take the most useful strategies for creating startup ideas I have learned, and write about applying these methods to generating creative ideas for writing fiction.
- Answer your own questions
Make a list of questions you have, things you would like to learn more about. Do some research, and find an answer to one of your questions. Bonus points for making a post in a relevant subreddit, asking people to help you to find the answer, and summarizing the most insightful replies.
- Answer community questions
Find communities where people ask questions about the topic, find the most popular question, do a bit of research, and write an answer to it. Or ask people on Twitter or a subreddit about the biggest problems they're struggling with, things they need help with, and write a good answer to one of them.
- ELI5 a complicated idea
Take a valuable but complex or confusing idea related to the topic, and find a way to explain it so that even a 5-year-old could understand it.
- Lessons learned, mistakes made
Write about your biggest success in the last 5 years, or about your biggest mistake. Or, write about the 3 most important things about the topic you would want to tell the version of yourself from 5 years ago, or about the most important mistakes you want to stop them from making.
- Do an interview
Find a person you would like to interview (from our Ship 30 community, for example) about this topic, set up a 30 minute call with them, prepare 3-5 good questions, and write down the most useful things you have learned from their answers.
- Share a strong opinion
Share a strong opinion you have about the topic. React to already existing opinions - find a popular opinion you disagree with and explain why it's wrong, or find a misunderstood/unpopular/controversial idea you strongly agree with, and defend it, explain why more people should know about it.
- Analyze a person or a project
Take a successful person, project, or a product relevant to this topic, analyze them, and explain what makes them successful.
- Make a prediction
Make a prediction about the future. What will change about this topic in the next 5 years? Explain why you think that, so that when the prediction comes true or false you can analyze your thinking and learn from your successes/mistakes.
- Target popular keywords
Do keyword research about the topic. Analyze the most searched keywords about it, and analyze the most popular keywords big websites about the topic are ranking for. Write a post targeting these keywords.
- Create a list
Compile a list of the most useful tools, best learning resources, books, talks, podcasts, twitter accounts, tips, trends, etc. about this topic.
Write 2 essays of each type, and you'll have enough ideas for the rest of the month. Write an essay of each type for each of the 10 topics in your 3 buckets, and you'll have more than a year's worth of daily posts!
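The arithmetic behind those claims checks out (assuming the 15 formats, 3 buckets, and 10 topics per bucket described in this post):

```python
# Quick check of the essay-count claims above.
types = 15
buckets = 3
topics_per_bucket = 10

print(types * 2)                            # 30 essays: about a month of daily posts
print(types * buckets * topics_per_bucket)  # 450 essays: well over a year of daily posts
```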
Did I miss anything? If you have a good idea for another post type - please leave it in the comments!
|
OPCFW_CODE
|
For my white paper, I will be writing about the gender wage gap in the United States. I intend to focus mainly on salaried workers, but depending on how my research goes I may include hourly workers. So far, a lot of the research I have found talks about how women are at a disadvantage because they are often the caretaker of the home and children. However, if possible, I want to avoid talking about how women's other responsibilities interfere with pay. Although I understand how this role in life could affect work patterns, I don't believe this is the reason for wage inequality. The problem here is simple: women with the same skills as men get paid less for doing the same work.
Although much progress has been made over the years, a clear wage inequality still exists. Even after differences have been accounted for by economists and statisticians, an unexplained gap remains, which is attributed to gender discrimination. Some try to downplay this factor, insisting instead that work patterns such as education, hours worked, or experience create the difference. However, the facts cannot be denied. I will argue that gender discrimination is the root cause of the gender wage gap.
I have already found many reliable sources for the government portion of this paper. One main source will be the U.S. Census Bureau. In 2011, the U.S. Census Bureau conducted a report on Income, Poverty, and Health Insurance Coverage in the United States. Statistics show that women currently make 77 cents for every dollar a man makes. Another chart in a separate analysis provides a state-by-state breakdown of median annual earnings of full-time, year-round female workers compared to men. Lastly, a PDF file compares the changes over time and discusses possible reasons for the changes. Overall I believe that the U.S. Census Bureau will be of great help for my paper. They provide reliable statistical information for current and historical years that simply presents the truth of the situation.
I also believe that the United States Department of Labor will be a good resource for me. On their website I have found information that looks at gender wage inequality in various fields of business. I feel this will be useful because when talking about the gender gap, many people discuss male-dominated industries as being correlated to discrimination. They also have statistics on education-level compared to earnings. Information about women employment in general is also available and will be of use when related to hiring practices.
I currently do not have many ideas for how to enact policy that could solve these problems. I believe that I first need to consider and research what critics against wage inequality due to gender discrimination believe. I have found a document delivered to the Department of Labor that analyzes various reasons for the wage inequality that are not related to discrimination. There are many interesting factors to consider including women’s negotiations of their salary, their acceptance of other forms of compensation, and the effect that taking time off for having children can have on salary. I will try to remain unbiased throughout my investigation but I think it is fair to say that at least some of the gender gap can be attributed to discrimination. For now, I believe that the Government aspect of this paper has provided me with reliable statistical information. In the future, I plan on thinking about what role the Government could play in attempting to resolve these issues, and if in fact they are even capable of making a difference.
|
OPCFW_CODE
|
Not to prevent me from accessing the available materials.
Not to control access to information.
If what I am learning from your teachings and from your tests and from other students can be entirely replaced by Googling through test banks, then you're not helping me advance.
If a presenter is reading off the slides?
If you're not utilizing what is available, whether Google or Khan Academy or iTunes classes or otherwise, you're not helping me make connections. To think. To research.
We see similar transitions arising in many human pursuits. In journalism. Booking travel. Financial markets. Programming. Music. And education. And in an earlier era of teaching, simply bringing calculators to a test.
Don't make me memorize. Make me think. Make me research.
It appears the professor has unwittingly also proved his teaching approach has failed.
I'm puzzled by this. Let's say you teach basic physics or algebra... how are you supposed to "extend" the material, particularly testing material?
I always thought the benefits of having a teacher were:
1. Human contact
2. Ability to answer arbitrary questions in an instant
3. Ability to adapt lectures to the audience
4. Students in the presence of a room full of other people trying to learn the same thing at the same time
Those would all be great benefits even if the tests were all the same, administered in a standardized way, nationwide, by third-party proctors with third-party graders.
That's not to say that there's anything wrong with independent study, online courses, or ad-hoc groups of students learning together.
I just take issue with the idea that a teacher, in order to do their job, must also compose novel tests every year as though the new tests would somehow be better than all the other tests used over the years. If teaching honest students, it just doesn't sound like an efficient use of teaching resources to re-invent the wheel each time.
And there are objective benefits to using the same or similar tests from year to year. One is that you can see if your class is improving or lagging in specific areas compared with previous classes. That could help you hone your teaching over the years. "Wow, I tried playing this game to illustrate economics, and these students scored way higher on the arbitrage questions than the previous 5 years!" Or: "Gee, I thought that group project might be good, but the test scores dropped this year."
You're clearly familiar with running the process line, making incremental improvements, and optimizing your own work, but are you equally comfortable being the widget processed within the assembly line? And is the widget getting the best value?
My trip down that educational assembly line was seriously and mind-numbingly unpleasant, and I can only imagine what it's like with all of the current standardized-tests model. Looking back, what we were taught and what we learned for those tests was sufficiently ridiculous and, well, wasteful. We didn't learn that most of what we learned would be outmoded, that the tools we were taught would be gone, and that memorization was far less practical than learning how to research.
As a presenter, I don't want to repeat that for the folks I am teaching. Though thankfully, I don't have to teach to standardized tests.
As an instructor, you're selling a service. Are your students buying?
A valid point. There are many ways of learning though, and if you want instruction and materials personalized to you, those are available -- albeit at a much higher cost. And there are other, self-directed methods of learning that are a much lower cost than either method (e.g., going to a library, doing research online, etc.). [Aside: who pays the cost is a separate issue, but someone must pay it.]
Given that society is constrained by scarce resources, I think that re-using tests is a perfectly reasonable allocation of resources for many teaching situations. Other materials are re-used regularly, such as textbooks, and there's nothing personal about that. Would you say that using the same textbook as someone else turns you into a "widget"?
It's unfortunate that your educational experience was so unpleasant. My K-12 experience seemed quite wasteful as well. But I think that has more to do with incompetence and laziness. Doing more personalized teaching requires more teachers, which means they will have an even harder time attracting enough quality talent, and an even harder time firing bad teachers. That doesn't sound like a net win on quality to me, even if it is more personalized.
|
OPCFW_CODE
|
8051 Based Platforms
All these SoCs are equipped with an enhanced Intel 8051 microcontroller core, which means that we build Contiki using the Small Device C Compiler (SDCC).
Contiki platforms using this CPU code are:
- Sensinode: This is maintained externally.
- TI cc2530 Development Kit: This is maintained in the official repo.
Due to stack size limitations, it is very unlikely that you will be able to run IPv6 code from master on those devices. If you want to run IPv6 code on the CC2530DK platform, your best option is the cc-ports branch (maintained externally). This is documented extensively in the "8051 Install and Use" page.
The following hardware is supported:
- SmartRF Evaluation Board (SmartRF05EB rev. 1.7 & 1.8) with cc2530 Evaluation Module (EM)
- cc2531 USB Sticks
There is no support for the cc1110. However, like the cc243x and cc253x, this SoC is based on the 8051 MCU. This means that the existing Contiki CPU code (cpu/cc253x) can serve as a basis for cc1110 porting efforts. Due to its RAM and flash size, it is unlikely that a cc1110 port will support the same feature set (begging to be proven wrong here!). This thread on contiki-developers contains an exchange of ideas on how to go about porting for this SoC.
This table lists the various hardware features for the devices supported by Contiki. Unless explicitly mentioned otherwise, the features listed here have been implemented.
Texas Instruments cc2530 DK

| Feature | cc2530EMs on SmartRF05EB | cc2531 USB Dongle |
| --- | --- | --- |
| MCU | Enhanced Intel 8051 core, using the standard 8051 instruction set | (same) |
| RAM | 8 KB with data retention in all power modes | (same) |
| RF | 2.4 GHz IEEE 802.15.4 compliant RF transceiver | (same) |
| Sensors | VDD, on-chip temperature | (same) |
| AES | Hardware AES encryption/decryption (AES co-processor)⁴ | (same) |
| RNG | Hardware random number generator | (same) |
| I/O connectors | RS232 (UART0) | USB |
| Programming | USB (to program) | Debug connector |
| Serial flash | 256 KB on-board SPI flash⁴ | |
| Other | LCD⁴ | Hardware USB support |
1. Only on devices with the RC2301 (cc2431F128)
2. LED 4 is mapped to the same port/pin as B1. The current implementation configures the pin as input and supports the button.
3. The board has two buttons but only B1 is connected to the SoC
4. Driver not implemented
- Prepare your System - Requirements
- How to Install and Use Contiki for 8051 Platforms
- Understanding 8051 Memory Spaces (and how I learnt to avoid stack overflows)
- Understanding Code Banking (and how I learnt to spot banking errors before programming my node with firmware that crashes left right and centre)
- How to increase maximum available stack
- Testing and bug reporting is always welcome
- Experimentation with TCP and embedded webservers. Reports of success/failure and patches are very welcome
- RPL collect support needs work
- Missing SmartRF drivers:
- SmartRF LCD
- SmartRF Joystick
- SmartRF Serial flash
If you are willing to contribute driver code, have a good read of this page. Also, due to licensing, I can't consider code directly derived from TI software examples.
|
OPCFW_CODE
|
Downloads of v 0.11.0
MailKit is an Open Source cross-platform .NET mail-client library that is based on MimeKit and optimized for mobile devices.
* SASL Authentication via NTLM, DIGEST-MD5, CRAM-MD5, LOGIN, PLAIN, and XOAUTH2.
* A fully-cancellable SmtpClient with support for STARTTLS, 8BITMIME, BINARYMIME and PIPELINING.
* A fully-cancellable Pop3Client with support for STLS, UIDL, and APOP.
* A fully-cancellable ImapClient with support for LITERAL+, NAMESPACE, CHILDREN, LOGINDISABLED, STARTTLS, MULTIAPPEND, UNSELECT, UIDPLUS, CONDSTORE, ESEARCH, SASL-IR, COMPRESS, ENABLE, QRESYNC, SORT, THREAD, ESORT, SPECIAL-USE, MOVE, XLIST, and X-GM-EXT1.
* Client-side sorting and threading of messages (the Ordinal Subject and the Jamie Zawinski threading algorithms are supported).
* S/MIME and PGP support via MimeKit
To install MailKit, run the following command in the Package Manager Console
PM> Install-Package MailKit
* Implemented the NTLM SASL authentication mechanism.
* Fixed CRAM-MD5 and DIGEST-MD5 SASL mechanisms to work properly.
* Modified the DIGEST-MD5 logic to use System.Security.Cryptography.RandomNumberGenerator instead of System.Random for generating the nonce. For the PCL version, it now uses Windows.Security.Cryptography.CryptographicBuffer.GenerateRandom().
* Modified ImapFolder.Fetch (int min, int max, ...) to work for empty folders. (issue #35)
* Added a work-around for IMAP servers that send "* OK [UNSEEN 0]". (issue #34)
* Fixed ImapFolder.GetBodyPart(int ...) to use FETCH instead of UID FETCH.
* Added a version of IFolder.GetBodyPart() that takes a bool headersOnly parameter.
* Added a BodyPartBasic.IsAttachment convenience property.
* Added new wrapper APIs so developers don't need to pass CancellationTokens.
* Improved documentation
- MimeKit (≥ 0.32.0.0)
| Version | Downloads | Last updated |
| --- | --- | --- |
| MailKit 0.11.0 (this version) | 115 | Monday, April 14 2014 |
| MailKit 0.10.0 | 79 | Monday, April 07 2014 |
| MailKit 0.9.0 | 54 | Thursday, April 03 2014 |
| MailKit 0.8.0 | 30 | Monday, March 31 2014 |
| MailKit 0.7.0 | 91 | Wednesday, March 12 2014 |
| MailKit 0.6.0 | 67 | Thursday, February 27 2014 |
| MailKit 0.5.0 | 79 | Sunday, February 16 2014 |
| MailKit 0.4.0 | 46 | Sunday, February 09 2014 |
| MailKit 0.3.0 | 29 | Thursday, February 06 2014 |
| MailKit 0.2.0 | 15 | Monday, February 03 2014 |
|
OPCFW_CODE
|
Long Story Short:
Issue: editing cmdline.txt caused the RPi3 to stop booting
Solution: plugged the SD card into an adapter (a Polaroid Cube+ in my case) that could read the card and used Ubuntu to revert the cmdline.txt file
Original cmdline.txt contents: dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
I was following this tutorial to use a GPS with the RPi3. After I changed the cmdline.txt file and rebooted, the RPi3 didn't come back up. At first I thought I had shut the RPi3 down instead, so I unplugged and replugged it. It still didn't boot up after that, so I thought I had messed up the SD card.
Initially, when I inserted the RPi3 micro SD card into my laptop, the card reader did not recognize the SD card. I then started to use different card readers and ended up inserting the RPi3 SD card into my Polaroid Cube+ and then connected my Polaroid Cube+ to my laptop. My laptop recognized the SD card that way; however, it only saw the "RECOVERY" partition. I didn't see "cmdline.txt" in there so I googled for different solutions.
My second attempt was to put the SD card back into the RPi3 and press SHIFT on the attached USB keyboard while the RPi3 boots to get into Recovery mode (NOOBS). Once I got into Recovery mode, I clicked the "Edit Config" icon. That allowed me to change some configurations on "Boot" and in "cmdline.txt". I copied the default cmdline.txt configuration from here: http://elinux.org/RPi_cmdline.txt and rebooted. This did not fix the issue; however, it got me one step further. In addition to the "VFS Unable to mount root fs on unknown block()" error, it gave me something like "entering kdb on processor 2 due to keyboard entry."
My third attempt worked like a charm. I was panicking because I didn't have any backup laptops or computers, but then I remembered that I have Ubuntu installed in VirtualBox. I started Ubuntu and then attached the SD card (via the Polaroid Cube+) to the Ubuntu virtual machine. HOORAY!!! Ubuntu read ALL the partitions on the RPi3 SD card!!!! The four partitions are root, boot, RECOVERY, and SETTINGS. The first thing I did was copy my code for backup. The second thing I did was look for the cmdline.txt file. Luckily, I had made a backup of the cmdline.txt file in the same directory before I changed it. I replaced the cmdline.txt file with the backup file, inserted the SD card into my RPi3, and the RPi3 BOOTED AGAIN!
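The recovery hinged on having a backup copy of cmdline.txt sitting next to the original. A small Python sketch of that habit (a hypothetical helper, not from the post; the boot-partition path is a parameter because mount points vary between systems):

```python
# Hypothetical helper: keep a .bak copy of cmdline.txt next to the original
# before editing, so a bad edit can be reverted by copying it back.
import shutil
from pathlib import Path

def backup(boot_dir, name="cmdline.txt"):
    src = Path(boot_dir) / name
    dst = src.with_name(name + ".bak")
    shutil.copy2(src, dst)  # copy2 preserves timestamps as well
    return dst

def restore(boot_dir, name="cmdline.txt"):
    src = Path(boot_dir) / name
    shutil.copy2(src.with_name(name + ".bak"), src)
```

Run `backup()` before touching the file; if the Pi stops booting, mount the card on another machine and run `restore()`.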
|
OPCFW_CODE
|
2.6. Data structure overview
Interchange has three major data structures, which correspond to the master server, the catalog, and the user.
You can examine two of these structures by setting in interchange.cfg:
This will by default dump an interchange.structure file which shows the global configuration, and a CATALOGNAME.structure file in each catalog directory showing that catalog's configuration.
The third structure, the user data session, can be viewed with the following ITL placed in a page:
This is held in a set of variables inhabiting the Global package. They define overall server behavior, and contain pointers to the catalog structures.
The Global configuration is defined in interchange.cfg and any files that it reads via include statements. The configuration is produced by parsing interchange.cfg with the routine Vend::Config::global_config.
Directives can be defined for parsing by the catalog configuration within the global configuration -- and they can be deleted as well.
The only way to define new global directives is via hacking the source. Luckily, this is just about never needed -- you can define settings for use by your programs in Variable or other repositories.
Each Interchange catalog has its own configuration completely independent from others. It is basically produced from the file catalog.cfg in the directory defined as the base for the catalog. It is parsed by the subroutine Vend::Config::config.
We say "basically" because there are many ways to alter the catalog configuration. (CATNAME below refers to the name of the catalog being configured.)
Global catalog configuration preamble, affecting all catalogs, can be defined by the Global directive ConfigAllBefore. It defaults to catalog_before.cfg in the Interchange software directory (/usr/local/interchange).
An individual per-catalog preamble configuration is defined in $Global::ConfDir/CATNAME.before.
By default it would be /usr/local/interchange/etc/CATNAME.before.
A file in the catalog directory which is read before catalog.cfg. Deprecated.
The normal configuration file.
An individual per-catalog postamble configuration is defined in $Global::ConfDir/CATNAME.after. This can be used to prevent user catalogs from doing unsafe things -- for instance, enforcing the use of encryption or preventing the catalog from running in WideOpen mode.
By default it would be /usr/local/interchange/etc/CATNAME.after.
Global catalog configuration postamble, affecting all catalogs, can be defined by the Global directive ConfigAllAfter. It defaults to catalog_after.cfg in the Interchange software directory (/usr/local/interchange).
Any configuration passed on the command line at Interchange startup is applied last. For instance, to test out a catalog named foundation with a different invocation URL without having to alter the config files:
bin/interchange --foundation:VendURL=http://localhost/cgi-bin/found \
--foundation:SecureURL=http://localhost/cgi-bin/found \
That will set the foundation catalog directive values VendURL, SecureURL, and RobotLimit, overriding any settings in the configuration files.
Interchange has dynamic catalog configuration as well. See Programming Watch Points in catalog.cfg.
I see that labels are always transparent even if you turn transparency off in the inspector.
They become transparent at runtime; in the IDE they behave “correctly”.
Is it a bug?
Any tips to make them not transparent? I need to see their borders and spacing, thanks
Here is how to create a non-transparent label control:
- Drag a Container control into the project in the left pane (not onto a page)
- Select the container control. It appears as a box
- Set its HasBackColor to Yes in the inspector
- Set the color you want underneath. It will not show in the IDE but will show when you run.
- Drag over it a label
- Make the Container control size the same as the label
- Lock the label on all sides (in the inspector)
Now you drag the container control from the left pane onto a window.
You set the size as usual in the IDE or in code with nameofthecontrol.width and .height (for the autosize I taught you, for instance), and address the content by nameofthecontrol.Label1.Text, for instance.
Creating controls is the way to get features not normally present in regular controls.
Are you planning to implement the transparency and the autosize soon, or is it something that you don’t give much importance to?
Horacio, I often use the comparison with giving a man a fish versus teaching him how to fish. I just tried to teach you how to fish by building your own controls.
Once you understand how to do that, you can support all sorts of features. The autosize I gave you can be applied here by adding a method to the container control, for instance SetText(string), where you set the size of the control and put the string in Label1.Text at the same time. If instead of setting the Container Control background color you create a picture to place in its backdrop, you can control its transparency. See http://documentation.xojo.com/index.php/Picture.Constructor(width_as_Integer,_height_as_Integer) to create a picture with a level of transparency, and http://documentation.xojo.com/index.php/Graphics.FillRect to fill it with a color.
With the same Container Control technique, you could create an autosizing button too, for instance.
[quote=114159:@Horacio Vilches]Are you planning to implement the transparency and the autosize soon, or is it something that you don’t give much importance to?
In case you were asking if Xojo Inc was going to implement the features you talked about, I have no idea. I am just a Xojo user like you, trying to provide the help I can by offering methods and workarounds I usually test myself before posting. And I can tell you that the features you wish implemented can be achieved through the method I described; I just tried the transparency for a Container Control backdrop image.
You can file a feature request using Feedback, or search to see if someone else requested it. I just did that for ‘autosize’ and found two entries. In one from 2007, the same kind of approach I offered is outlined.
If you search ‘label transparency’ you will find 16 cases, including 3 feature requests which do not match what I think is your request. About that, though, when you drag a control and don’t set it to Transparency On, its background is opaque grey. Windows 8.1. Is that not the case for you?
I think you’re confusing transparency with background. If you want a background colour, use the above suggestion; if you want it to be the system window colour, set Transparent = False.
I can’t edit the above. Think of transparency as meaning whether to draw the thing it’s placed on or not. So if you put a label (100x100) on a red-filled Canvas with Transparent = False, the red under the label will not be drawn at run time; you will be able to see the window background. If Transparent = True then the red will be drawn and you will see text on the canvas, not a ‘hole’.
Good that you make the distinction. Indeed, if Transparent = False the default window background color should show. But it is not so clear reading the OP’s first post, where he says that apparently, even with that setting, he finds the label to be transparent.
And yes, the workaround I offer sets the background color, which is different.
I was talking about the label background, which is never transparent even if you set it as transparent in the inspector
Horacio, you get me confused. From your original post, I had understood that when you set the label NOT transparent, it was transparent anyway.
From what you just wrote, is it that when you set a label to transparent, it remains grey ?
Below is an example of how label transparency works.
I placed a canvas on the form and put a color into it.
I put two labels on the canvas one with and one without transparency.
Non-transparent labels always take on the window color, not a custom color.
If you want a different background then you need to use Michael’s good suggestion above.
This is native behavior on Windows, and currently Xojo does not emulate the functionality on OS X and Linux.
So the Label.Transparent state is just ignored on those platforms, and labels are transparent all the time.
Indeed, on Mac and Linux the Label is always transparent. But the method I posted works just as well.
How can you talk about the label background? Labels do not have backgrounds; you need to get your head around that. See the picture posted by Bob above? That’s not a background being drawn on the second Label; that’s the Canvas underneath not being drawn, which is why it’s the same colour as the Window. Labels are not created to have backgrounds. You need to put a label on something that will paint colour and set Transparent to False to make it look like there is a background. No, it probably won’t change in the near future in any language.
I’m sorry, but you are mistaken. The standard behavior for labels (in Windows) is setting a background color, including a “transparent” color; the usual default is using the parent window color. So it really is painted. You can have whatever you like under the label and it will paint a background color over it (except, of course, a transparent color).
But I’m talking about Xojo, not about what you can do in something like MSVS. In REALbasic before the Label was added, after the Label was added, and in Xojo (I’m guessing, I haven’t upgraded) there is no Background property, so you cannot have a background.
conversations online are never simple
Internally there is, and it is set to the current parent window color. But it’s not exposed. They chose to expose the transparency option only partially and did not implement a label background color selection for the opaque option. What we see is just a matter of the choices the Xojo designers made trying to make their life easier in the cross-platform world. I’m not happy with the inconsistency of this behavior; it should behave as expected on the other platforms too. The best way, for me, would be having the backcolor exposed and emulating the function, on the platforms that lack it natively, by painting a rectangle behind. Xplat consistency. But this is just my opinion. (Just for the curious, in terms of the Win API there are SetTextColor(), SetBkColor(), and SetBkMode().)
In VS you also have border style, background image, and padding to mention a few
Feature requests are the best way. Or using Declares. Or building controls. Or using VS
Yes. But the Xojo designers opted to leave it out. Label background transparency was exposed as a design functionality, so it’s fair to expect it to work in the IDE and at runtime, xplat. Horacio’s post proves this inconsistency occurs.
Unlike SQL databases, NoSQL databases are good at storing unstructured data like texts, photos, videos, and PDF files. They also tend to be better about scaling up read-only operations. That said, each NoSQL database is different, and they’re all designed to optimize for specific things that you wouldn’t necessarily get out of a standard SQL database. MariaDB is used by a smaller percentage of developers – just 17.9%. It has an interesting history – back when Oracle was acquiring MySQL, a bunch of developers got worried about what that would mean for one of the most relied-upon SQL databases.
Alternatively, Platform-as-a-Service (PaaS) providers like Heroku or Back4app Containers offer a more straightforward approach. You can deploy your code on the ready-to-use platform, and the PaaS provider takes care of the infrastructure and scaling. Hopefully, the top backend technologies we explained above will help backend developers make the right decision.
Why use a backend framework?
This means your site will now have to store information about products, purchases, user profiles, credit cards, and more. If you want easy recruiting from a global pool of skilled candidates, we’re here to help. Our graduates are highly skilled, motivated, and prepared for impactful careers in tech. Please check the Best 10 Mobile App Hosting Providers that will accelerate your time to market. Scala is a high-level language that combines object-oriented & functional programming to make it more concise. This concept of classes or object-oriented programming was missing in C language.
- These technologies interact with the front-end, often using APIs, to form a full technology stack.
- Backend developers often use SQL to communicate between relational databases.
- They are very similar technologies, but many developers agree with me that Postgres is simply the more modern solution.
- It prioritizes ease of use and convenience by providing default solutions for everyday problems.
- Still, they will need to manage servers and watch the servers around the clock.
- To keep your site secure, you shouldn’t give that same level of access to all other users.
The frontend is everything a user sees and interacts with when they click on a link or type in a web address. The web address is also known as a URL, or Uniform Resource Locator, and it tells what webpage should load and appear in your browser. Fortunately, you can become a back-end developer without a degree by taking classes and learning on your own.
Are you ready to discover your college program?
As a result, it can be customized to meet specific use cases and enjoys support from a dedicated developer community. A preference for Python problem-solving is therefore essential. I was wondering if anyone can compare/contrast the differences between frontend, backend, and middleware (“middle-end”?) succinctly.
What is Frontend?
Software engineers oversee much of the software development lifecycle, including the planning, delegations, design, and implementation. They may code the software requested by organizations or identify problems and develop solutions. These professionals work in many major industries, including computer systems design, manufacturing, publishing, management, and insurance. The amount of back-end development courses available in a bachelor’s program varies. Usually, when you hire a web developer to set up your server-side, they use the server offered by your web-hosting company. Web developers will set up the server to handle specific requests from your website’s unique IP address, and they will also set up a link between the frontend and server-side.
A backend technology is anything used server-side to build stable and efficient web architectures. Back-end technologies include programming languages, databases, communication mechanisms, or frameworks that make up the building blocks of a web application’s back-end. If you are looking for a backend solution written in PHP7, CakePHP is the best back-end solution.
Skills You Need to Become a Backend Developer
The back-end application handles the business logic necessary for buttons, forms, and other interactive functionality on the front-end to actually work. For example, when a user submits their username and password to log in to a web app, this information gets sent to the back-end for authentication. Then the back-end would check a database containing user credentials to verify the login information was correct, and send a confirmation response to the front-end. As a result, things are getting much simpler for mobile app developers.
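That login round-trip can be sketched in a few lines. The names and the toy in-memory store below are made up for illustration, standing in for a real credentials database:

```python
import hashlib

# Toy credentials store standing in for the database mentioned above
# (usernames mapped to SHA-256 password hashes; all names here are invented).
_USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}

def authenticate(username: str, password: str) -> dict:
    """Back-end check: verify the submitted credentials and return
    a confirmation response for the front-end to act on."""
    stored = _USERS.get(username)
    ok = stored is not None and stored == hashlib.sha256(password.encode()).hexdigest()
    return {"authenticated": ok}
```

A real back-end would add per-user salts and a slow hash (bcrypt, scrypt), but the request/verify/respond shape is the same.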
Web forms bridge the communication gap between the company/organization and users. They are an easy way for users to make inquiries, contact customer service, submit data, and create accounts. In addition, frontend developers design and develop with many of the most popular and most used browsers in mind.
Frontend and backend
The back-end is the code that runs on the server, that receives requests from the clients, and contains the logic to send the appropriate data back to the client. The back-end also includes the database, which will persistently store all of the data for the application. This article focuses on the hardware and software on the server-side that make this possible. In general, a web designer uses Photoshop and other tools to create the graphics, typography, and visual layouts for websites or web apps.
PayScale reports that full-stack developers earned an average annual salary of $81,902 as of March 2023. When developing custom websites or programs, the backend is of utmost importance since it will be the engine that makes everything work. Knowledge of web services or API is also important for full stack developers. Knowledge of creations and consumption of REST and SOAP services is desirable.
Uniform Type Identifier (UTI): com.apple.binhex-archive
BinHex, originally short for "binary-to-hexadecimal", is a binary-to-text encoding system that was used on the classic Mac OS for sending binary files through e-mail. Originally a hexadecimal encoding, subsequent versions of BinHex are more similar to uuencode, but combined both "forks" of the Mac file system together along with extended file information. BinHexed files take up more space than the original files, but will not be corrupted by non-"8-bit clean" software.
Hexadecimal BinHex (.hex)
BinHex was originally written by Tim Mann for the TRS-80, as a stand-alone version of an encoding scheme originally built into a popular terminal emulator. It worked by converting the binary file contents to hexadecimal numbers, which were themselves encoded as ASCII digits and letters. BinHex files of the era were typically given the file extension .hex. BinHex was used for sending files via major online services such as CompuServe, which were not "8-bit clean" and required ASCII armoring to survive. CompuServe later addressed this problem in the mid-1980s with the addition of 8-bit clean file transfer protocols, and solutions like BinHex stopped being used.
The file upload problem still existed on CompuServe when the Mac was first released in 1984. William Davis ported BinHex to the Mac using Microsoft BASIC in a simple version that could encode the data fork only, ignoring the resource fork. The rise in use of Internet e-mail coincided roughly with the release of the Macintosh, and Davis's version was posted on the Info-Mac mailing list by Joel Heller in June 1984. Several newer versions were published during 1984, resulting in BinHex 3 that could encode both forks.
Yves Lempereur, author of the first assembler for the Mac, MacASM, found that in order to upload his files to CompuServe he had to use BinHex. The BASIC version was very slow, so he ported it to assembler and released it as BinHex 1.0. The program was roughly a hundred times as fast as the BASIC version, and soon upgrade requests were flooding in.
Compact BinHex (.hcx)
The original BinHex was a fairly simple format, and not a very efficient one, because it expanded every byte of input into two, as required by the hexadecimal representation: an 8-to-4 bit encoding. For BinHex 2.0, Lempereur used a new 8-to-6 encoding that cut the size of the encoded files by a third, and took the opportunity to add a new CRC error-checking routine in place of the earlier checksum. The new encoding used the first 64 printing ASCII characters, including the space, to represent the data, similarly to uuencode. Even though the new encoding was no longer hexadecimal in nature, the established name of the program was retained. The smaller files were incompatible with the older ones, so the extension became .hcx, the c standing for compact.
Unfortunately, the compact format also had its problems. The 6-bit encoding produced a number of characters that some foreign-language mail programs would convert into local versions, thereby destroying the file. In addition, the file metadata information was still placed in the file in plain text, and therefore could become corrupted in the same fashion.
BinHex 4 (.hqx)
In order to solve all of these problems, Lempereur released BinHex 4.0 in 1985, skipping 3.0 to avoid confusion with the now long-dead BASIC version. 4.0 carefully selected its character mappings to avoid ones that were translated by mail software, encoded all the information including the file information, and protected everything with multiple CRCs. The resulting .hqx files were roughly the same size as the .hcx files, but much more robust.
At about the time BinHex 4 was released, most online services started supporting robust 8-bit file transfer protocols such as Zmodem, and the need for ASCII armoring went away. This left a problem on the Mac however, as there was still the need to encode the two forks into one. A team effort among Macintosh communications programmers resulted in MacBinary, which left the contents of the forks in their original 8-bit format and added a simple header for combining them on reception. MacBinary files were thus much smaller than BinHex. Lempereur released BinHex 5.0, almost identical to 4.0 with the exception that it used MacBinary to combine the forks before running the 8-to-6 encoding, but it saw little use, as he expected.
However, on the Internet, e-mail was still the primary method of moving files. At the time relatively few people had full access to the Internet, and services like FTPmail were the only way many users could download files. Years later when he first got onto the Internet, Lempereur was surprised to find that BinHex 4.0 was still extremely popular. The same ends could be achieved by first using MacBinary or AppleSingle to combine the forks, and then using Uuencode or Base64 on the resulting file, but none of these solutions ever became popular and BinHex 4.0 survived well into the late 1990s. File archives of classic Mac OS software are still filled with BinHexed files.
BinHex 4 file format
Looking at the contents of a BinHex file, one will notice that it has a message usually on the first line identifying it as BinHex, followed by many 64-character lines made up of seemingly random letters, numbers, and punctuation marks. Here is a sample of what BinHex actually looks like:
(This file must be converted with BinHex 4.0) :$f*TEQKPH#jdCA0d,R0TG!"6594%8dP8)3#3"!&m!*!%EMa6593K!!%!!!&mFNa KG3,r!*!$&[rr$3d,BQPZD'9i,R4PFh3!RQ+!!"AV#J#3!i!!N!@QKUjrU!#3'[q 3"&4&@&483N)f!3#Xaj6bV-H8mJ!!!B3!N!0"!*!$[3#3!cR@iiY)!*!'[I%4!!J Fp$X%X3@J!mZE6!GRiKUi$HGKMf0U61S46%i1"AB!TI,fLl!d1X3RDDE8ALfTCbM 8UP9p4iUqY-0k4krHpk9XK@`rbj2Ti'U@5rGH@+[fr-i4T6-qXpfl26,k!H5$Nml TIkI'(l3GI4)f8mII&01CNEbC2LrNLBeaZ1HG@$G8!Z6"k)hh,q9p"r6FC*!!Se" (ic,Pd(4(b`pflKC`H1&JN5)GVX3mREdH55[l`%`Yhp%q092c`A(hPV)!83Dr&f4 $$L#I1aM-"VjqV-q$34KQq6$M$f8#,Zc,i),!(`*ZN!$K$rS!LA%3cL+dYi"@,K( Z"`#3!fKi!!!:
There must be a text line, which is used by users and tools to recognize BinHex versions:
(This file must be converted with BinHex 4.0).
Any text before this line is to be ignored.
Everything except the "(This file..." line is then seen as an area of binary data, which is encoded into ASCII characters. The encoding algorithm divides three bytes of input into four 6-bit values, in a similar way to Base64. Numbers 0-63 are then assigned characters according to the following list:
When encoding, a <return> should be inserted after every 64 characters. After encoding, a colon is placed before and after the data.
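As a sketch of the mapping just described, the following Python splits the input into 6-bit groups and indexes into the 64-character alphabet. The TABLE string is the commonly documented BinHex 4 alphabet, supplied here as an assumption since the list itself is missing from the text; the real format also applies run-length compression, CRCs, line wrapping, and the surrounding colons, all of which this sketch omits:

```python
# Commonly documented BinHex 4 alphabet (64 characters; an assumption here,
# as the original list did not survive in the text above).
TABLE = "!\"#$%&'()*+,-012345689@ABCDEFGHIJKLMNPQRSTUVXYZ[`abcdefhijklmpqr"

def encode_6bit(data: bytes) -> str:
    """Map raw bytes to BinHex-style characters: every 6 bits of input
    select one character from TABLE, padding the last group with zeros."""
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 6)  # pad final group to a multiple of 6 bits
    return "".join(TABLE[int(bits[i:i + 6], 2)] for i in range(0, len(bits), 6))
```

Three input bytes thus become four output characters, which is where the 8-to-6 name comes from.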
- For example, the source code of the CWI version of hexbin included in macutils, in hecx.c line 187, uses the expression ((c)-0x20) & 0x3f to obtain the numerical value of an HCX digit with ASCII value c.
- RFC 1741, MIME Content Type for BinHex Encoded Files. Faltstrom, P., Crocker, D. & Fair, E. (December 1994).
- Binary-to-text encoding for a comparison of various encoding algorithms
- Prehistory of BinHex
- BinHex 4.0 Definition - Peter N Lewis, Aug 1991.
- Convert::BinHex, a Perl module to encode and decode BinHex files
- macutils, converts between different Macintosh file encodings for UNIX
- UUDeview, a cross-platform command-line decoder
- Online BinHex encoder/decoder
- OldHex, a macOS application for encoding BinHex files.
When is it necessary to use threading with serial communication in a GUI?
I am writing a simple GUI using Python 2.7 and the Tkinter module. The GUI needs to interface with two separate devices over serial, and therefore I will use Pyserial.
I will open two serial ports, but for the sake of maintaining responsiveness of the GUI, I have considered using multiple threads to handle any serial back-and-forth between the GUI and the two devices. I would like to know if this is what others would recommend, or if there is an alternative approach.
There are a few cases that will occur for my program.
1) The GUI will expect a response or multiple responses from a device only after it has sent the device a command.
2) The GUI will receive a response or multiple responses from a device without prompting it or expecting it.
For the first case, there are two ways in which commands will be sent to the devices where the GUI will be looking for a response.
1) The user will type in a line and hit send. The device will then respond. I imagine at the speed this would occur, I could get away with not threading if this was all that was going on.
2) The GUI will enter a function which will read a text file line by line, each line representing a command to be sent to the device. The function will only send the next line when it receives a confirmation response from the device. Since this function will only end when it reaches the EOF, the user pauses, or the user forces a return out of the function, it will take a long time to complete and the GUI would freeze while running it. I feel this should be in its own thread to avoid freezing.
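That line-by-line worker can be sketched as a plain function handed to a thread. Here send and await_ok are assumptions: send writes one command to the device, await_ok blocks until a confirmation (or timeout) arrives, and the stop event is how the GUI's pause/abort control reaches the thread:

```python
import threading

def run_script(lines, send, await_ok, stop):
    """Worker-thread body: send each command, wait for confirmation,
    and bail out early if the GUI sets the stop event (pause/abort)."""
    sent = []
    for line in lines:
        if stop.is_set():      # user pressed pause/stop in the GUI
            break
        send(line)
        sent.append(line)
        if not await_ok():     # device did not confirm; abort the script
            break
    return sent

# In the GUI this would run off the main thread so Tk stays responsive:
# threading.Thread(target=run_script,
#                  args=(lines, port.write, read_ack, stop), daemon=True).start()
```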
If I feel unprompted responses from the device are going to be a common event during the program, or I don't want any kind of GUI unresponsiveness, I can design a function, run in its own thread, that does nothing but check for responses. I think one thread can handle looking for responses from multiple devices. The function will also have a check value, controllable by the user, that will cause it to return, which should also close the thread.
A second thread will handle running the function which sends commands to the devices line by line. The function could be sending some command lines to one device or the other. If I have both threads open, is it bad to have both checking to see whether serial.readline returns data?
I would appreciate any input. Thank you.
You can poll the devices with root.after at, say, 100 millisecond intervals. In spite of claims that tkinter is not 'thread-safe', you should be able to change displayed values from a sub-thread. Exact answers depend on what you mean by 'serial port' and the exact handshaking and timing characteristics of the devices. Experiment, write some code, and when you have a problem, post a minimal example, the error, and a specific question.
I second what Terry has said. Use .after to periodically schedule tasks to update the GUI with data from the serial port.
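The root.after polling pattern both answers describe can be sketched as below. The schedule argument is injected only so the sketch runs without a display; in a real GUI you would pass root.after. The port is assumed to be an open pyserial Serial exposing in_waiting and readline() (pyserial 3.x names; older versions spell it inWaiting()):

```python
POLL_MS = 100  # the 100 ms interval suggested above

def make_poller(port, on_line, schedule):
    """Build a poll() callback for a Tk event loop.

    port     -- open pyserial Serial (assumed; any object with
                .in_waiting and .readline() works)
    on_line  -- GUI callback receiving each decoded line
    schedule -- root.after in a real GUI (injected for testability)
    """
    def poll():
        # Drain only what is already buffered, so the event loop never blocks.
        while port.in_waiting:
            on_line(port.readline().decode("ascii", "replace").strip())
        schedule(POLL_MS, poll)  # reschedule, keeping Tk responsive
    return poll

# In the GUI: poll = make_poller(ser, log_widget_append, root.after); poll()
```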
4 September 2009 at 8:33pm
I'm looking for a way to do an activation for a new member sign-up on my website. So I believe I will need to add a new column, like a flag status, on the member table, right? But my question is: when I log in (using ss security/login), which page/line of code do I need to modify to confirm that only members that have been activated ('status=approved') can log in to the site?
5 September 2009 at 12:28pm
I just did this, and it can be done two ways. One, you add a Decorator to the Member object for approval and have all sign-up forms write directly to the Member class. This also means you need to create a function in your Page.php that does an ApprovedMember check, plus add a field to your Page class that only approved members can view.
A slightly more complex way is to do your Member Decorator, then create a Registration Page with a DataObjectManager that stores member applications and writes them to the Member class only after they've been approved. This way you can use the standard CurrentMember control in templates and the "Who Can View this Page" option in the CMS.
The attached files require the DataObjectManager module installed. Hopefully I didn't screw up the syntax when I deleted the site-specific objects for my site.
There is a Member Decorator, a Registration Page, and a DataObjectManager that has an approval button.
You also need to add this line to your _config.php so that your decorator is recognized
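The line itself didn't make it into the post. Assuming the decorator class in the attachment is called MemberDecorator (a placeholder name here), the SilverStripe 2.x registration would look like:

```php
// _config.php -- register the decorator so it extends Member
// 'MemberDecorator' is a placeholder; use the class name from the attached file
Object::add_extension('Member', 'MemberDecorator');
```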
These files were made with a lot of help from UncleCheese and willr, so I can't take all the credit. Hopefully my inelegant code isn't too bad.
5 September 2009 at 4:21pm
Thx for your help, I got the DataObjectManager working and I'll go with your first option. I understand your idea of validating in Page.php: if the status of the user is not approved, I can redirect to the login page or something, so they must log in using an approved user id. But that's not exactly what I want, because on my site users can open all the pages, but logged-in members will have more features.
So what I'm looking for here is to do the validation in the login process, so it's not just checking user email and password, but also the approved status of the user; if not approved, they can't log in. Hope you understand what I mean.
6 September 2009 at 2:33pm
I understand what you're getting at. Unfortunately I couldn't find a way to easily extend the standard Login to check against the ApprovedMember. Hence the use of the DataObjectManager.
When a user signs up, their application is saved in a separate DataObject from the Member object. So technically they're not a member yet and they can't log in. When an admin approves the application, the data is copied from the user application to the Member object, which allows them to log in.
This way, in your template you can wrap any info you don't want the general public to see in the standard <% if CurrentMember %> <% end_if %>
Basically, anything in that if statement is only rendered if the user is logged in; otherwise it's not shown. So everyone can see every page, but ONLY logged-in users have the extra features. Plus you can always include the standard Member checks in your controllers if you don't want to do it at the template level.
Hope this was more clear
To do nothing is the most difficult thing to do. Ask any person to stop doing whatever he/she is doing and not do anything for some time. It's difficult. Some minds would still be engaged in the thoughts of the work they were doing earlier, some minds would be thinking of all they could do if not asked to sit idly in one place, and kids especially find it difficult to sit in one place, though of course when sternly scolded they'd sit anywhere they're told. :P
The point is, doing nothing in this post implies leaving everything and sitting at one place, and just observing, within and around. It is like suddenly stopping when in a race and looking around at the other runners and the audience, and also at and within oneself.
To do nothing is closely related to meditation and also to various relaxation techniques. Since I have been practicing doing nothing for last 21 years of my life, meditation and relaxation comes easy to me, sometimes so easy that I doze off while relaxing myself during shavasan (in Yoga).
People want to be busy all the time, doing this or that. Sometimes we don't even know why we're doing most of the things we do. One person's lifestyle/opinions influence another's, and so on... everybody wants to achieve something and lead a comfortable life... running, jostling, sparring sometimes... and finally, at the age of 60, when a person retires and looks back at his past 60 years, especially the youth, he wonders why he took some decisions...
We all live in a system and follow its rules. When few people decide to break the rules, others are affected. Be it more people training for a skillset less required or some people getting more salary than deserved, an imbalance always has a profiting and a losing set of people. Do nothing for a while and look within yourself, at least know yourself. If everybody did this and made decisions based on oneself, I think the balance would remain.
Doing nothing once in a while helps a person see his/her position in life. Since the race is a never-ending one, a person can join in anytime and leave anytime, while others keep running the same circle over and over again, perhaps a few shifting to wider circles once in a while.
The race is always there as an opportunity, and someone in the audience is always there to cheer if you perform well, but catching one's breath occasionally is up to the person... I've been catching my breath for the last four years; it's finally time to get back into the race, and I hope the reader will cheer me on. :D
P.S. - I think the recession, especially in US, forced people to "do nothing" and look into their lives.
P.S. - Would such a situation arise that doing nothing would be an important skill?
P.S. - If you feel bored while not doing anything, then you're not doing the "doing nothing" correctly!
Heart failure in adults and children is one of the biggest issues in healthcare and is cited as the largest cause of death in the US and worldwide (300,000 deaths per year in the US).
At Stanford, we have created the first technology that can non-invasively see inside the heart in 3D, with flow and pressure, using a simple 8-minute MRI exam with computational post-processing. This will revolutionize heart failure care and help millions of people. For a start, check out our online demo at http://www.morpheusmedical.net, and your heart will be lifted!
We are a funded startup leveraging the latest and greatest of computational capabilities, physics, OpenGL, HTML 5 and cloud services.
The founding team of Morpheus Medical is composed of 4 people:
* The head of the pediatric MRI unit at Stanford with a BS from Caltech and a PhD from Stanford
* A Stanford PhD in Computational Fluid Dynamics specialized in cloud computing on large clusters
* A Caltech graduate in CS with a PhD in Bioengineering and currently an interventional radiology fellow at Stanford
* A Cambridge University PhD in quantum matter with a master in business from Stanford GSB and ten years experience in building tech companies
We need brilliant engineers that want to make an impact in the lives of millions of kids and adults around the world. The prototype product has been used at Stanford Hospital in over 100 patients and already impacted the care of several kids with congenital heart disease. We now need to assemble a team to translate the prototype into a viable clinical product. If you like physics, computation and a team of very smart people dedicated to making a difference, come and join us!
We are looking to build a team of talented software developers and are searching for skilled, experienced, professional developers who are passionate about working on great technologies and making a difference. We have needs for short-term contractors as well as longer-term contract-to-hire or permanent employees and are willing to consider all levels of experience in order to find the best people. We are primarily focused on finding developers with experience in C++, OpenGL, or GPGPU.
We work out of the Hattery Labs, created by ex Googlers, based in the SOMA district of San Francisco and we have a chef on site! http://labs.hattery.com/
Responsibilities:
Design and implement solid, testable software
Design and implement software tests
Collaborate with software team and with subject-matter-experts in building our product
Required skills and experience:
Solid understanding of the C++ language
Professional experience developing C++ code
Understanding or at least solid familiarity with Java
Windows development using Visual Studio
Solid test-writing experience; we require 100% coverage for critical code sections and above 90% elsewhere
Strong knowledge of object-oriented design and data structures
Demonstrated experience working with an existing code base
Strong debugging skills
Excellent written and verbal communications
Works well in a team environment
Nice to have:
Experience with Boost libraries
Database experience (of any kind)
Source control (GIT or SVN) experience
Agile development experience
Development experience on other platforms (Linux, MacOS, iOS, Android, etc.)
Professional Java development experience (in addition to the C++)
OpenGL and/or OpenScenegraph development
TDD - test driven development experience
GPGPU/CUDA development experience
DICOM development experience
Medical Image Processing: Segmentation experience
Medical knowledge (the ability to speak the language)
Advanced Mathematics and/or Physics knowledge
MS + 1-3 years work experience (or equivalent in education + experience)
Getting Started With Postgres: Three Free and Easy Ways
In this article, explore three practical, user-friendly, and absolutely free ways to kickstart your PostgreSQL journey.
Hello, fellow developers! This year, approximately 90,000 of us participated in the Stack Overflow survey. Impressively, we crowned Postgres as the #1 database. Moreover, DB Engines also spotlights PostgreSQL as one of the fastest-growing databases worldwide. What does this mean for us?
It's clear that we should strive to become PostgreSQL experts. An essential step in this direction is setting up our own database for hands-on experiments.
So, whether you prefer reading or watching, let’s walk through three practical, user-friendly, and absolutely free ways to kickstart your PostgreSQL journey.
Option #1: Dive Into Postgres With Docker
The simplest and most pocket-friendly way to start your journey with PostgreSQL is Docker.
That's right: with a single Docker command, you have your database container humming merrily on your laptop:
docker run --name postgresql \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password \
  -p 5432:5432 \
  -d postgres:latest
The advantages are immense! Setting up your database is incredibly fast, and guess what? It all happens right on your hardware.
Next, use your preferred SQL editor, like DataGrip, to open a database connection. Ensure you connect to
localhost and use the username and password from the Docker command mentioned earlier.
Once connected, execute a few simple SQL statements to ensure the Postgres instance is ready for more advanced experiments:
create table back_to_the_future(id int, name text);
insert into back_to_the_future values(1, 'Doc');
select * from back_to_the_future;
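As a sketch, the same connection details can also be scripted from Python. The helper below only assembles a libpq-style DSN matching the Docker command earlier; the helper name is made up, and actually connecting requires the `psycopg2` driver and the running container:

```python
# Illustrative helper: builds a libpq-style DSN matching the Docker
# command above (user "postgres", password "password", port 5432).
def docker_pg_dsn(host="localhost", port=5432, user="postgres",
                  password="password", dbname="postgres"):
    return (f"host={host} port={port} dbname={dbname} "
            f"user={user} password={password}")

print(docker_pg_dsn())
# To actually connect (requires psycopg2 and the running container):
# import psycopg2
# conn = psycopg2.connect(docker_pg_dsn())
```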
Option #2: Jump Into Cloud-Native Postgres With Neon
The second cost-free and straightforward way to learn PostgreSQL caters to those eager to delve into public cloud environments right from the start.
Eager to get started with Neon? For us developers, the command line is home, isn't it? Kick things off by installing the Neon Command Line Tool:
npm i -g neonctl
Then authenticate and create an account:
neonctl auth
Create a new project and database instance:
neonctl projects create --name mynewproject --region-id aws-us-east-1
neonctl databases create --name newdb
Lastly, fetch your database connection string and you're set:
neonctl connection-string --database-name newdb
Use that connection string to link up with the database instance via DataGrip:
For quick verification, execute a couple of straightforward SQL commands:
create table matrix(id int, name text);
insert into matrix values(1, 'Neo');
select * from matrix;
Option #3: Build on Scalable Postgres With YugabyteDB
So, do you think you're done? Not at all, my friend!
We conclude with YugabyteDB - the PostgreSQL "beast." Not only does it scale up and out across zones and regions, but it also withstands the most challenging cloud armageddons. Plus, its special knack: pinning user data to specific geographic locations.
Want a taste of YugabyteDB straight in the cloud? YugabyteDB Managed (DBaaS) offers a free tier, giving you a dedicated single-node instance to begin with; you can simply transition to their dedicated plan when you're ready.
And now, for the grand tradition! Time to fire up a YugabyteDB instance straight from the command line. First, install the YugabyteDB Managed CLI:
brew install yugabyte/tap/ybm
Next, create an account and sign in using your authentication token:
ybm signup ybm auth
The final steps involve setting up your first database instance:
ybm cluster create \
  --cluster-name yugabyte \
  --credentials username=admin,password=password-123 \
  --cluster-tier Sandbox \
  --cloud-provider AWS \
  --wait
And add your laptop to the database’s IP allow list (yep, YugabyteDB folks take security seriously):
ybm network-allow-list create \
  --ip-addr $(curl ifconfig.me) \
  --name my-address
ybm cluster network allow-list assign \
  --network-allow-list my-address \
  --cluster-name yugabyte
Alright, once the database is started, take its connection string:
ybm cluster describe --cluster-name yugabyte
And use that string to establish a connection via DataGrip:
With the connection opened, send a few SQL commands to YugabyteDB to make sure the "beast" is ready to serve your requests:
create table avengers(id int, name text);
insert into avengers values(1, 'Hulk');
select * from avengers;
That’s All… For Now
With PostgreSQL's popularity on the rise, it's inevitable you'll cross paths with it in upcoming projects. So why wait? Dive in now. And there's no better way to learn than by doing. Roll up your sleeves, launch a Postgres instance using one of the three methods outlined in this article, and savor the journey ahead.
Opinions expressed by DZone contributors are their own.
Causation implies laws. A singular causal statement entails a general causal statement. If a caused b, we know that events like a will cause events like b. Thus universality is built into causation—the particular implies the universal. This puts causation in a very special class of relations: it is not generally true that a singular relational statement entails a general one. If a is to the left of b, it does not follow that everything like a will be to the left of something like b: an apple can be to the left of a pear, but not all apples are to the left of pears. The same is true of all spatial relations: no general spatial proposition follows from the truth of a singular spatial proposition. Similarly for temporal relations: if a happens before b, it doesn’t follow that everything like a will happen before something like b—you might have dinner before going to a play, but it is not generally the case that dinners are followed by plays. Ditto for family relations: it doesn’t follow from my having a brother that everyone like me has a brother like mine. And the same seems true generally; only causal relations give rise to the kind of generality in question. This is because all singular causal relations are necessarily instances of general laws, whereas that is not the case for the other relations mentioned. The law need not be framed in the same terms as the singular causal statement, but some sort of description will exist under which the instance exemplifies a law. Everything happens by law; therefore all causal relations imply underlying laws.
We should view this as more surprising than we do. For how is it possible for the particular case to have implications beyond itself, covering indefinitely many other cases? How can we derive a universal statement from a singular statement? We can derive an existential statement from a singular statement, but how can we move from what is true in a particular instance to what is true in all instances? The causal relation between particulars seems to encompass causal relations between quite distinct and often remote particulars: if a certain causal relation holds on earth, we can infer that it generalizes to other galaxies. This gives us amazing powers of knowledge: we just need to know that this caused that and we thereby know that everything like the former causes something like the latter. Imagine if knowing that this cup is on the table enabled us to know that every cup is on a table! Yet causation seems somehow to condense the universal into the particular: if a really did cause b, then no matter where you go, whenever you have something like a it will cause something like b. Causation is not just the cement of the universe; it is a cement that repeats itself endlessly, holding things together in the same recurring pattern. Once you know one part of the pattern you know them all. The puzzle is how an individual instance of a relation can “contain” all the other instances. Generally, if a relation R relates individuals a and b, we can infer nothing about whether other similar individuals are related by R; but in the case of the causal relation, we can infer a universal proposition from a specific one. This is because every particular case is necessarily an instance of something more general. And that seems puzzling, almost miraculous, as if great tracts of the universe are coiled inside a particular localized case.
Consider two other relations that have generality built into them: logical and deontic relations. If a particular statement entails another particular statement, this is always an instance of something more general: the proposition expressed by the first statement entails the proposition expressed by the second, so that every individual statement will stand in the entailment relation. Similarly, if one person has a moral duty with respect to another, this implies that anyone relevantly like the first will have just such a duty to someone relevantly like the second. As Kant would say, particular moral maxims can be universalized. Is there a puzzle about how this is possible? If there is, it is surely superficial, since logical and deontic relations primarily hold between types not tokens—types of statement, types of person. Conjunctive propositions, say, have certain logical implications, which are inherited by particular expressions of them; and fathers and sons as general categories have certain duties to each other, which then apply to specific people. That is, the relations in question hold in the first instance between something other than concrete particulars and are understood as such. It is not a matter of inferring the universal from the particular but recognizing the universal in the particular. We know those relations to hold without having to inspect the empirical world of particulars. It might even be said, by way of emphasis, that logical and deontic relations don’t strictly hold between particulars at all—this is just a manner of speaking about more abstract relations, harmless enough if we don’t let it mislead us as to the true ontological situation.
This suggests an approach to the puzzle of causation that has some reassuringly familiar elements. What if we say that the causal relation holds primarily between types not tokens? Then generality will be built into it from the start. When token events causally interact this is an instance of a type-event interaction; the former is derivative from the latter. Thus universality is guaranteed because it is built into the nature of the basic causal relation: causation is a relation between event-types in somewhat the way logical and deontic relations are relations between types. This is familiar because it is commonly accepted that token events stand in causal relations in virtue of the properties they instantiate: it is not events tout court that stand in causal relations but events “under descriptions”, i.e. inasmuch as they instantiate causally relevant properties. It is the electric charge of a battery that causes an electrical device to work not the color of the battery, and events have causal powers in virtue of some of their properties though not all (being an event recorded in the history books, for example). Spatial and temporal relations relate things irrespective of their intrinsic properties, but causal relations between things depend entirely on their intrinsic properties (i.e. their nature). Thus the primary locus of causation is properties, which are inherently general. The puzzle arises when we think of cause and effect as particulars and then wonder how the particular can contain the general, but in fact causation is inherently general because of the essential role of properties in causation—they are the primary bearers of causal powers. Laws relate properties, and causation consists of laws in action. The universal is already present in the particular case. 
The form of a singular causal statement is, “a being F caused b to be G”, where the causally relevant properties figure essentially in the fact; so the singular instance already includes general properties as causal agents. The causal structure of the universe accordingly relates properties not just particulars. This is what makes the causal relation different from other relations, and solves the puzzle of causation. Causal structure is not the sum of isolated instances of causation between particulars but of general causal principles linking properties.
There is an extensive literature on this question, with notable contributions from Davidson, Anscombe, and others; but I won’t get into this and simply assume a well-known position.
Here we might think of Wittgenstein’s discussion of the way meaning seems magically to contain future use in Philosophical Investigations.
It may be true that singular causal statements are referentially transparent statements about token events, but it doesn’t follow that causation itself works without reliance on selected causally relevant properties. No event has causal powers just by being that event.
In a world of bare particulars, if such there could be, there could be no causation, because there would be no exemplified properties to do the work of causation. Bare particulars would have to be causally idle. In a slogan: no causation without exemplification.
I've got some money, and there's a fair number of games now where I'd like to increase quality/performance, so I think it's time to look at new GPUs. I was hoping to go with one of the new HD 7000 series, but there's a massive price and performance gap between the 7770 and 7850, so I'm not really sure where to go now.
APPROXIMATE PURCHASE DATE: Next week or so would be good; I keep putting it off. BUDGET RANGE: Around £150 (inc. P&P, VAT, etc.); could go higher if the price/performance is worth it and it would give me extra months before wanting another upgrade. If advised to upgrade other components at the same time, I would pay more if it's worth it in the long run.
USAGE FROM MOST TO LEAST IMPORTANT: Software development (including 3D accelerated applications), Gaming (mostly RTS and games like the X series, also FPS and racing), watching videos, occasionally transcoding video and audio in large amounts.
CURRENT GPU AND POWER SUPPLY:
MSI AMD Radeon HD 5770 @ 875MHz (I was running an overclock, but once I got the 1920x1080 display and made the 1600x900 a secondary monitor on an extended desktop, anything but stock seemed to become unstable...)
BeQuiet 600W Modular PSU (E7-CM-600W): has 2x6pin and 2x8pin PCI-e power connectors (not entirely sure how that works, since my understanding was PCIe+6pin+8pin = 300W, so two = 600W on a 600W PSU???)
OTHER RELEVANT SYSTEM SPECS:
CPU: AMD Phenom II X6 1055T @ 3.5GHz with Cooler Master Hyper 212+ Cooler
MB: Gigabyte 870A-UD3
RAM: 8GB (4x2GB) 1333MHz DDR3
Case: Antec 300
Hard drives: 6x 3.5" (not a great deal of clearance between these and the 5770, and there are no spare bays to leave a gap; I was going to replace the 500GB ones with a 2TB drive, but prices have not been great. Could I perhaps move a drive into the spare 5.25" bay? Where can I get a bracket to do that?)
PREFERRED WEBSITE(S) FOR PARTS: UK online retailers, best price including postage
PARTS PREFERENCES: None really, as long as the performance/price is there, and nothing really bad about the card (e.g. noise levels, reliability, etc.)
OVERCLOCKING: Yes (assuming this second monitor + stability issue is not going to be an issue again)
SLI OR CROSSFIRE: No (micro stutter seems to be an ongoing issue, so not sure really worth it)
Main monitor is 1920x1080
I have a second 1600x900 monitor, but since it's two screens and they're not identical, I tend not to use it in gaming. However, for games like Supreme Commander that do something useful with a second display (e.g. give me a second independent viewport), I do use it.
Not entirely sure what to do with the old 5770. Perhaps keep it for the second display? Quiet would be good
So ok, from £150 to £270. I guess the question there is, will a £270 7870 now be a better deal over 3 years than £150 now, and £150 in 18months (i.e. does a 7870 have like a 99.9% chance of still working in 3 years, and will the performance still be "good")?
What If We Blocked The Sites Advertised By Spam?
from the thinking-out-of-the-box dept
Whenever a debate over stopping spam comes up, someone usually suggests not to worry about filters or knocking the spammers themselves offline. Instead, they say, we should target the people who pay the spammers to spam. There are tons of little scam shops set up that simply give a few hundred dollars to spammers and then rake in whatever money they can from gullible dupes who believe spam. Many of these sites are hosted by ISPs who just want the money and are willing to take all sorts of abuse before they take the site down. With that in mind, here’s a suggestion for a different way to stop spam: have other ISPs simply block all traffic to spamvertised sites. The idea would be to set up some sort of central database that major ISPs could pull from. Sites that were being spamvertised would be blocked completely, and no traffic would be allowed to go there. So, you get the big guys (AOL, Earthlink and MSN) to agree to use this list, and you can really stop a lot of traffic to the sites that are being advertised by spam. Plus, by having a central database with this info about domain names, even if the domain moved, other ISPs could check to see if it had a history of spamvertising. Thus, it stops the gullible folks from buying off of spam, making spam less worthwhile to the spamvertised sites – and kills the economic incentives for spam. At least that’s the theory. Of course, it’s not too hard to imagine how this will backfire. First, sites may get blacklisted incorrectly. This happens all the time already with typical spam filters, and it’s a pain. Imagine how much bigger a pain it would be if all traffic was suddenly completely blocked from your website? People would go nuts. Second, spammers will learn to rotate new sites in quickly and get the job done before a site makes it onto the list.
The biggest problem, though, is that spammers will just start including legitimate links in their spam as well – or even “spamvertising” legitimate sites, just to piss people off. Then, for every spamvertised site that gets blocked, they’ll take down a legitimate site as well. It’s definitely a different idea, but I think it also brings up too many problems without solving the spam issue.
Comments on “What If We Blocked The Sites Advertised By Spam?”
It's a multi-step process
1. ISP installs spam filter (as some have apparently done).
2. Filter is enhanced to harvest links in suspected spam messages – symbolic links get resolved. Call these suspected vendor addresses.
3. Suspected vendor addresses are graylisted – a request for that address doesn’t fail (blacklist), rather it generates a “that looks like a spam-using site, are you sure” intermediate page.
4. Users that do click through (after all, the intent is not censorship, exactly) have their ID and/or IP noted, and added to a “suspected spam encourager” list.
5. Stats are kept.
Near-zero clickthrough rates on suspect pages increase the “this is a bad site” weighting on the suspect page, and the “this is spam” weighting in the original spam filter. High clickthrough rates suggest that perhaps this isn’t a spam-abusing site after all – the solicitation may look like spam, but boy, it’s sure popular. Similarly, users that always click-through to suspect sites are marked as dupes (in-duh-viduals in Dilbert terms) and are extended special offers on the Brooklyn Bridge, oceanfront property in Arizona, and favored entry positions in the Darwin Awards competition.
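The weighting idea in the steps above can be sketched as a toy scoring function. This is only an illustration of the commenter's proposal; the function name and thresholds are hypothetical, not from any real filter:

```python
# Toy graylist model: near-zero clickthrough pushes a site toward "spam",
# high clickthrough pushes it back toward "legitimate".
def classify_site(shown, clicked, spam_threshold=0.01, ok_threshold=0.10):
    """Return 'spam-likely', 'legitimate-likely', or 'graylist'."""
    if shown == 0:
        return 'graylist'          # no data yet: keep the warning page
    rate = clicked / shown
    if rate < spam_threshold:
        return 'spam-likely'       # almost nobody clicks through
    if rate > ok_threshold:
        return 'legitimate-likely' # popular despite the warning page
    return 'graylist'

print(classify_site(1000, 2))      # very low clickthrough
print(classify_site(1000, 300))    # high clickthrough
```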
open can of worms
This opens up the “competitive” tactic of spamming a zillion people with a link to your competitor’s site…at which point he gets blocked (a day, a week? a month?) from a zillion potential customers while he sorts out the mess with ISPs.
No Subject Given
Then there will be DoS attacks by the spammers on those “master database” servers of sites to block – making them inaccessible and probably forcing the database company out of business, just like what happened last week when two spam blacklist companies closed shop because of all the DoS attacks.
It will never work
I have a very popular site and it has been reported as “spamvertised” several times at places like Spamcop. None were anything to do with me – either people included a link in semi-bulk mail that someone mistakenly reported as spam, or a real spammer used a link to my site to try to appear more legitimate.
My URL never changes – It would destroy my business to change it. A system like this hurts innocent sites like mine. The spammers, on the other hand, have no trouble coming up with 50 or 60 new disposable URLs a week, and this will have very little effect on them.
Blocking sites that advertise via spam
I have developed a list of about 17,000 such sites. You can find it on http://www.geocities.com/filterlists for now. I’m looking for other lists to expand this one. I consider this list to be in the public domain.
I’m using it with a program I wrote for Exchange server that blocks messages based on content in the body of the message. The program also blocks based on content in the subject line – there’s a separate list on the filterlists page for this purpose as well.
I’m blocking an average of 97% over the last 2 weeks of all spam to my servers.
No Subject Given
Wouldn’t this be a violation of the first amendment?
import json
import logging
import os

import boto3
import cv2

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

INPUT_IMAGE_BUCKET = os.environ.get('INPUT_IMAGE_BUCKET', None)
OUTPUT_IMAGE_BUCKET = os.environ.get('OUTPUT_IMAGE_BUCKET', None)

s3 = boto3.client('s3')


def handler(event, context):
    logger.info('INPUT_IMAGE_BUCKET: {0}'.format(INPUT_IMAGE_BUCKET))
    logger.info('OUTPUT_IMAGE_BUCKET: {0}'.format(OUTPUT_IMAGE_BUCKET))
    for record in event['Records']:
        in_key = record['s3']['object']['key']
        out_key = in_key[:-4] + '_gray.jpg'
        # Use the basename so keys with prefixes still map to a flat /tmp path.
        in_tmp = '/tmp/' + os.path.basename(in_key)
        out_tmp = in_tmp[:-4] + '_gray.jpg'
        try:
            logger.info('Downloading s3://{0}/{1} to {2}.'.format(INPUT_IMAGE_BUCKET, in_key, in_tmp))  # noqa: E501
            with open(in_tmp, 'wb') as file:
                s3.download_fileobj(INPUT_IMAGE_BUCKET, in_key, file)
            logger.info('Reading in {0} as an image.'.format(in_tmp))
            image = cv2.imread(in_tmp)
            logger.info('Converting to grayscale.')
            image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            logger.info('Writing grayscale image to {0}.'.format(out_tmp))
            # cv2.imwrite opens the file itself; no open() wrapper needed.
            cv2.imwrite(out_tmp, image_gray)
            logger.info('Uploading grayscale image to s3://{0}/{1}.'.format(OUTPUT_IMAGE_BUCKET, out_key))  # noqa: E501
            s3.upload_file(out_tmp, OUTPUT_IMAGE_BUCKET, out_key)
        except Exception as e:
            logger.error(e)
            raise
    message = "Grayscale image uploaded to s3://{0}/{1}.".format(OUTPUT_IMAGE_BUCKET, out_key)
    response = {"status_code": 200, "body": json.dumps({"message": message})}
    logger.info('Here is the response returned.')
    logger.info(response)
    logger.info('Goodbye!')
    return response
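For reference, here is a minimal sketch of the S3-notification event shape the handler iterates over, along with the output-key derivation it performs. The key name is made up for illustration:

```python
# Minimal S3-notification-shaped event (structure only; key is made up).
event = {"Records": [{"s3": {"object": {"key": "photo.jpg"}}}]}

for record in event["Records"]:
    in_key = record["s3"]["object"]["key"]
    out_key = in_key[:-4] + "_gray.jpg"  # same derivation as the handler
    print(out_key)                        # photo_gray.jpg
```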
Java exceptions are an integral part of the Java programming language and are used to signal that an error has occurred during the execution of a program. While some exceptions are well-known and commonly encountered, such as NullPointerException and OutOfMemoryError, many other exceptions are less commonly encountered. However, they can still have a significant impact on the performance and functionality of an application. This article will discuss 10 Java exceptions you may not have encountered and different ways to detect and diagnose them, including using an APM (Application Performance Management) tool, logging, debugging, code review, monitoring, and profiling.
ClassCastException is thrown when an application tries to cast an object to a class of which it is not an instance. It occurs at runtime when the object's actual class is neither the target class nor one of its subclasses.
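As a rough illustration of the failed-downcast pattern, here is a Python analogue (not Java; in Python the corresponding failure surfaces as a TypeError, and all class and function names here are made up):

```python
class Animal: pass
class Dog(Animal): pass
class Cat(Animal): pass

def as_dog(animal):
    # Analogous to Java's (Dog) animal cast: fail unless the runtime
    # type is Dog or a subclass of Dog.
    if not isinstance(animal, Dog):
        raise TypeError(f"cannot cast {type(animal).__name__} to Dog")
    return animal

print(type(as_dog(Dog())).__name__)  # fine: actual type matches
try:
    as_dog(Cat())                    # fails: Cat is not a Dog
except TypeError as e:
    print(e)
```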
SecurityException is thrown when an application tries to perform a security-sensitive operation it is not authorized to perform. This can occur when loaded code attempts an operation that the current security policy does not permit.
There are several other ways you can find and diagnose less common Java exceptions in your application in addition to using an APM tool like FusionReactor:
- Logging: Enabling logging in your application can help you capture and diagnose exceptions as they occur. Using a logging framework such as Log4j or SLF4J, you can configure your application to log exceptions at different levels (e.g., error, warning, info) and to output the stack trace of the exception to a log file.
- Debugging: You can use a debugger such as Eclipse or IntelliJ to step through the execution of your application and inspect the state of the application when an exception occurs. This can help you identify the specific code line that caused the exception and understand how the exception propagated through the application.
- Code review: Reviewing the application’s code can also help you identify and fix the root cause of an exception. This can be done by looking at the code that throws the exception, the methods that are called before the exception is thrown, and the variables used in the methods.
- Monitoring: Monitoring the application’s resource usage, such as CPU, memory, and disk I/O, can help you identify an exception’s root cause.
- Profiling: Profiling the application using a tool such as JProfiler or YourKit can help you identify performance bottlenecks and understand how the application uses its resources. This can help you identify the specific code line that caused the exception and understand how the exception propagated through the application.
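The logging technique above, capturing the full stack trace at error level, can be sketched as follows. Python's stdlib logging is used here for brevity; the Log4j/SLF4J pattern of passing the throwable to the logger is analogous:

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("app")

def risky():
    return 1 / 0

try:
    risky()
except ZeroDivisionError:
    # logger.exception logs at ERROR level and appends the stack trace,
    # much like passing the exception object to a Log4j/SLF4J logger.
    logger.exception("unhandled arithmetic error while processing request")
```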
It’s important to note that no single method can detect all types of exceptions, so it’s best to have a combination of techniques in place to have the most comprehensive coverage.
To find these exceptions, you can use an APM tool, such as FusionReactor. These tools allow you to monitor your application in real-time, identify performance bottlenecks, and track the occurrence of exceptions. They also provide detailed information about the exceptions, including the stack trace, the thread that threw the exception, and the line of code that caused the exception. By using an APM tool, you can quickly identify and fix the root cause of these exceptions, improving your application’s performance and stability.
One of the key features of an APM tool is its ability to trace the execution of a request as it flows through your application. This allows you to see the exact method calls and the sequence in which they were made. This can help you identify the specific code line that caused the exception and understand how the exception propagated through the application.
Another essential feature of an APM tool is its ability to aggregate and analyze performance metrics. These metrics can help you understand your application’s overall performance, including the number of requests per second, response time, and error rate. This can help you to identify trends and patterns that may indicate a performance issue, such as a sudden increase in the number of exceptions.
In addition to these features, an APM tool may include other capabilities, such as alerting and reporting. This can be useful in helping you to detect and diagnose problems quickly and to share information with other members of your team.
Conclusion – Diagnosing Less Common Java Exceptions: Techniques and Tools
In conclusion, less common Java exceptions can significantly impact the performance and functionality of an application. To detect and diagnose these exceptions, it’s important to use a combination of techniques, such as using an APM tool, logging, debugging, code review, monitoring, and profiling. By having a comprehensive approach to detecting and diagnosing exceptions, you can quickly identify and fix the root cause of these exceptions, improving your application’s performance and stability.
How do I write a shim?
If a shim doesn’t already exist, the following steps will help you get started
- Learn about the custom API you want to write a shim for.
- If you take a look at the Markdown documents for some existing shims, such as RunKeeper, you’ll get a good idea of the type of information you’ll need to learn about the custom API.
- Create a similar document for the API you’re working on to use as a reference.
- Your shim will convert some of the source data provided by the custom API into target data that conforms to Open mHealth schemas. Decide which source data you will map and which schemas you will map it into.
- We recommend mapping all source data that has corresponding schemas.
- Your shim can also produce data that conforms to your own schemas, but your shim will gain wider adoption if it produces data that matches well known schemas.
- Add this information to your document to help you keep track of the measures you will map.
- Implement the mappers. You should use the Misfit shim as a guide on organising and writing mappers.
- The general approach taken in the mappers is to require data necessary to build the data point and treat everything else as optional. This follows the robustness principle.
- If you’re using Java, the mappers use the Open mHealth schema SDK to build objects that correspond to schemas. These objects are then serialized to create the output of the common API. The mappers leverage the following classes.
- JsonNodeMappingSupport: A utility class that makes it easy to extract pieces of typed data from source JSON data. The methods for required data throw exceptions on missing data, whereas those for optional data log missing information and return empty Optionals.
- DataPointMapper: An interface that takes one or more inputs to create data points. The purpose of taking more than one input isn’t to map collections. It’s to create data points that are composed from multiple inputs, such as a blood pressure data point that requires a user profile input to build headers and vital signs to build payloads.
- JsonNodeDataPointMapper: A specialization of DataPointMapper that takes Jackson JsonNode inputs.
- Implement the authentication to the custom API.
- This is typically OAuth 2.0, though sometimes OAuth 1.0a and sometimes a custom implementation entirely. Take a look at the Misfit shim for OAuth 2.0, the Withings shim for OAuth 1.0a, and the HealthVault shim for a custom implementation.
- If you’re using Java and the third-party API uses OAuth 1.0a or OAuth 2.0, extend OAuth1ShimBase or OAuth2ShimBase classes and implement the abstract methods. This lets you leverage common authentication code and tweak parameters based on the nuances of the custom API.
- Implement the data retrieval operation on the common API.
- The data retrieval operation is responsible for calling the custom API to get data. It must propagate the time frame filtering on the common API to the custom API, as well as paging and other parameters. This is a manual process with a case-by-case implementation, grounded on a solid understanding of the custom API.
- If you’re using Java, this retrieval operation is an abstract getData method implemented by each shim.
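The required-versus-optional split described in the mapper steps above can be sketched roughly as follows. This is a hedged illustration, not the actual schema SDK: `StepCountMapper` and its helper methods are hypothetical, and a plain `Map` stands in for Jackson's `JsonNode` to keep the sketch dependency-free:

```java
import java.util.Map;
import java.util.Optional;

/** Hypothetical mapper illustrating the robustness-principle split:
 *  data required to build the data point fails fast, everything else
 *  degrades gracefully. A Map stands in for Jackson's JsonNode here. */
public class StepCountMapper {

    /** Required data: throw on absence, as the real mappers do. */
    static long requiredLong(Map<String, Object> node, String field) {
        Object value = node.get(field);
        if (!(value instanceof Number)) {
            throw new IllegalArgumentException("missing required field: " + field);
        }
        return ((Number) value).longValue();
    }

    /** Optional data: the real mappers log and continue; here we just
     *  return an empty Optional. */
    static Optional<String> optionalText(Map<String, Object> node, String field) {
        Object value = node.get(field);
        return value instanceof String ? Optional.of((String) value) : Optional.empty();
    }

    /** Map one source record into a (simplified) target representation. */
    public static String map(Map<String, Object> source) {
        long steps = requiredLong(source, "steps");
        String date = optionalText(source, "date").orElse("unknown");
        return "{\"step_count\":" + steps + ",\"date\":\"" + date + "\"}";
    }
}
```

In the real shims, the output side would be built with the schema SDK classes rather than hand-assembled strings; only the required/optional discipline is the point here.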
We are working on making Shimmer expose our Data Point API, which is the API exposed by the data storage component. This will let an application treat a third-party API as a remote data storage component, with translation happening on the fly. This process is not a simple one, since it requires general and extensible strategies for resolving API mismatches in authentication, pagination, cardinality, filtering, and data formats, at a minimum. We will keep you up-to-date and ask for feedback using our mailing list.
We’re also considering making the shims themselves micro-services, to help people create shims in different languages. If you have any feedback on the topic, please let us know on our mailing list.
|
OPCFW_CODE
|
Add unescapeCsvFields to parse a CSV line and implement CombinedHttpHeaders.getAll
Motivation:
See #4855
Modifications:
Unfortunately, unescapeCsv cannot be used here because the input could be a CSV line like "a,b",c. Hence this patch adds unescapeCsvFields to parse a CSV line, split it into multiple fields, and unescape them. The unit tests should define the behavior of unescapeCsvFields.
Then this patch just uses unescapeCsvFields to implement CombinedHttpHeaders.getAll.
Result:
CombinedHttpHeaders.getAll will return the unescaped values of a header.
Should I also make get return the first value instead of the whole header value string? E.g., just return a, if the value is a,b,c.
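To see why a plain split(",") cannot work on input like "a,b",c, here is a minimal sketch of quote-aware field splitting. This illustrates the idea only; it is not Netty's actual StringUtil implementation:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of quote-aware CSV field splitting: commas inside quoted
 *  fields are literal, doubled quotes are escaped quotes. */
public class CsvFields {
    public static List<String> unescapeFields(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == '"') {
                if (quoted && i + 1 < line.length() && line.charAt(i + 1) == '"') {
                    current.append('"'); // "" inside quotes is an escaped quote
                    i++;
                } else {
                    quoted = !quoted;    // opening or closing quote: not emitted
                }
            } else if (c == ',' && !quoted) {
                fields.add(current.toString()); // unquoted comma ends the field
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());
        return fields;
    }
}
```

With this, `"a,b",c` splits into the two fields `a,b` and `c`, which is exactly what CombinedHttpHeaders.getAll needs to return.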
SonarQube analysis reported 6 issues:
5 major
1 minor
Watch the comments in this conversation to review them.
Note: the following issues could not be reported as comments because they are located on lines that are not displayed in this pull request:
Make "commaSeparateEscapedValues" a "static" method.
Unnecessary use of fully qualified name 'io.netty.handler.codec.DefaultHeaders.NameValidator' due to existing import 'io.netty.handler.codec.DefaultHeaders'
Add a default case to this switch.
StringBuffer (or StringBuilder).append is called consecutively without reusing the target variable.
StringBuffer (or StringBuilder).append is called consecutively without reusing the target variable.
Unnecessary use of fully qualified name 'ObjectUtil.checkNotNull' due to existing static import 'io.netty.util.internal.ObjectUtil.checkNotNull'
@Scottmitch PTAL
@windie - Great work ... just 1 minor comment.
@Scottmitch - Updated.
Not sure if the current semantics of get and getAndRemove is correct. Here are the docs of Headers:
/**
* Returns the value of a header with the specified name. If there is more than one value for the specified name,
* the first value in insertion order is returned.
*
* @param name the name of the header to retrieve
* @param defaultValue the default value
* @return the first header value or {@code defaultValue} if there is no such header
*/
V get(K name, V defaultValue);
/**
* Returns the value of a header with the specified name and removes it from this object. If there is more than
* one value for the specified name, the first value in insertion order is returned.
*
* @param name the name of the header to retrieve
* @return the first header value or {@code null} if there is no such header
*/
V getAndRemove(K name);
But for CombinedHttpHeaders, its get just returns the combined String and getAndRemove just removes the whole header.
Not sure if the current semantics of get and getAndRemove is correct.
From a semantics stand point I don't see a problem here. From the Headers's point of view there should be no more than 1 "value" for each key because CombinedHttpHeaders makes sure if any values share the same key they are concatenated together
@windie - Once you squash your commits I can pull in
@Scottmitch Thanks for clarifying. Squashed.
Cherry-picked 4.1 (333f55e). Thanks again @windie !
|
GITHUB_ARCHIVE
|
Oh crap! So I’ve been using a program to keep track of my billable hours, and today while exporting all my November hours, my wife discovered it was totally missing the last day of the month in the exported .CSV file. The program only exported Nov. 1 to Nov. 29, completely leaving out all the billable work I did on Nov. 30! Yikes! It would have been a $445 loss!
And she only brought it up because we had a conversation about what happened during that particular day, and she noticed on the exported spreadsheet, the work/hours I did wasn’t there!
Feeling nauseous about all the hours I may have missed billing in previous months and YEARS, I randomly checked some of my old timesheets, but I don’t want to check everything (can’t bear to think of it) – and now I totally don’t trust this program anymore.
For devs here that keep track of hours (study time, billable hours, task, etc)… what program do you use? I need a program that can categorize hours by client/project/task/rate.
I’m not a working dev, so I probably have a pretty naive sense of your needs, but maybe the Pomello add on for Trello could suit you? You have to manually allocate each pomodoro session (or part thereof) to the appropriate Trello cards, and it rather annoyingly tracks time spent in ‘Pomodoros’ rather than minutes, but if you use Trello anyway and have a board for each project, it could be something to consider.
I’ve mentioned Wakatime here before, but that only tracks active minutes in your code editor - if you wanted to track consulting, planning, design and QA time that’d all have to be logged separately.
For my work we use an internal tool, but I keep track of my own personal usages using a site called toggl. https://toggl.com/
I’ve seen settings for all the things you mentioned, and I know Toggl provides ways for users to monitor worker hours, pay, and other things across different projects, so it might help replace your old program.
I’d also like to ask, were you using a commercial application to keep track of your billable hours, and if so what was it so we can all avoid it?
It’s called Desktop Task Timer, bought it from the AppStore several years ago. I checked and it looks like the website is still around, but hasn’t been updated since 2011, and the program isn’t found on the AppStore anymore.
I checked and reviewed some of my old timesheets and it looks like the bug isn’t consistent… Most of the months are correct, with the last day included properly in the CSV export. But sometimes it doesn’t work.
In certain cases, the last day is missing… the bug is probably related to the date-picker UI it uses when selecting/clicking the start and end dates you want to export. So the problem was bad, but not as bad as I thought… could have been worse. Something about the position of the last day of the month within the calendar week? Maybe… I didn’t spend much time chasing the exact cause of the bug, since it seems to be abandonware.
Right now I’m currently evaluating Tyme. I’ve evaluated another one, from Timing https://timingapp.com/, but it seems convoluted to use… plus it logs every app/browser/website that is running… heck, don’t need my Chrome/Netflix activity to show up!
Thanks for the link. Unfortunately, Toggl wants $18/mo for a timesheet? seems pretty excessive to me for a timesheet…
When the whole Adobe CC Suite is only $50/mo. and the whole JetBrains All Products pack is $25/mo (and goes down to $12/mo at 3rd year).
I’m trying to avoid subscription for timesheet software.
Yea, I guess the plan page is somewhat misleading. In smaller print it specifies that the basic plan is always free, but it shows what seem like the only 3 options below. So you should be fine signing up and checking it out. The UI is pretty extensive, so it takes a little getting used to. But it’s pretty straightforward once you know where to find everything.
Just an update: after evaluating a few programs (some of them were too detailed, confusing to use, or just overkill… i.e. logging every site visit or browser session!) I’ve settled on a program called Tyme2. And the nice thing is it’s only a one-time payment of $23 at the Apple Store. None of that recurring monthly charge crap.
|
OPCFW_CODE
|
Many people have the misconception that Conda is a distribution or, more specifically, a Python package manager. Neither statement is completely wrong, but neither captures the whole picture. Conda is, first and foremost, an open-source, cross-platform package and environment manager, originally built to tackle difficult package-management problems, and it is a popular manager for Python and R. It is released under the Berkeley Software Distribution (BSD) license by Anaconda Inc.
Conda as a package manager helps you find and install packages. It also offers the convenience that, if you need a package requiring a different version of Python, you do not need to switch to a different environment manager. Now, let’s understand how Conda works and its basic terminology.
There are three ways to get conda:
- Install Miniconda. Miniconda is a free, open-source, small bootstrap version of Anaconda which contains Python, conda, and a few other packages.
- Install Anaconda. Anaconda is the most popular Python/R distribution, containing over 750 automatically installed packages along with conda; more can be installed using conda.
- If you have already installed Python or another package manager, just install Miniconda or Anaconda and let the installer add the conda installation of Python to your PATH environment variable without uninstalling other packages.
Conda commands cheatsheet
Conda provides a basic set of commands that act as the interface between the user and the software. These commands can be used for creating and modifying environments, managing queries, installing and deleting packages, defining dependencies, etc. Given below is a table (by OpenGenus) of the most used and basic conda commands:
| Task | Command |
| --- | --- |
| Install a package | conda install $PACKAGE_NAME |
| Update a package | conda update --name $ENVIRONMENT_NAME $PACKAGE_NAME |
| Update the package manager | conda update conda |
| Uninstall a package | conda remove --name $ENVIRONMENT_NAME $PACKAGE_NAME |
| Create an environment | conda create --name $ENVIRONMENT_NAME python |
| Activate an environment | conda activate $ENVIRONMENT_NAME |
| Deactivate an environment | conda deactivate |
| Search available packages | conda search $SEARCH_TERM |
| Install a package from a specific source | conda install --channel $URL $PACKAGE_NAME |
| List installed packages | conda list --name $ENVIRONMENT_NAME |
| Create a requirements file | conda list --export |
| List all environments | conda info --envs |
| Install another package manager | conda install pip |
| Install a specific version of Python | conda install python=x.x |
| Update Python | conda update python |
Further commands, more specific to tasks such as managing R packages, dependencies, etc., can be found using conda’s help or -h.
A conda environment is structured as a directory which contains packages along with their dependencies, and it is the interface where the user analyzes different kinds of data. This system gives you the flexibility that, if an environment needs a different version of Python and packages, you can create and use a new environment without changing or deleting the former one. You can switch between environments by activating and deactivating them, or even give someone else your environment structure by sharing your environment file.
Creating and managing environments
To create an environment:
conda create --name myenv
To create an environment with a specific version of Python:
conda create -n myenv python=3.6
Create the environment from the environment.yaml file:
conda env create -f environment.yml
You can control where a conda environment lives by providing a path to a target directory when creating the environment.
conda create --prefix ./envs jupyterlab=0.35 matplotlib=3.1 numpy=1.16
You may also need to update your conda environment. The most common reasons for updating a conda environment are:
- You have found a better package for data extraction and analysis.
- A new version of one of the package’s core dependencies has been released.
- The package is no longer useful.
$ conda env update --prefix ./env --file environment.yml --prune
To see a list of all of your environments, in your terminal window or an Anaconda Prompt, run:
conda info --envs
A list similar to the following is displayed:
Further instructions for creating identical environments, activating and deactivating environments, cloning, etc. can be found using help or -h.
A conda package is a compressed tarball of files which contains system-level libraries, language modules, metadata, and executables to be installed directly into the system (excluding directories).
The structure of a package is pretty much the same across platforms.
Structure of a Package:
│ └── pyflakes
│ ├── LICENSE.txt
│ ├── files
│ ├── index.json
│ ├── paths.json
│ └── recipe
Packages are classified into various types depending on the files they contain, their architecture and use.
- Metapackages are packages which do not contain any files and are basically used for capturing metadata and making complex packages simpler. One example of a metapackage is the Anaconda installer, which contains links from which the data is to be downloaded along with dependencies for low-level libraries.
- Noarch packages are packages which do not have a defined architecture and are used to distribute source code and docs to users. They can be built only once and are usually Python or generic packages.
- Check to see if a package you have not installed named "fatcat" is available from the Anaconda repository (must be connected to the Internet):
conda search fatcat
- Install a package into the current environment:
conda install [packagename]
Conda channels are the locations where packages are stored. They serve as the base for hosting and managing packages. Conda packages are downloaded from URLs pointing to directories containing conda packages. The conda command searches a default set of channels, and packages are automatically downloaded and updated.
Downloading the same package from different channels causes a conflict known as a channel collision. Conda resolves the issue by discarding the package from the lower-priority channel and keeping the other, so as not to override the core package.
How does Conda work internally
Now that we are aware of the various terms associated with conda, let us understand how it works internally.
When we install conda for the first time, or install a new package with conda, the package consists of metadata and a tarball of files to be installed. A tarball is jargon for a .tar archive, in which all the files are simply grouped together, not compressed.
For example, the basic structure of a conda package for Python is given below.
The installer extracts the files into the pkgs folder and hard-links files as specified in meta.yaml. Once the software knows what is to be installed, it starts running tests for dependencies and faults. Once installation completes, the software lets you set up environments as you choose, and you can use it to its full utility as a package manager.
Step by step flow-
- Downloading and processing index metadata.
- Reducing the index.
- Expressing the package data and constraints as a SAT problem.
- Running the solver.
- Downloading and extracting packages.
- Verifying package contents.
- Linking packages from package cache into environments.
Given below is a flowchart of how conda installs packages.
A few methods can be implemented to improve the performance of the conda system:
- Create fresh environments: the older environments grow, the harder they become to resolve, so creating small, dedicated environments can help reduce solve time.
- Use specific packages rather than broad-spectrum packages.
- Set strict channel priority: this can significantly reduce solve time by removing mixed sets of possible solutions.
- Another possible way of reducing solve time is to disable safety checks, as conda spends a significant amount of the total time resolving conflicts. But this is not recommended, as it may break your environment.
conda vs pip
Before comparing the pros and cons of conda and pip , let us understand the difference between them.
pip, the Python package installer, is the default package manager for Python, while venv is the default environment manager. Conda provides both of these utilities in a single package.
The major advantage of conda over pip is that pip only installs Python packages from PyPI, while conda supports packages from many languages. pip has no built-in support for maintaining environments and has to depend on tools like virtualenv, while conda allows packages to be used in isolated environments, making it an extremely valuable tool for data analytics.
Also, pip does not resolve the dependencies of all installed packages simultaneously, whereas conda uses a resolver to make sure all the packages’ requirements are met.
Pros and Cons of conda
Pros:
- The most significant pro of conda is its management of isolated environments. As long as a package can be relocated, you can use it in multiple instances independently and for free.
- It can install packages from all available resources and can effectively maintain, manage, and resolve conflicts among dependencies.
- It is open-source, free, multi-platform, and language-agnostic.
- It installs only binaries, which removes the need for a compiler.
Cons:
- Since it installs only binary files and leaves system packages alone, security patches for those system packages are no longer available to you.
- Longer solve time due to resolving all packages simultaneously.
- Conda fails to provide package provenance.
Read other Python related topics so that you can strengthen your knowledge.
I hope that this article is enough to give you a basic understanding of conda and to spark an interest in it.
|
OPCFW_CODE
|
The Wand User Guide
a title for FCPX
The allusion to a magic wand is deliberate. The Wand is truly a wide-ranging effect.
On the surface, The Wand looks like the simple divider it is. The scene is divided between the storyline and a background with “auxiliary text”. Animating the divider across the text creates a text reveal. There is a built in Drop Zone behind the background (revealed by lowering the color solid opacity) creating a split screen effect with built in text. With the split screen effect and animating the divider, this template is turned into a “wipe” transition. Animating positioning and rotation of the divider, very interesting and complex wipe effects are easily accomplished!
There is an OSC with a “post”. The OSC controls a line. The post controls the angle of the line. The line divides the scene into whatever is below the title clip down to the storyline and either a solid color background or a drop zone for any other kind of media (or both with opacity on the solid color). On the solid color/drop zone side, there is a text object which can be used as a text reveal title, or hidden (recommended to change the text to just a few space characters so as not to lose the text bounding box).
That’s not it. Since this is a title template, and titles accumulate whatever is underneath them, it is possible to stack a number of “instances” of The Wand to create custom multidivider effects. Since this title contains a drop zone for a background, it can be used as a split screen. Since it is “stackable”, it can be used to create multi-split scenes (keep reading!)
The default divider is black and 12000 pixels long. This title will work on up to 5K (and possibly 8K) video. [Tested on 4K.] The OSC is *always* at the center of the line. The line rotates around this point. Drag the post around to set the angle of the dividing line. The rotational span values go from -720° to +720° giving you a maximum of four complete revolutions of keyframed animation (only if you start at one end of the 720 range and go to the other). Since the default orientation of the divider is 90° (vertical), that will limit the number of complete revolutions to three, in general. Most of the time, you will probably only be concerned with intervals of 180° of movement.
When the title clip is selected, the auxiliary text is selectable as well. You can use a mouse to move the text around on the screen to where you need it positioned, even if the text is obscured by the “foreground” side of the divide. You do, however, have to mouse over its region (a bounding box will appear).
Text can be animated as well. There are position and rotation controls in the FONT PARAMETERS section for your keyframing needs.
Below are a few examples to get you started; also see the demo video below.
A clock wipe is easily accomplished by placing the OSC center along the edge of the video and rotating the line across the scene.
Animating The Wand to create a text reveal
When mousing over a parameter that can be animated, a keyframe mark appears. Every parameter that shows the mark when mousing over it can be keyframed (that includes colors!)
Clicking on the keyframe mark will cause it to become filled (a solid diamond shape). You can set a keyframe and make changes to the parameter or vice versa. The order does not matter.
Once set, all that needs to be done is to move the playhead to another point in time then update that parameter to whatever new setting is needed. Final Cut will interpolate values between the two keyframed values and the animation is executed. This template will often require keyframing if any movement is required. You’ve been given total control over how this template operates. If you need any further help with keyframing, there are many free tutorials available on YouTube or Vimeo.
Keyframing is easy and you will rock this template!
Set up The Wand to the “still” position you want the animation to pause.
Move the playhead to about 15 or so frames from the beginning and set a keyframe. Move the playhead to about 15 frames from the end and set another keyframe. Move the playhead to the beginning and move the divider just off the screen. Move the playhead to the end and move the divider to its end position. Play.
Solid color backgrounds can be any color you like and with opacity turned down, used to color cast the drop zone media behind.
To easily create an effect like the Slanted orientation example above, set the initial angle of a first title instance, then Option-drag a copy over the original. Use the OSC to slide the divider to a new position.
To line up multiple copies like this, multi select all of the titles used at once.
This will turn on all of the OSCs at once and you can fine tune the lineup (and you can also “do the math” and use the published position parameters for more accuracy). Dragging the OSC center control will not change the angle of the divider line.
|
OPCFW_CODE
|
The familiar contrary to animal companion has a minimalistic stat bloc, mostly because the main features of the familiar seem to be the familiar/master abilities and not the creature that comes with them.
But I feel that this super minimalistic approach doesn't really work for skills. Familiars can use skills using their level. They add the caster ability score for perception and 2 skills (acrobatics and stealth).
There are two main problems for me:
-First, proficiency doesn't appear anywhere. I imagine that means the familiar is untrained in everything. Which is odd, because being untrained in Acrobatics (one of the only two skills it has a bonus in) prevents it from using maneuvers in flight if it's a flying creature, which really feels odd.
Plus, now that the witch has a feat to give the bonus to more skills, it really feels that proficiency for those should be increased to trained. Many actions require being trained, and it would be nice to have a way for the familiar to use those too.
-Second, while untrained in everything, the familiar is better than anyone untrained at medium/high level because it uses its level as an untrained value instead of ability + 0 (or just 0 in its case). They basically have the bard's Eclectic Skill feat (an 8th-level feat), which also feels odd. OK, the familiar is a study buddy for the mage, but that doesn't explain why it has a bit of knowledge on every single topic in existence.
I really think that the familiar skills should be changed. They should be trained in a very limited set of skills (acrobatics + stealth + other skills via feats or even maybe an option through familiar abilities) and perception for which they can use their full proficiency bonus (level+2) + caster ability* and they have a +0 bonus for the rest.
*An other option can be to have their ability score always be 0 but to allow their proficiency to increase by one mean or an other (probably never above expert) for roughly the same final score.
This would make them consistent with the rest of the creatures and the proficiency system, more useful for the few things they are meant to be good at but less a jack of all trades that can use any skill untrained with relative efficiency.
My understanding was that the familiar was just what you said along with a dash of spy-eye rather than being on par with other creatures:
"the main features of the familiar seem to be the familiar/master abilities and not the creature that comes with them."
The skills were included to allow for exercise of these abilities and the spy-eye. The familiar was not meant to be a combat-involved npc (thus preventing the Druid/Ranger, anyone else from running 3 - 4 characters in a combat round). The familiar was not meant to be an animal companion-lite. The key is that they are not given any stats which implies that they should be viewed as an extension of the owner. In that sense they should not be considered "Study Buddies" and should NOT be able to aid in any way other than through the abilities (Otherwise the familiar is much too powerful and is too good to be true)
I agree it should have the descriptor "Trained" for the 3 skills listed and that the catch-all phrase about skills can be better defined. I think that the level modifier for unlisted skills is meant as a catch-all to cover if a person comes up an out of the box idea for the exercise of the familiar abilities rather than it being a bard who knows everything
|1 person marked this as a favorite.|
Familiars need to be better defined, and some of us are hoping for a dose of familiar love in the upcoming GMG.
For example, some forumites have argued that familiars are on a "2-action leash" even outside Encounter mode. Thus preventing them, say, from carrying a message a couple miles away.
Somebody else wanted to have his familiar load a crossbow for him - but they have no defined carrying capacity, and even though this task should logically require at least a small or medium physique, nothing besides the absence of a carrying capacity prevents a familiar with manual dexterity from attempting such a task.
I think the intent is for familiars to have a possible narrative role in Exploration mode, but nothing specifies this possibility in the rules.
And they could certainly borrow proficiency in a couple skills from their masters, without breaking anything.
I really don't think a catch all is really needed. In PF1 the familiar had their animal stats for catch-all which were basically +0 to +4. I don't think rounding everything to 0 would be a big deal.
The strange thing with having their level in everything is that you can use your familiar for a double Recall Knowledge on basically any creature for one action, granted you speak their language. I like the idea of a familiar using Recall Knowledge, but I don't like that they have it for free, baseline, on all skills (even if it's with a relatively low chance of success at level + 0).
And while I agree with you that a familiar should be near useless in combat aside from the deliver spell ability (and the master abilities), I think that if a feat is spent on a familiar (like the 8th level witch feat) it should give the familiar real out-of-combat usefulness (or minor combat usefulness like doing your recall knowledge for you).
I think that making all skills UNTRAINED helps tone down the power.
The familiar can use Demoralize, Coerce (albeit with penalties), and Recall Knowledge. This is why I made my comment about them not being "study buddies". Perhaps they should be explicitly prohibited from rolling a Recall Knowledge check as anything other than an Aid action (no getting multiple Recall Knowledges from a single PC action). Perhaps a familiar power could be "provide a +1 to all knowledge checks the PC makes". This would be very powerful but still not game-breaking.
The Skilled Familiar feat does not actually make the familiar trained, it just improves some untrained skills. Thus, it appears to be a very minor feat which should probably be replaced with another type of feat
Thus, as I look at the familiar it should probably remain UNTRAINED on every skill to keep getting a familiar limited to being a very powerful feat rather than an insanely OP feat which most characters would take immediately
I think that making all skills UNTRAINED helps tone down the power.
The familiar can use demoralize, coerce (albeit with penalties) and recall knowledge. This is why I made the my comment about not being "study buddies".
They can Aid Another. That's what I meant by study buddy (that, and the whole fluff of familiars). They can give you a +1 basically all the time in downtime and exploration by helping you (granted they manage to beat the DC, which is not a given). That's also why I think +0 for most skills would be better: Aid Another would fail for those. Level + ability + 2 for the few you decide to upgrade using feats would mean better chances.
And you have to remember that class feats are meant to be powerful. I don't think giving your familiar trained in two skills (which is witch-only, at lvl 8, and only once) would be that much better than, say, the rogue multiclass feat that gives you expert in one skill and master in another, plus a skill feat, and can be taken 5 times.
I agree that the familiar one would mean great action economy, be it in combat or in downtime, but I don't think it would be OP, first because of your familiar's inherent limits (a +0 item bonus and low proficiency will make their checks way harder than if you did it yourself).
Like for the animal companion, I think that if you invest feats (particularly class feats) in your familiar you should gain extra options.
Right now, for the witch feat, giving +ability to two skills and nothing more seems really weak compared to the other feats.
I rest my case:
For me a familiar should be almost useless at lvl 1 aside from the master/familiar abilities (and the extra pair of eyes) but if you choose to invest feats in them their use should dramatically increase (and that should not be direct combat use to separate them from animal companions).
Also, I don't think familiars warrant special rules on skills. So if untrained their score should be 0, and if trained their score should be level + 2. That keeps them in line with other creatures without complicating the rules for them.
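To see what the proposed numbers mean in play, here is a small Python sketch (my own illustration, not rules text; the DC of 24 is just an assumed moderate difficulty) comparing an untrained (+0) familiar with a trained (level + 2) one:

```python
# Sketch of the skill modifiers proposed above (an illustration, not official
# rules text): untrained familiars roll at +0, trained ones at level + 2,
# and a check succeeds when d20 + modifier >= DC.

def familiar_modifier(level, trained):
    """Skill modifier under the proposed rule."""
    return level + 2 if trained else 0

def success_chance(modifier, dc):
    """Probability that d20 + modifier meets or beats the DC."""
    successes = sum(1 for roll in range(1, 21) if roll + modifier >= dc)
    return successes / 20

# A level-8 familiar against an assumed moderate DC of 24:
untrained = success_chance(familiar_modifier(8, trained=False), 24)  # 0.0
trained = success_chance(familiar_modifier(8, trained=True), 24)     # 0.35
print(untrained, trained)
```

The gap widens with level, which matches the point above: untrained stays useless while feat investment keeps the familiar relevant.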
|
OPCFW_CODE
|
pkg/repro: does not reproduce original bugs
This needs to be additionally confirmed, but filing here so it's not lost.
It attempted 86 reproductions; only 36 runs finished with a crash, and in only 4 of those did the resulting crash match the one we tried to reproduce. Both numbers look too low and may suggest a bug.
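For quick reference, the rates implied by those numbers (a tiny Python calculation, nothing more):

```python
# Reproduction statistics from the report above.
attempts = 86
crashed = 36
matched = 4

crash_rate = crashed / attempts            # fraction of runs that crashed at all
match_rate = matched / attempts            # fraction that reproduced the original crash
match_given_crash = matched / crashed      # of the runs that crashed, how many matched

print(f"crashed: {crash_rate:.0%}, matched: {match_rate:.1%}, "
      f"matched given crash: {match_given_crash:.1%}")
# crashed: 42%, matched: 4.7%, matched given crash: 11.1%
```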
Original crash
Resulting crash
kernel BUG in next_uptodate_folio
possible deadlock in __ext4_mark_inode_dirty
general protection fault in put_pwq_unlocked
possible deadlock in ext4_xattr_set_handle
WARNING in kvm_put_kvm
WARNING: locking bug in srcu_gp_start_if_needed
general protection fault in put_pwq_unlocked
possible deadlock in ntfs_read_folio
possible deadlock in f2fs_get_node_info
general protection fault in dst_dev_put
possible deadlock in ntfs_set_size
possible deadlock in jfs_set_acl
general protection fault in lmLogSync
KASAN: wild-memory-access Read in __timer_delete_sync
possible deadlock in f2fs_handle_error
WARNING: refcount bug in sco_conn_del
BUG: unable to handle kernel NULL pointer dereference in deactivate_slab
general protection fault in put_pwq_unlocked
INFO: rcu detected stall in file_free
KASAN: global-out-of-bounds Read in __timer_delete
possible deadlock in join_transaction
WARNING in plfxlc_mac_release
WARNING in current_check_refer_path
WARNING in current_check_refer_path
WARNING in try_check_zero
WARNING: locking bug in rcu_pending_exit
INFO: rcu detected stall in vms_gather_munmap_vmas
INFO: task hung in bch2_copygc_stop
possible deadlock in diAllocAG
KASAN: slab-use-after-free Read in release_metapage
kernel BUG in submit_bh_wbc
BUG: MAX_LOCKDEP_KEYS too low!
INFO: rcu detected stall in udp_setsockopt
possible deadlock in __jfs_setxattr
WARNING: locking bug in rcu_pending_pcpu_dequeue
WARNING in hci_recv_frame
SYZFAIL: repeatedly failed to execute the program
WARNING: refcount bug in sco_sock_timeout
INFO: rcu detected stall in shmem_file_write_iter
INFO: rcu detected stall in rawv6_setsockopt
possible deadlock in xfs_ilock
kernel BUG in __bch2_trans_commit
general protection fault in aml_open
general protection fault in aml_open
KASAN: stack-out-of-bounds Write in imageblit
KASAN: slab-use-after-free Read in stop_tty
WARNING in bch2_trans_put
WARNING: locking bug in rcu_pending_exit
INFO: rcu detected stall in x64_sys_call
WARNING: locking bug in rcu_pending_exit
WARNING in srcu_check_nmi_safety
WARNING: locking bug in rcu_pending_exit
BUG: unable to handle kernel paging request in bitfill_aligned
WARNING in ib_uverbs_release_dev
KASAN: slab-use-after-free Read in hci_sock_get_cookie
WARNING in io_ring_exit_work
INFO: rcu detected stall in corrupted
INFO: task hung in del_device_store
BUG: using smp_processor_id() in preemptible code in nft_inner_eval
INFO: task hung in __closure_sync_timeout
possible deadlock in ext4_evict_inode
BUG: unable to handle kernel NULL pointer dereference in __put_partials
WARNING: locking bug in rcu_pending_exit
WARNING in delayed_work_timer_fn
BUG: MAX_LOCKDEP_KEYS too low!
WARNING in kvm_dev_ioctl
WARNING in ieee80211_rx_list
possible deadlock in jfs_mount_rw
WARNING in _xfs_buf_alloc
WARNING: locking bug in rcu_pending_pcpu_dequeue
WARNING: locking bug in rcu_pending_exit
possible deadlock in mgmt_set_connectable_complete
INFO: task hung in blk_mq_get_tag
INFO: task hung in blk_mq_get_tag
INFO: rcu detected stall in chrdev_open
INFO: task hung in exit_mm
general protection fault in xlog_cil_push_work
WARNING: locking bug in rcu_pending_exit
possible deadlock in diFree
lost connection to test machine
general protection fault in xfs_buf_bio_end_io
WARNING: locking bug in rcu_pending_exit
WARNING in kernfs_get
INFO: task hung in __closure_sync_timeout
possible deadlock in nilfs_evict_inode
BUG: MAX_LOCKDEP_KEYS too low!
INFO: task hung in nfsd_nl_threads_get_doit
INFO: task hung in __alloc_workqueue
KASAN: vmalloc-out-of-bounds Write in tpg_fill_plane_buffer
WARNING: locking bug in f2fs_getxattr
INFO: task hung in ext4_evict_ea_inode
WARNING: locking bug in rcu_pending_exit
general protection fault in put_pwq_unlocked
INFO: rcu detected stall in rtnl_newlink
possible deadlock in f2fs_evict_inode
WARNING: locking bug in ext4_mb_add_groupinfo
WARNING in cleanup_mnt
WARNING: locking bug in rcu_pending_exit
INFO: task hung in genl_rcv_msg
KASAN: slab-use-after-free Read in move_to_new_folio
INFO: task hung in disable_device
general protection fault in ip6_pol_route
KASAN: slab-use-after-free Read in handle_tx
INFO: task hung in do_renameat2
INFO: task hung in jfs_commit_inode
INFO: task hung in f2fs_stop_gc_thread
general protection fault in gtp_dellink
BUG: MAX_LOCKDEP_KEYS too low!
general protection fault in wg_packet_receive
WARNING: locking bug in rcu_pending_exit
WARNING: locking bug in sco_sock_timeout
WARNING in bch2_fs_release
BUG: unable to handle kernel paging request in drm_fbdev_ttm_helper_fb_dirty
lost connection to test machine
INFO: task hung in ima_file_free
INFO: task hung in ima_file_free
possible deadlock in f2fs_record_stop_reason
general protection fault in __fib6_drop_pcpu_from
KASAN: slab-use-after-free Read in bch2_get_next_online_dev
WARNING: locking bug in rcu_pending_exit
WARNING in call_s_stream
general protection fault in put_pwq_unlocked
My instances used "dashboard_only_repro": true mode. @a-nogikh's hypothesis is that this is WAI: it just tried to reproduce only crashes that are notoriously hard to reproduce (syzbot has not managed to so far). This sounds plausible.
Then there may be another action item here:
- try to reproduce more of these
- try to produce the original crash in more cases (e.g. run all programs for longer, and then choose the program that triggered the original crash)
|
GITHUB_ARCHIVE
|
A NeoPixel strip is an LED strip where each individual LED can be independently controlled. This is the distinguishing feature of NeoPixel strips: the ability to program each LED on the strip to display any color and brightness you want.
NeoPixel strips find versatile application in various domains, including decorative lighting, wearable technology, artistic installations, and dynamic visual displays.
Here's what you'll need for this tutorial:
- Arduino + USB Cable.
- NeoPixel LED strip
- Jumper wires
- External power supply (needed for longer LED strips)
Installing Arduino IDE
To begin, make sure you have the Arduino IDE installed on your computer. If it’s not already installed, you can download it from the official Arduino website at https://www.arduino.cc/en/software.
Installing FastLed Library
Follow this step-by-step guide to the FastLED library, or see the steps below.
- Open the Arduino IDE on your computer.
- In the Arduino IDE, navigate to the “Sketch” menu located at the top of the screen.
- Within the “Sketch” menu, find and select the “Include Library” submenu.
- A list of available libraries for installation will appear. Scroll down the list until you locate “FastLED” and then click on it.
- A pop-up window containing information about the FastLED library will appear. In this window, you’ll find an “Install” button. Click on this button to initiate the library installation process.
- The Arduino IDE will now proceed to download and install the FastLED library. You’ll be able to track the progress via a status bar displayed on the screen.
- Upon successful completion of the installation, you will receive a notification confirming that the FastLED library has been successfully installed.
- With the library now installed, you can incorporate it into your Arduino sketches. To include the FastLED library in your code, simply add the following line at the beginning of your sketch: #include <FastLED.h>
- Prepare the NeoPixel LED Strip:
- Identify the three connectors on your NeoPixel LED strip: +5V (or VCC), Ground (GND), and Data In (DI).
- Note that some NeoPixel strips may have additional connectors, but for basic operation, you only need these three.
- Connect +5V and Ground:
- Connect a jumper wire from the +5V pin on your Arduino to the +5V (VCC) on the NeoPixel strip.
- Connect another jumper wire from the Ground (GND) pin on your Arduino to the Ground (GND) on the NeoPixel strip.
- This provides power to the NeoPixel strip and establishes a common ground reference
- Connect Data In (DI):
- Connect a jumper wire from a digital pin on your Arduino (e.g., Pin 4) to the Data In (DI) on the NeoPixel strip.
- Any free digital pin will work; the FastLED library generates the precisely timed data signal NeoPixels require in software, so a PWM-capable pin is not necessary.
- Power Supply (if needed):
- If you’re using a longer NeoPixel strip or multiple strips, it’s essential to provide an external power supply in addition to the Arduino’s power.
- Connect the +5V and GND wires from the external power supply to the corresponding pins on the NeoPixel strip. Ensure the grounds are still connected together with the Arduino.
- This additional power supply ensures that there’s enough current to drive all the LEDs on the strip properly.
- If you want to avoid powering the Arduino over USB, use the “stand-alone” setup as pictured below
Download test examples (sketches)
To get started you can download some of our test examples from our Git Tutorial Repository.
- Download sketch files: From the NeoPixel tutorial Repository.
- Programming: Write or load an Arduino sketch to control the NeoPixel LEDs. In your code, specify the pin you connected the Data In (DI) wire to (e.g., #define PIN 4).
- Upload Code: Connect your Arduino board to your computer using a USB cable and upload your Arduino sketch to the board. Follow this guide if this is your first time uploading code to an Arduino.
- Testing: Once the code is uploaded, your NeoPixel LEDs should respond to the instructions in your sketch, creating various lighting effects and patterns.
FastLED library cheatsheet
||Initializes the LED strip/matrix.||
||Sets overall LED brightness (0-255).||
||Sends the data in the buffer out to the LEDs.||
||Turns off all LEDs.||
||Delays program execution.||
||Adjusts LED color temperature.||
||Configures dithering for smooth color transitions.||
||Sets all LEDs to a specified color.||
||Sets brightness correction for LED strip type.||
||Initializes an LED matrix.||
||Clears data buffer, resets LEDs.||
||Sets maximum power consumption for LEDs.||
||Sets gamma correction curve for color correction.||
FastLED provides these pre-configured incandescent color profiles: Candle, Tungsten40W, Tungsten100W, Halogen, CarbonArc, HighNoonSun, DirectSunlight, OvercastSky, ClearBlueSky.
FastLED provides these pre-configured gaseous-light color profiles: WarmFluorescent, StandardFluorescent, CoolWhiteFluorescent, FullSpectrumFluorescent, GrowLightFluorescent, BlackLightFluorescent, MercuryVapor, SodiumVapor, MetalHalide, HighPressureSodium. FastLED also provides an “uncorrected temperature” profile: UncorrectedTemperature.
Play around with your code to get different results. If programming isn't your strong suit, try using ChatGPT to get the desired results.
Color Patterns: You can modify your code to change the colors displayed on the NeoPixel strip. For example, you can make it display a rainbow, cycle through different colors, or even respond to input (e.g., from a sensor).
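As an illustration of that rainbow idea, here is the position-to-hue mapping in plain Python (my own helper using the standard colorsys module; on the Arduino you would express the same mapping with FastLED's CHSV type or fill_rainbow(), and the 30-LED strip length here is just an assumption):

```python
import colorsys

NUM_LEDS = 30  # assumed strip length

def rainbow_frame(offset):
    """Return one frame of (r, g, b) byte tuples, hue shifted by `offset`.

    Each LED's position along the strip is mapped to a hue; animating
    `offset` over time scrolls the rainbow down the strip.
    """
    frame = []
    for i in range(NUM_LEDS):
        hue = ((i / NUM_LEDS) + offset) % 1.0         # position -> hue in [0, 1)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # full saturation and value
        frame.append((int(r * 255), int(g * 255), int(b * 255)))
    return frame

frame = rainbow_frame(0.0)
print(frame[0])   # first LED is pure red at offset 0: (255, 0, 0)
```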
Animation Effects: Experiment with various animation effects such as fading, blinking, or scrolling patterns. These can be achieved by adjusting the timing and brightness of the LEDs in your code.
User Interaction: Consider adding user interaction, like a button or sensor, to control the NeoPixel LEDs. For example, you could change the LED pattern when a button is pressed or adjust the speed of an animation based on sensor data.
Custom Patterns: Get creative and design your custom lighting patterns. You can create pixel art, simulate natural phenomena, or replicate famous light shows.
Online Resources: Explore online forums, communities, and tutorials for Arduino and NeoPixel projects. Many resources provide pre-written code for various effects that you can adapt to your project.
Remember that experimenting and tweaking your code is a great way to learn and achieve the desired lighting effects for your NeoPixel LED strip. If programming isn’t your strong suit, ChatGPT is a valuable resource for obtaining code snippets and troubleshooting.
|
OPCFW_CODE
|
The HP Compaq t5710 makes a great DOS and Windows 98 retro gaming machine. But how can you install Windows 98 on a system with no CD drive? Can you install from USB? Here’s how to do it.
What you need
- 32GB (or larger) USB stick
- HP Compaq t5710 thin client
- Windows 98 SE install media – ISO and boot floppy
- Windows 98 driver package
- Recommended utilities – 7-zip 9.20, DirectX 7.0a, WinSCP 4.39
These instructions will also work for other driveless PCs (e.g. HP t5000 Series thin clients) with some minor adjustments.
Step 1 - Download and install Easy2Boot
First download and install Easy2Boot. This is a super useful tool that can prepare a USB flash drive to boot almost any floppy or CD image - even when you don’t actually have a physical floppy or CD drive on your system. Perfect for our thin client build.
Easy2Boot likes to work with contiguous (not fragmented) files - this is why we want a 32GB or larger USB drive. With smaller drives, our images might get fragmented and we might run into weird install or boot issues.
When the install completes, the Make_E2B utility will launch. Just ignore and close this. Instead, open the install folder and find MAKE_E2B_USB_DRIVE.cmd. Run this batch script as Administrator:
Step 2 - Prepare USB install media
In the command window, select your target USB drive (in my case this is 5 - ADATA USB Flash Drive). Then hit Y to format the drive and 0 to set the default partition options. You’ll get one last warning. Hit OK to start the partition and format process.
Once the format is done, repeatedly hit Enter to accept the default options (we don’t need to do anything special here). When the process is complete the command window will turn green. Just hit Enter to close:
With the USB stick prepared, you should have two partitions:
- E2B, aka the “Easy2Boot” partition. Any ISOs or images you copy in here will be bootable via the Easy2Boot menu system
- E2B_PTN2, aka the “Easy2Boot data partition”. Any files you copy here will be mounted on the host operating system when we launch via Easy2Boot
Step 3 - Copy Windows install files
We’ll plan to install Windows 98 from the hard drive. This will make the install much faster, and it’s also super useful to have the install files on our hard drive so we don’t have to keep mounting the Windows 98 CD in future.
First, download the Windows 98 SE ISO. Double-click to mount it as a new drive.
Copy the win98 folder over to the Easy2Boot data partition (E2B_PTN2):
Next we need boot media. Download the Windows 98 boot floppy and copy it to the \_ISO\WIN folder on your Easy2Boot (E2B) partition.
Once the copy is done, we need to change the file extension. Rename the file, and change the extension to imgfdhd01. This tells Easy2Boot that this is a floppy image, and to mount the thin client internal drive as drive C - which is what we need for Windows 98 install:
Step 4 - Copy drivers and utilities
Lastly, download and copy the Windows 98 driver package to the Easy2Boot data partition (E2B_PTN2). This contains the chipset, graphics and audio drivers for our thin client hardware:
That’s our USB setup done. Remove the USB stick and switch over to the thin client.
Step 5 - Partition thin client internal hard disk
Next we need to prepare the thin client internal hard drive for Windows 98 install. Insert the Easy2Boot USB stick and power on the thin client. The system will recognize the USB drive and load the Easy2Boot menu system.
From the menu, select Windows Boot - this will load the Windows boot menu - and then select Windows 98 Second Edition Boot. This will boot from our Windows 98 floppy image:
On successful boot, you should be at the DOS prompt with the Windows 98 floppy image mounted. Type fdisk to launch the Microsoft Fixed Disk Partition Tool:
This is not a guide on how to use FDISK (there are plenty of others out there). Also the steps required here will vary depending on your hardware and disk setup. For my build, I did the following:
- Y to enable FAT32 support
- 3 to delete partitions, and then 1 to delete the Primary DOS partition
- 1 to create a new partition, and then 1 again to create a Primary DOS partition
- Y to use the maximum available partition size
- 2 to activate our new partition
- Esc a few times to exit
Once you’ve made partition changes, restart the system and boot from the Windows 98 floppy again:
Step 6 - Format thin client internal hard disk
Back at the DOS prompt, before we install Windows 98 we need to format the internal hard disk. Type format c: and then Y to start the format process:
Step 7 - Copy Windows install files, drivers and utilities to hard disk
With the format complete, it’s time to copy the Windows 98 install files (from Step 3 above) and drivers/utilities (from Step 4) to the thin client internal hard disk. You’ll find them mounted and available on drive D:. Use the DOS copy command to bring them across to drive C:
md C:\WIN98
copy D:\WIN98\*.* C:\WIN98
md C:\DRIVERS
copy D:\DRIVERS\*.* C:\DRIVERS
md C:\UTILS
copy D:\UTILS\*.* C:\UTILS
With everything safely on our internal hard disk, we’re ready to start the Windows 98 install process.
Step 8 - Install Windows 98
Run C:\WIN98\SETUP.EXE to launch Windows 98 setup. Follow all the defaults to install Windows 98:
Step 9 - Install 7-Zip
When Windows 98 loads, first install 7-Zip. We need this to extract the drivers. Double click to install:
Step 10 - Install drivers
With 7-Zip installed, extract and then install drivers in the following order. The sequencing is important here. If you do this out of order, it won’t work:
- IDE Hotfix (IDE Hotfix - q245682.exe). This is required to make the thin client IDE controller work correctly in Windows 98
- Chipset drivers (Chipset - 4in1435v\Setup.exe)
With these two installed, you can install the following in any order:
- GPU drivers (GPU - 6-2_wme_dd_cp_30314.exe)
- Audio drivers (Audio - vinyl_v700b\SETUP.EXE)
- Network drivers (Network - via_rhine_ndis5_v384a\WinSetup.exe)
You’ll need to reboot after each driver install - it’ll take a while. But once complete you’ll have a fully working Windows 98 install. Enjoy!
More t5710 articles
- I added a 3dfx Voodoo2 to a Thin Client PC - 15 Feb 2022
- Playing Quake 2 on 3dfx Voodoo 2, on Windows 98 SE, on my HP Compaq Thin Client. A minor miracle this works as well as it does. Also v happy with my railgun skills - still got it! - 13 Feb 2022
- HP Compaq t5710 – Using a PS/2 Splitter Cable - 13 Nov 2021
- HP Compaq t5710 – How To Install Windows 98 from USB Flash Drive with Easy2Boot - 11 Jul 2021
- HP Compaq t5710 Review – Great for DOS and Windows 98 Gaming? - 11 Jul 2021
- How To Install Windows 98 from USB Flash Drive with Easy2Boot - HP Compaq t5710 - 11 Jul 2021
- HP Compaq t5710 Review - The Best Mini PC for DOS Gaming? - 02 Jun 2021
- Time to build a DOS gaming PC. Went for the cheap option: HP Compaq t5710 Thin Client, 800MHz Transmeta Crusoe CPU, 256Mb RAM, Sound Blaster Pro hardware compatibility, 1x PCI slot. Will definitely add Voodoo graphics, maybe OPL3LPT - 19 Sep 2020
|
OPCFW_CODE
|
# OO Basics: Student
# I worked on this challenge [by myself].
# This challenge took me [#] hours.
# Pseudocode
# finish out the initialize method
# find the average of the list of test scores
# assign a letter grade based on the average
# Initial Solution
class Student
attr_accessor :scores, :first_name, :grade
def initialize(first_name, scores) #Use named arguments!
#your code here
@scores = scores
@first_name = first_name
@grade = nil
end
def average ()
sum = 0
@scores.each do |num|
sum += num
end
ave = sum / @scores.length
give_grade(ave)
end
def give_grade (ave)
if (ave >= 90)
@grade = 'A'
elsif (ave >= 80)
@grade = 'B'
elsif (ave >= 70)
@grade = 'C'
elsif (ave >= 60)
@grade = 'D'
else
@grade = 'F'
end
end
end
alex = Student.new("Alex", [100, 100, 100, 0, 100])
ben = Student.new("Ben", [85, 91, 81, 86, 95])
matt = Student.new("Matt", [70, 75, 81, 74, 69])
jen = Student.new("Jen", [100, 76, 82, 93, 65])
jane = Student.new("Jane", [50, 32, 52, 0, 0])
students = [alex, ben, matt, jen, jane]
student_names = students.map { |kid| kid.first_name }
def linear_search(arr, name)
index = -1
arr.each do |person|
if person.first_name == name
index = arr.index(person)
end
end
return index
end
def binary_search(arr, name, min = 0, max)
midpoint = (min + max)/2
if arr.include?(name)
if arr[midpoint] > name
return binary_search(arr, name, min, midpoint - 1)
elsif arr[midpoint] < name
return binary_search(arr, name, midpoint + 1, max)
else
return midpoint
end
else
return -1
end
end
# Refactored Solution
class Student
attr_accessor :scores, :first_name, :grade
def initialize(first_name, scores) #Use named arguments!
#your code here
@scores = scores
@first_name = first_name
@grade = nil
end
def average ()
sum = 0
@scores.map { |num| sum += num }
give_grade(sum / @scores.length)
end
def give_grade (ave)
if (ave >= 90)
@grade = 'A'
elsif (ave >= 80)
@grade = 'B'
elsif (ave >= 70)
@grade = 'C'
elsif (ave >= 60)
@grade = 'D'
else
@grade = 'F'
end
end
end
alex = Student.new("Alex", [100, 100, 100, 0, 100])
ben = Student.new("Ben", [85, 91, 81, 86, 95])
matt = Student.new("Matt", [70, 75, 81, 74, 69])
jen = Student.new("Jen", [100, 76, 82, 93, 65])
jane = Student.new("Jane", [50, 32, 52, 0, 0])
students = [alex, ben, matt, jen, jane]
def linear_search(arr, name)
index = -1
arr.each do |person|
if person.first_name == name
index = arr.index(person)
end
end
return index
end
def binary_search(arr, name, min = 0, max)
midpoint = (min + max)/2
if arr.include?(name)
if arr[midpoint] > name
return binary_search(arr, name, min, midpoint - 1)
elsif arr[midpoint] < name
return binary_search(arr, name, midpoint + 1, max)
else
return midpoint
end
else
return -1
end
end
# Driver Code
puts linear_search(students, "Ben")
puts binary_search(students.map { |kid| kid.first_name }.sort(), "Ben", 0, students.length-1)
# Reflection
# What concepts did you review or learn in this challenge?
# => I further solidified working with classes. I also learned some
# => new search techniques with the binary search method.
# What is still confusing to you about Ruby?
# => There are still some methods I'm trying to get the hang of. Also, since there are
# => so many methods, it can be difficult to choose exactly which one is the proper method
# => to choose.
# What are you going to study to get more prepared for Phase 1?
# => Ruby docs for sure. Making sure I know the enumerables and methods for hashes and arrays.
|
STACK_EDU
|
Saves and fills passwords automatically and flexibly. Can generate portable edition. USB/Bluetooth-based authentication available. Can import from RoboForm, others. Automatic backup of password database. Supports many browsers.
RoboForm import caught only half the items it should have. Can't handle even slightly nonstandard log-ins that other products can. Help system needs work.
Large Software Password Manager handles average sites but trips up on the unusual, missing some sites handled by the competition. Its password capture and fill-in features are unusually flexible and it supports a huge number of browsers, but the help system needs a rewrite. It's a decent one-point-oh release.
Not So Flexible After All
I have over 200 log-ins stored in RoboForm, so the first thing I tried after installing Password Manager was the option to import those passwords. RoboForm's database format is proprietary, so there's no way to just read it. Password Manager cleverly rips the text from the print preview window of RoboForm's print feature.
I was disappointed to find that the import process pulled in only 80 of the 200 sets of stored credentials. It did incorporate RoboForm's submenus as name prefixes—for example, the "Gmail" item in the "E-mail" submenu became "E-mail\Gmail." But it completely failed to capture more than half of the log-ins.
I started manually logging in to the sites whose log-ins wouldn't import and found that Password Manager successfully captured quite a few of them. For sites with straightforward username and password fields, it did fine. But for any site using a different style, it flopped. For example, one site uses fields named "code" and "pin" for log-in, and Password Manager just can't swallow that. RoboForm, 1-click, Identity Safe, Eikon and DigitalPersona all handled this site and many others that use mildly nonstandard log-in styles.
Many banks are now using complex multipage log-in procedures to guard against hacking. 1-Click records and plays back all the pages. ID Vault 4.0 has its own database of log-in procedures for about 8,000 financial and shopping sites; it knows exactly what to do for those. RoboForm and Identity Safe generally can't manage multipage log-ins. Eikon and DigitalPersona are both flexible enough to capture the multiple pages separately. But Password Manager is completely unable to capture or autofill these complex log-ins. For all the flexibility in its user interface, it's too rigid about what passwords it can capture.
There's one more oddity that I have to mention. The help system was clearly written by someone for whom English is a second language. Articles like "the" are mostly missing, and you'll find plenty of peculiar constructions like "It is always appears…" and "If you registering new account…" Worse, the help system is poorly indexed, it has too few pages with too much text on each, and some important topics aren't covered at all. Out of 11 pages in the Settings dialog, the help covers just one, and its description doesn't match the actual page. There's no explanation of the account editing dialog, the portable edition, USB-based authorization—I could go on! There's really no excuse for releasing a product with a help system this dismal.
Large Software Password Manager has many good points. It handles the basic task of managing standard username/password log-in accounts just fine. But it lags seriously behind the competition in its ability to manage the numerous Web sites whose log-in systems aren't plain vanilla. And the help system is a mess. It's a decent one-point-oh release, but there's still lots of work to be done.
|
OPCFW_CODE
|
This Tuesday is another event in a year-long series of weekly conversations and exhibits in 2010 shedding light on examples of Plausible Artworlds.
So far the series has featured projects and initiatives whose self-understanding is somehow “art” related, however tenuous their relationship to artworld-making may be. This week, however, we shift away from self-described “art” worlds altogether to strike up a conversation with the ‘volunteers’ at freenode (chat.freenode.net) – an Internet Relay Chat (IRC) network freely provided to a variety of groups and organizations. IRC itself is a bit like Skype without the business model — that is, a form of real-time conferencing, essentially designed for group communication in discussion forums, called channels.
freenode, formerly known as Open Projects Network, is a popular IRC network used to discuss peer-directed projects — such as Plausible Artworlds amongst countless others. freenode provides discussion facilities for the Free and Open Source Software communities, for not-for-profit organizations and for related communities and organizations. In 1998, the network had about 200 users and less than 20 channels. Ten years down the line the network currently peaks at just under 60,000 users and 10,000 channels, making it the largest free and open-source software-focused IRC network.
Though some aspects of freenode philosophy are specific to the workings of its medium, because the network exists to provide interactive services to peer-directed project communities, some of the group’s basic principles may prove invaluable to rethinking what we are calling artworlds. They include:
- Community members benefit from better access to each other. Putting a number of projects in close proximity in an interactive environment creates linkages and exchange between developers and projects.
- Communication and coordination skills are important to community projects. Peer-directed projects work because the paradigm works. Developers and community members are not unusually gifted at project coordination and communication. But improving those skills can make projects work better.
- Friendly interaction is more efficient than flaming. Calm, relaxed discourse without angry contention provides for better exchange of information.
- Project developers are self-driven. No one guarantees whose work will be used nor whether a project is worth doing. There is no single right approach to any design, implementation or support problem, and friendly competition is a fundamentally good thing.
- Peer-directed project communities need to grow. Many valuable peer-directed projects chronically lack skilled, motivated developers with time to devote to them. The potential base for peer-directed project communities includes anyone with the skills and interest to participate.
- Licensing must be free. For peer-directed projects to succeed, their creative output must be widely available and usable without significant restriction.
Many of the “plausible artworlds” we’ve been looking at could be described, strictly speaking, as “free nodes” of common desire, skill sets and exchange. Beyond its mere name, it may well be that freenode’s modus operandi too can shed light on the dynamics of more plausible artworlds.
See you all then!
Join us every Tuesday night – in person, or on Skype, skypename: ‘basekamp’
If you come to the potluck chat in person, be sure to bring a dish :)
Basekamp space: 723 Chestnut St, 2nd floor, Philadelphia usa
Click to join this week’s Potluck Chat on Skype:
|
OPCFW_CODE
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import json
import urllib2
import re
# Stock symbols - change these to match your favorites.
# Format is {SYMBOL: percent change to trigger}
# The trigger value (1.50 = 1.5%) is the PERCENT change at which a stock
# appears when continuous_ticker=False: if the stock price changes by more
# than -x% / +x%, it will be displayed.
stock_tickers = {"AAPL": 1.50, "NFLX": 1.00, "T": 1.50,"GE": 1.50,"BP": 2.0}
json_data= None
def stockquote(symbols=None):
global json_data
stock_table=''
symbol_list= symbols.split(",")
if not symbols:
return json.loads('{"Stocks":["none"]}')
# Stock web scraping code, I know this is not the best way ... its quick and dirty :)
# this can be revised to use an API-key based stock price tool
# Consider rewriting this section using https://pypi.python.org/pypi/googlefinance
# or using pandas-datareader https://github.com/pydata/pandas-datareader
try:
base_url ="https://finance.google.com/finance?q="+symbols
print "fetching... "+base_url
headers = { 'User-Agent' : 'Mozilla/5.0' }
req = urllib2.Request(base_url, None, headers)
html = urllib2.urlopen(req).read()
        # first use a regex to extract the JSON snippet with all the stock quotes we need
stock_table = re.search('"rows":(.*?)]}]', html)
json_string = stock_table.string[stock_table.start():stock_table.end()] # extract the json
json_string= "{"+ json_string + "}"
json_data = json.loads(json_string) #convert into a json data object
#print json_data
except Exception as e: print(e)
return json_data
symbols=''
for symbol, pct in stock_tickers.items():
symbols = symbols+ symbol+"," # comma delimited symbols list
symbols = symbols.rstrip(',') # strip off the last comma
print "Stocks: "+symbols
stock_data =stockquote(symbols)
#print stock_data
for s in stock_data["rows"]:
print s['values'][0]+' '+s['values'][2]+' '+s['values'][3]+' '+s['values'][5]
|
STACK_EDU
|
Web Development Project in Python
The internet is quite large: approximately 4.1 billion people use it to interact online. Reports put the number of websites at well over a billion, though the figure changes daily. Credit goes to the digital revolution and our rapid progress toward moving our operations online.
The advent of visually oriented web browsers in the 1990s marked the beginning of the Global Internet era for users. Since then, web technology has grown exponentially, and the trend toward web development is currently at its height. Sounds quite thrilling.
So why are you still waiting? This blog will help you start a career in web development by outlining exactly what you should study and how to put what you learn into action by creating projects and entering the field. It lists web development projects you can work on to gain the knowledge and abilities necessary to succeed in the profession.
What is Web Development?
Understanding web development is crucial before beginning any initiatives.
The work required to create a web-based application or website on the internet is known as web development; it primarily deals with the non-design technical aspects of creating websites. It is divided into three groups by experts:
Web development services include:
Additionally, full-stack web development combines both front-end and back-end techniques. Back-end web development deals with connections to databases, servers, and the like, while front-end web development deals with the visual side of a website: how users perceive its look and feel.
Use of Web Development
We know what web development is, but how can it be used? Naturally, to create websites!
The most significant application of web development is constructing websites. There are, however, a variety of additional motives for learning web development:
Building real-world projects is one of the best ways to learn and improve your coding abilities. Whether you are an aspiring or intermediate front-end, back-end, or full-stack developer, you should build an appealing portfolio to grow your career. But what projects are available to me? Will they stand out sufficiently?
How is Python suitable for web development?
One of Python's benefits for building web apps is that it is among the fastest languages for beginners to learn. Compared to C++ or Java, the language relies on common expressions and whitespace, considerably reducing the number of lines you need to write. After all, it reads much like natural language, which makes it easier to pick up.
Python web frameworks
Why are web frameworks important, and what are they?
A web framework is a collection of packages and modules of standardized code that facilitates the development of web applications and improves a project's reliability and scalability.
Python web frameworks are used only on the server side: they handle HTTP requests and responses, URL routing, database access, and web security.
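To see what these frameworks abstract away, here is a sketch of the raw server-side layer Python frameworks build on: a bare WSGI application, using only the standard contract of `environ` and `start_response` (the message text is invented for illustration):

```python
# A bare WSGI application: the low-level contract Python web frameworks build on.
# Frameworks like Django and Flask wrap this with routing, templating, and more.
def application(environ, start_response):
    # environ carries the HTTP request (method, path, headers) as a dict.
    path = environ.get("PATH_INFO", "/")
    body = ("You requested: %s" % path).encode("utf-8")
    # start_response sends the status line and response headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    # The return value is an iterable of bytes: the response body.
    return [body]
```

Any WSGI server (e.g. the stdlib `wsgiref.simple_server`) can serve this function directly, which is why frameworks remain interchangeable at the server level.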
Which prominent Python web frameworks are there?
Django and Flask are the most widely used Python web development frameworks.
Django is a high-level, open-source Python web framework that encourages clean, rapid development and good design. It is secure, scalable, and fast, and it provides thorough documentation and strong community support.
With Django you can build everything from quick mock-ups to much larger systems. For context, Instagram, Dropbox, Pinterest, and Spotify are among the biggest Django users.
Flask is considered a micro-framework: a minimalistic web framework. It lacks many of the tools that full-stack frameworks like Django ship with, such as account authorization and authentication. This is what is meant by being less "batteries-included."
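To give a sense of how lightweight Flask is, a complete application fits in a few lines. This is a minimal sketch; the route and message are arbitrary choices:

```python
# Minimal Flask application: one module, one route.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Flask sends the returned string as the HTTP response body.
    return "Hello, web development with Python!"

# To serve locally you would call app.run(port=5000, debug=True);
# it is not called here so the module can be imported without starting a server.
```

Debug mode (`debug=True`) auto-reloads on code changes and should be disabled in production.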
Other notable frameworks:
Which should you use?
Which framework should you pick? "It depends" is the answer. Consider your level of web development expertise: if you have a lot of experience, consider building your software with something more "barebones."
However, if you are a novice developer, it might be preferable to choose a framework like Django that offers greater support.
Ultimately, both can serve the same purpose; getting started with the code matters more than worrying about which framework is better.
Python libraries for web development
Here are some helpful Python libraries for software development to keep in mind:
A Roadmap for Python Web Development
Step 1: HTML and CSS
The first step in studying web development is to become familiar with HTML, the building block of webpages. Before starting a web development career, you should know how to build responsive static sites. Learning about the internet, HTTP, browsers, DNS, hosting, and more is also beneficial. Although it's not required, learning a CSS framework such as Bootstrap, which greatly accelerates your development, is another option.
Step 3: DOM & jQuery
Step 4: Python
Step 5: Django + Databases
Additionally, you'll need to learn about databases like SQLite: how to run queries and how to perform CRUD operations. With Django, you can set up your back-end environment and create the business logic, and with that you can build a full-stack application!
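The CRUD operations mentioned above can be tried directly with Python's built-in sqlite3 module. This is a minimal sketch; the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database so the example leaves no file behind.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create: a table and one row (parameterized queries avoid SQL injection).
cur.execute("CREATE TABLE quiz (id INTEGER PRIMARY KEY, question TEXT, answer TEXT)")
cur.execute("INSERT INTO quiz (question, answer) VALUES (?, ?)",
            ("What does CRUD stand for?", "Create, Read, Update, Delete"))

# Read: fetch the row back.
row = cur.execute("SELECT question, answer FROM quiz WHERE id = 1").fetchone()

# Update: change the stored answer.
cur.execute("UPDATE quiz SET answer = ? WHERE id = 1", ("Create/Read/Update/Delete",))

# Delete: remove the row again.
cur.execute("DELETE FROM quiz WHERE id = 1")
conn.commit()
```

The same four statement shapes (INSERT, SELECT, UPDATE, DELETE) carry over to any SQL database Django talks to.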
Example: Create your first web application in Python – a quiz application as a small project
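A minimal sketch of such a quiz, with hard-coded questions and helper names that are purely illustrative, might look like this:

```python
# A tiny console quiz: questions and answers are hard-coded for illustration.
QUESTIONS = [
    ("Which keyword defines a function in Python?", "def"),
    ("Which framework is 'batteries-included': Django or Flask?", "django"),
]

def grade(answers):
    """Count how many supplied answers match the expected ones (case-insensitive)."""
    score = 0
    for (_, expected), given in zip(QUESTIONS, answers):
        if given.strip().lower() == expected:
            score += 1
    return score

def run_quiz():
    # input() drives the interactive loop; grade() keeps the scoring logic testable.
    answers = [input(question + " ") for question, _ in QUESTIONS]
    print("You scored %d/%d" % (grade(answers), len(QUESTIONS)))
```

Separating `grade()` from the I/O in `run_quiz()` makes the same logic reusable later behind a web route.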
Screenshot of Output:
|
OPCFW_CODE
|
#include "blocks.h"
#include "metadata_disk.h"
#include "meta.h"
#include "list.h"
#include <linux/limits.h>
#include <inttypes.h>
#define CWD_LENGTH 128
char* normalize_path(char* resolved_path, char* cwd);
typedef struct stack_node
{
entry_disk* _entry;
unsigned int depth;
} stack_node;
void tree_print(char *metadata,int info)
{
list_t *stack;
list_create(&stack,sizeof(stack_node),free);
dinode_disk* di_n;
dentry_disk* d_en;
d_en = (dentry_disk*)(metadata + sizeof(dinode_disk));
stack_node *sn = malloc(sizeof(stack_node));
sn->_entry = &(d_en->tuple_entry[0]);
sn->depth = 0;
list_push(stack,sn);
while(list_get_len(stack)) {
stack_node cur_node;
list_pop(stack,&cur_node);
entry_disk *en = cur_node._entry;
di_n = (dinode_disk*)(metadata + en->dinode_off);
size_t n_dentries = di_n->n_dentries;
d_en = (dentry_disk*)((char*)di_n + sizeof(dinode_disk));
unsigned int i;
if (!info) {
for (i=0;i<cur_node.depth;++i) printf("-------");
printf("%s\n",en->filename);
} else {
printf("Filename :%s\n",en->filename);
printf("Dinode number :%ju\n",(uintmax_t)(di_n->dinode_number));
printf("Permissions :%3o\n",(di_n->permissions)&0777);
printf("User Id :%ju\n",(uintmax_t)(di_n->user_id));
printf("Group Id :%ju\n",(uintmax_t)(di_n->group_id));
printf("Time of access :%ju\n",(uintmax_t)(di_n->time_of_access));
printf("\n\n");
}
for (i=0;i<n_dentries;++i){
unsigned int j;
for (j=0;j<d_en->length;++j) {
en = &(d_en->tuple_entry[j]);
if (!strcmp(en->filename,".") || !strcmp(en->filename,"..")) continue;
stack_node *node = malloc(sizeof(stack_node));
node->_entry = en;
node->depth = cur_node.depth + 1;
list_push(stack,node);
}
d_en = (dentry_disk*)((char*)(d_en)+sizeof(dentry_disk));
}
}
list_destroy(&stack);
}
int query(char *metadata,char *path)
{
char *cwd = (char*)malloc(CWD_LENGTH*sizeof(char));
char *current_dir;
if(get_current_dir(&cwd,&current_dir) < 0){
free(cwd);
return -1;
}
char *resolved_path = (char*)calloc(PATH_MAX,sizeof(char));
realpath(path, resolved_path);
char* normalized_filepath; // careful: remember to free normalized_filepath
normalized_filepath = normalize_path(resolved_path,cwd);
free(resolved_path);
free(cwd);
if(normalized_filepath != NULL){
if(!strcmp(normalized_filepath,"")){
printf("Same directory\n");
}
else{
printf("The path in the hierarchy is: %s\n",normalized_filepath);
}
}
else{
printf("Error in normalization of the path\n");
free(normalized_filepath);
return 0;
}
int result = 1;
dinode_disk* di_n;
dentry_disk* d_en = (dentry_disk*)(metadata + sizeof(dinode_disk));
entry_disk *cur_entry = &(d_en->tuple_entry[0]);
char *token;
token = strtok(normalized_filepath,"/");
while(token!=NULL) {
di_n = (dinode_disk*)(metadata + cur_entry->dinode_off);
size_t n_dentries = di_n->n_dentries;
d_en = (dentry_disk*)((char*)di_n + sizeof(dinode_disk));
char matched = 0;
unsigned int i;
for (i=0;i<n_dentries && !matched;++i){
unsigned int j;
for (j=0;j<d_en->length && !matched;++j) {
cur_entry = &(d_en->tuple_entry[j]);
if (!strcmp(token,cur_entry->filename)) matched = 1;
}
d_en = (dentry_disk*)((char*)d_en +(sizeof(dentry_disk)));
}
if (!matched){
result = 0;
break;
}
token = strtok(NULL,"/");
}
free(normalized_filepath);
return result;
}
|
STACK_EDU
|
The first few labs in the SW Module are mostly a more in depth look at the SDK product provided with the Vivado Suite.
Lab 0 covers installing all the required tools. All of these steps were completed as part of the first set of labs, so there really wasn't anything to do.
They discuss the fact that the Editor is based on Eclipse IDE.
In Lab 1 they go into depth regarding the file we exported at the end of Lab 9 in the HW module: Z_system_wrapper.hdf. They explain that this file is the hardware definition file, that it is a zip archive, and that it includes the following:
and a description of each file and its purpose:
So that was pretty much Lab 1.
We explore setting up an SDK Workspace,
The SDK uses the concept of workspaces to hold your software development work. A workspace is a
directory in your file system that SDK uses to hold meta-information about the projects with which
you are working. The workspace also contains your SDK settings, software project files, and logs.
We import the Hardware definition file described in Lab 1.
After that we are able to browse the Hardware platform, which includes:
• Peripheral set
• Address map
• Datasheets to peripherals
• System block diagram
Remember, this is the file that we exported at the end of the Hardware Labs. For
hardware and software engineers working together, this file is the only item that needs to be
transferred from the hardware team to the software team. Software engineers working
exclusively in SDK do not need anything else from the rest of the Vivado project.
At this point we are instructed to look at the peripherals displayed in the HDF.
That's a shot of the SDK view. From this point we are instructed to pick out one of the peripherals in the SDK, and where to find datasheets and programming information on the various peripherals and IP.
And again we are given a view of the Hardware System Block Diagram:
So that concludes Lab 2.
Lab 3 is a continuation, where they start to cover the bsp, or Board Support Package.
The first exercise is to use the SDK to generate the bsp.
This is the BSP report that is available after you complete the BSP generation. All those things that look like hyperlinks, actually are. They are links to documentation on the Processor, peripherals, and sample code exercising the peripheral.
They guide you through the process of finding where all these files are on the filesystem, and how it relates to the system Workspace.
So Lab 3 is a continuation of getting familiar with the SDK, which will lead us to Lab 4, which will be to develop an application using sample code.
Now we had used the SDK in the HW labs, but this is a much more in-depth look at the files that are passed around, how to find the documentation you need to access the peripherals, where to find important information regarding the memory map of the processor, and what prebuilt libraries are available at your fingertips when using the SDK.
See you after Lab 4.
|
OPCFW_CODE
|
An assassin of darkness that approaches without form and sound.
A Tree that focuses on Debuffs and dealing damage over time [Curse]
A Tree that focuses on critically injuring enemies from the shadows [Dark]
Choose from 2 different Skill Trees.
※ The Damage and Debuff Chances of some Skills differ with the Skill Level.
Curses multiple enemies to deal damage over time for a certain duration.
Throws an envenomed dagger at the enemy to apply Wound and Poison for a certain duration and deal damage over time.
Attacks the enemy with a spark of Darkness.
Envenoms the weapon to apply Poison to enemies on Regular Attacks.
Absorbs the energy from a corpse to recover a certain amount of HP.
Reanimates a corpse to help aid in combat.
Continually absorbs HP and MP from the target for a certain duration.
Attacks the target and reduces its Movement Speed for a certain duration.
Increases Party Debuff RES for a certain duration.
Attacks enemies in a target area and deals damage over time for a certain duration.
Sets the enemy aflame with the Fire of Darkness, dealing damage over time for a certain duration.
Has a chance to dispel all Skill Enhancement Buffs for a target.
Detonates the Dark Energy deeply residing in one's heart,
Has a chance to greatly confuse the enemy and disable their Skill usage for a certain duration.
Poisons a selected area with the power of Darkness to deal damage over time.
Significantly increases your Debuff RES for a brief moment.
Enhances the Skill Damage of Cursed Pain, Cursed Fire, Demon, and Cursed Force
Attacks adjacent enemies.
Instantly pierces the enemy's heart.
Delivers a swift blow.
Retreats into an invisible veil to vanish from enemy sight and become immune to damage.
Throws Darkness powder to damage enemies. Has a chance to apply Blind.
Ambushes the enemy to deliver a critical blow.
Instantly moves in front of the enemy to throw Darkness powder and inflict damage.
Raises Party Defense Success Rate for a certain duration.
Increases Movement Speed and absorbs a portion of damage taken by forming a powerful shield for a certain duration.
Concentrates power and slashes downward at the enemy. Has a chance to apply Stun.
Focuses the enemy's weak points to deliver a critical blow.
Has a chance to remove all Debuffs from yourself or an ally.
Delivers a critical blow to nearby enemies with a spinning strike.
Strikes the enemy with a slash full of Dark Energy. Has a chance to apply Knockdown.
Attacks the enemy using a certain amount of HP proportional to Max HP
Significantly increases Defense Success Rate for a brief moment.
Summons evil spirits to reduce the Movement Speed of nearby enemies for a certain duration. Has a chance to apply Fear.
|
OPCFW_CODE
|
Where order of play is relevant, the extensive form must be specified or your conclusions will be unreliable. The distinctions described above are difficult to fully grasp if all one has to go on are abstract descriptions.
Suppose that the police have arrested two people whom they know have committed an armed robbery together. Unfortunately, they lack enough admissible evidence to get a jury to convict. They do, however, have enough evidence to send each prisoner away for two years for theft of the getaway car.
We can represent the problem faced by both of them on a single matrix that captures the way in which their separate choices interact; this is the strategic form of their game: Each cell of the matrix gives the payoffs to both players for each combination of actions.
So, if both players confess then they each get a payoff of 2 (5 years in prison each). This appears in the upper-left cell. If neither of them confess, they each get a payoff of 3 (2 years in prison each). This appears as the lower-right cell.
This appears in the upper-right cell. The reverse situation, in which Player II confesses and Player I refuses, appears in the lower-left cell. Each player evaluates his or her two possible actions here by comparing their personal payoffs in each column, since this shows you which of their actions is preferable, just to themselves, for each possible action by their partner.
So, observe: If Player II confesses then Player I gets a payoff of 2 by confessing and a payoff of 0 by refusing.
If Player II refuses, then Player I gets a payoff of 4 by confessing and a payoff of 3 by refusing. Therefore, Player I is better off confessing regardless of what Player II does. Player II, meanwhile, evaluates her actions by comparing her payoffs down each row, and she comes to exactly the same conclusion that Player I does. Wherever one action for a player is superior to her other actions for each possible action by the opponent, we say that the first action strictly dominates the second one.
In the PD, then, confessing strictly dominates refusing for both players. Both players know this about each other, thus entirely eliminating any temptation to depart from the strictly dominated path. Thus both players will confess, and both will go to prison for 5 years.
The players, and the analyst, can predict this outcome using a mechanical procedure, known as iterated elimination of strictly dominated strategies.
Player 1 can see by examining the matrix that his payoffs in each cell of the top row are higher than his payoffs in each corresponding cell of the bottom row. Therefore, it can never be utility-maximizing for him to play his bottom-row strategy, viz.
Now it is obvious that Player II will not refuse to confess, since her payoff from confessing in the two cells that remain is higher than her payoff from refusing.
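The iterated elimination procedure is mechanical enough to sketch in a few lines of Python. The payoffs below follow the matrix described above (2 for mutual confession, 3 for mutual refusal, 4 and 0 for the mixed cells); the helper names are my own:

```python
# Payoffs for the Prisoner's Dilemma as given in the text:
# keys are (Player I action, Player II action); values are (Player I payoff, Player II payoff).
PAYOFFS = {
    ("confess", "confess"): (2, 2),
    ("confess", "refuse"):  (4, 0),
    ("refuse",  "confess"): (0, 4),
    ("refuse",  "refuse"):  (3, 3),
}

def strictly_dominated(player, strategy, my_strats, their_strats):
    """True if some alternative beats `strategy` against every remaining opposing strategy."""
    def payoff(mine, theirs):
        cell = PAYOFFS[(mine, theirs)] if player == 0 else PAYOFFS[(theirs, mine)]
        return cell[player]
    return any(
        all(payoff(alt, t) > payoff(strategy, t) for t in their_strats)
        for alt in my_strats if alt != strategy
    )

def iterated_elimination():
    # Start with both actions available to each player; delete dominated ones until stable.
    strats = [{"confess", "refuse"}, {"confess", "refuse"}]
    changed = True
    while changed:
        changed = False
        for p in (0, 1):
            for s in list(strats[p]):
                if len(strats[p]) > 1 and strictly_dominated(p, s, strats[p], strats[1 - p]):
                    strats[p].discard(s)
                    changed = True
    return strats
```

For the PD, "refuse" is eliminated for both players, leaving mutual confession as the predicted outcome.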
|
OPCFW_CODE
|
I firmly believe in the need to create a more diverse and inclusive space in the sciences and to put equity and justice at the forefront of our work on the energy transition and climate change. As an interdisciplinary scientist focused on sustainability challenges, I am committed to 1) mentorship and teaching that encourages engagement by students who are under-represented in these fields, and 2) inclusive community science that places community needs at the forefront of research and action. Please see below for examples of my teaching and DEI-J work; you can find more on community science in my research tab.
Co-Creator, Course on Racism, Colonialism, and Extraction in the Geosciences
I co-led a course in MIT EAPS on the historical and social context of the geosciences. This course is run as a participatory seminar where we discuss relevant work on philosophies and methodologies of science, the geosciences in relation to extractivism and colonialism, and environmental justice and racism. We are in the second year of running the course.
Co-Author of Research on Gender Equity
One of my interests is providing data-driven research on JEDI in the geosciences. This work can be found in my publications tab. As I continue to work on further research in this area, I am always open to collaboration and ideas.
Co-Coordinator of Application Mentorship Program
I served as the co-coordinator for MIT EAPS’s application mentorship program, which aims to provide support for graduate applicants who may not have access to the mentorship they need in applying to graduate school. I coordinated the assignment of mentors, training of mentors, and evaluation of the program.
Student Representative on EAPS DEI-Committee
I have acted as one of two student representatives on MIT EAPS’s DEI-Committee. As a student representative, I collaborate with faculty, staff, researchers and students on and off the committee to set priorities, develop action plans, and implement them. My personal focus has been advocating for and serving on the hiring committee for a DEI-Officer, serving as an intermediary between the student body and the committee, and working to collect data on DEI relevant issues in our department on an annual basis.
Mentorship and Teaching
Research Advisor: Gabby Cazeres (Fall 2019 and Spring 2020), Yuka Perera (Summer 2020)
Teacher and Coordinator: Racism, Colonialism, and Extraction in the Geosciences
Teaching Assistant: People and the Planet: Environmental Governance and Science (Undergraduate Course, Fall 2021)
Kaufman Teaching Certificate Program, Spring, 2022
Outside of Academia, I have mentored and managed multiple interns at the Wilson Center and the Rock Environment and Energy Initiative.
|
OPCFW_CODE
|
A practical example of GSDMM in python?
I want to use GSDMM to assign topics to some tweets in my data set. The only examples I found (1 and 2) are not detailed enough. I was wondering if you know of a source (or care enough to make a small example) that shows how GSDMM is implemented using python.
Do you just need link of code?
It's better than nothing. But at minimum a brief explanation of the process would be ideal.
I finally compiled my code for GSDMM and will put it here from scratch for others' use. I have tried to comment on important parts:
# Imports
import random
import numpy as np
from gensim.models.phrases import Phraser, Phrases
from gensim.utils import simple_preprocess
from gsdmm import MovieGroupProcess
# data
data = ...
# stop words
stop_words = ...
# turning sentences into words
data_words =[]
for doc in data:
doc = doc.split()
data_words.append(doc)
# create vocabulary
vocabulary = ...
# Removing stop Words
stop_words.extend(['from', 'rt'])
def remove_stopwords(texts):
return [
[
word
for word in simple_preprocess(str(doc))
if word not in stop_words
]
for doc in texts
]
data_words_nostops = remove_stopwords(data_words)
# building bi-grams
bigram = Phrases(data_words_nostops, min_count=5, threshold=100)
bigram_mod = Phraser(bigram)
print('done!')
# Form Bigrams
data_words_bigrams = [bigram_mod[doc] for doc in data_words_nostops]
# lemmatization (assumes spaCy is available: import spacy; nlp = spacy.load("en_core_web_sm"))
pos_to_use = ['NOUN', 'ADJ', 'VERB', 'ADV']
data_lemmatized = []
for sent in data_words_bigrams:
doc = nlp(" ".join(sent))
data_lemmatized.append(
[token.lemma_ for token in doc if token.pos_ in pos_to_use]
)
docs = data_lemmatized
vocab = set(x for doc in docs for x in doc)
# Train a new model
random.seed(1000)
# Init of the Gibbs Sampling Dirichlet Mixture Model algorithm
mgp = MovieGroupProcess(K=10, alpha=0.1, beta=0.1, n_iters=30)
vocab = set(x for doc in docs for x in doc)
n_terms = len(vocab)
n_docs = len(docs)
# Fit the model on the data given the chosen seeds
y = mgp.fit(docs, n_terms)
def top_words(cluster_word_distribution, top_cluster, values):
for cluster in top_cluster:
sort_dicts = sorted(
mgp.cluster_word_distribution[cluster].items(),
key=lambda k: k[1],
reverse=True,
)[:values]
print('Cluster %s : %s'%(cluster,sort_dicts))
print(' — — — — — — — — — ')
doc_count = np.array(mgp.cluster_doc_count)
print('Number of documents per topic :', doc_count)
print('*'*20)
# Topics sorted by the number of document they are allocated to
top_index = doc_count.argsort()[-10:][::-1]
print('Most important clusters (by number of docs inside):', top_index)
print('*'*20)
# Show the top 10 words in term frequency for each cluster
top_words(mgp.cluster_word_distribution, top_index, 10)
Links
gensim modules
https://radimrehurek.com/gensim/models/phrases.html#module-gensim.models.phrases
https://radimrehurek.com/gensim/utils.html#gensim.utils.simple_preprocess
Python library gsdmm
GSDMM (Gibbs Sampling Dirichlet Multinomial Mixture) is a short-text
clustering model. It is essentially a modified LDA (Latent Dirichlet
Allocation) which assumes that a document, such as a tweet or any other
short text, covers a single topic.
GSDMM
LDA
Address: github.com/da03/GSDMM
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse import find
import math
class GSDMM:
def __init__(self, n_topics, n_iter, random_state=910820, alpha=0.1, beta=0.1):
self.n_topics = n_topics
self.n_iter = n_iter
self.random_state = random_state
np.random.seed(random_state)
self.alpha = alpha
self.beta = beta
def fit(self, X):
alpha = self.alpha
beta = self.beta
D, V = X.shape
K = self.n_topics
N_d = X.sum(axis=1)
words_d = {}
for d in range(D):
words_d[d] = find(X[d,:])[1]
# initialization
N_k = np.zeros(K)
M_k = np.zeros(K)
N_k_w = lil_matrix((K, V), dtype=np.int32)
        K_d = np.zeros(D, dtype=np.int32)  # integer dtype so K_d[d] can index arrays
for d in range(D):
k = np.random.choice(K, 1, p=[1.0/K]*K)[0]
K_d[d] = k
M_k[k] = M_k[k]+1
N_k[k] = N_k[k] + N_d[d]
for w in words_d[d]:
N_k_w[k, w] = N_k_w[k,w]+X[d,w]
for iter in range(self.n_iter):
            print('iter ', iter)
for d in range(D):
k_old = K_d[d]
M_k[k_old] -= 1
N_k[k_old] -= N_d[d]
for w in words_d[d]:
N_k_w[k_old, w] -= X[d,w]
# sample k_new
log_probs = [0]*K
for k in range(K):
log_probs[k] += math.log(alpha+M_k[k])
for w in words_d[d]:
N_d_w = X[d,w]
for j in range(N_d_w):
log_probs[k] += math.log(N_k_w[k,w]+beta+j)
for i in range(N_d[d]):
log_probs[k] -= math.log(N_k[k]+beta*V+i)
log_probs = np.array(log_probs) - max(log_probs)
probs = np.exp(log_probs)
probs = probs/np.sum(probs)
k_new = np.random.choice(K, 1, p=probs)[0]
K_d[d] = k_new
M_k[k_new] += 1
N_k[k_new] += N_d[d]
for w in words_d[d]:
N_k_w[k_new, w] += X[d,w]
self.topic_word_ = N_k_w.toarray()
Thanks, but I was looking more for practical examples like what you normally see on Medium or Towards Data Science. There are so many on LDA, but very few for GSDMM.
I sent you a proper link; if it isn't enough, let me know.
Thanks for that. I need a tutorial on how GSDMM is applied to assign topics to short texts using Python: for example, what code must be written and how the GSDMM package should be used (how to adjust the alpha and beta values, etc.) to arrive at the final answer.
I understood what you want. As I mentioned, LDA is the mother of GSDMM, so let's start with the many LDA samples at https://www.kaggle.com/search?q=Lda — you can see there how alpha and beta are adjusted, and so on.
As I understand it you have the code https://github.com/rwalk/gsdmm but you need to decide how to apply it.
How does it work?
You can download the paper A dirichlet multinomial mixture model-based approach for short text clustering; it shows that the cluster search is equivalent to a game of table choosing. Imagine you have a group of students and want to group them on tables by their movie interests. Every student (= item) switches in each round to a table (= cluster) that has students with similar movies and that is popular. Alpha controls how easily a table gets removed when it is empty (low alpha = fewer tables). A small beta means that a table is chosen more for its similarity to the student than for its popularity. For short text clustering you take words instead of movies.
Alpha, beta, number of iterations
Therefore a low alpha results in many clusters with single words, while a high alpha results in fewer clusters with more words. A high beta results in popular clusters, while a low beta results in similar (but less populated) clusters. Which parameters you need depends on the dataset. The number of clusters is mostly controlled by beta, but alpha also has (as described) an influence. The number of iterations seems to be stable after 20 iterations, but 10 is also OK.
Data preparation process
Before you train the algorithm you will need to create a clean data set: convert every text to lower case, remove non-ASCII characters and stop words, and apply stemming or lemmatisation. You will also need to apply the same process when you run the model on a new sample.
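These cleaning steps can be sketched with the standard library alone. The stop word list here is a toy one, and stemming/lemmatisation is left out; in practice NLTK or spaCy would supply both:

```python
import re

# A tiny illustrative stop word list; in practice use NLTK's or spaCy's.
STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of", "in", "rt", "from"}

def clean(text):
    """Lower-case, drop non-ASCII characters and stop words; return a token list."""
    text = text.lower()
    text = text.encode("ascii", "ignore").decode("ascii")  # remove non-ASCII chars
    tokens = re.findall(r"[a-z']+", text)                  # keep word-like tokens
    return [t for t in tokens if t not in STOP_WORDS]

# The same function must be applied to any new sample before prediction.
docs = ["RT The café is open!", "A short text about clustering"]
cleaned = [clean(d) for d in docs]
```

Note that dropping non-ASCII characters mangles accented words ("café" becomes "caf"); for multilingual data you would normalise with unicodedata instead.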
|
STACK_EXCHANGE
|
Note: Norm was the principal scribe on Thursday, Jim on Friday. The chat logs appear to have been incompletely captured. These minutes were constructed partly by hand by Norm.
Henry:We may need more time for the profiles document
Norm:We could conceivably have the whole of Friday afternoon if we moved the other stuff around a bit.
Henry:Far enough in the future that we don't have to plan it today.
Norm:Colocated with XML Prague is probably the next obvious opportunity.
... Anyway, we can focus on that later
< scribe> ACTION:A-237-01 Liam to update the charter page or link from the Processing Model WG homepage [recorded in http://www.w3.org/2013/09/26-xproc-irc]
Accept this agenda?
09 Oct 2013, accepted.
Alex raised some questions about alignment with serialization in the XQuery/XSLT 3.0 specs
Norm proposes retitling the "Abandon support for XPath 1.0" to "Align with XQuery/XSLT 3.0 specifications"
Alex:The new template stuff in XSLT 3.0, the ability to use curly braces in element content, is a lot like our p:template
... I also wonder about the issue I raised a while back:
Alex:It would be unfortunate to have different semantics than XSLT
< scribe> ACTION:A-237-02 Norm to review the curly-brace template stuff in XSLT wrt how it compares to p:template [recorded in http://www.w3.org/2013/09/26-xproc-irc]
Alex asks about step inventories
Norm:I think we should have a section where we consider new steps as part of the spec: promoting p:template and friends is one case, adding p:zip/unzip is another that occurs to me.
Some discussion of breaking the spec into two parts.
Some discussion of how versioning would work.
Norm:Is there anyone opposed, in principle, to having two specs?
Jim:I'm not against it, but I wonder if it'll be better for users
Alex:What about writing a primer?
Norm:That would have to be in our deliverables if we put it on the REC track.
< jf_2013> Alex proposal for non xml docs http://lists.w3.org/Archives/Public/public-xml-processing-model-wg/2012Oct/0006.html
< jf_2013> Vojtechs proposal http://lists.w3.org/Archives/Public/public-xml-processing-model-wg/2012Sep/0020.html
Henry:We've discussed but never come to conclusion on the question of adding the XQuery/XSLT invariant to XProc
The balance of Thursday was spent discussing various notes and drafts related to the requirements for V.next. Ultimately, the WG produced XProc V2.0 Requirements which it intends to publish as soon as possible.
The following random items of discussion are recorded:
Some discussion of ‘standalone'. General agreement that it's not useful enough to warrant any further consideration.
Some discussion of adding validation. Can it be represented as a decision tree? Not really, it's a lattice.
Norm scribbled the following diagrams on the board during this discussion.
Henry goes to the whiteboard
HT: I take it that you are not allowed to do xinclude if you choose the basic profile
Alex: we should add a sentence to section 2
Norm: dtd validation can't be done on an infoset
HT: I have seen people write docs which have dtd's which are meant to apply after xinclude
HT: don't want to require people to rewrite dtd where xinclude is allowed and where it isn't
HT: you could not do it in a pipeline, you would need a resource manager
Norm: I suppose you can, but it's not practically useful
Norm: an entity reference in pcdata would not be an entity reference
HT: what this means is, we just describe the set of sentences w/o the grammar
HT: this doesn't talk about ordering and it needs to
scribe losing some of the flavour of the conversation; whiteboard battles ensue, Henry v Norm
<alexmilowski> You are in a dark room with an XML document. You see a DTD and an XML Schema.
<alexmilowski> LOOK
<alexmilowski> You see an XML Document, a DTD, and an XML Schema. There is a door on the west wall and a hallway going east.
<alexmilowski> VALIDATE WITH DTD
<alexmilowski> The DTD contained a fatal dose of parameter entities. You died.
ACTION:A-237-03 Henry to deal with “23603” and “23606” (remove old school capitalization).
HT: this is the serious one about standalone (4.1)
Alex: we went down this road and said this doesn't work
HT: no one trusts it
Norm: all of these profiles explicitly ignore standalone, as it is widely untrusted
HT: there is nothing a processor can do with the standalone declaration
HT: pretty sure that the value of standalone has no impact on processor behaviour and therefore it can't have impact on infoset content, ergo not relevant
(HT attempts to revalidate this thinking)
Alex: reminds us of the XML spec: 'The standalone document declaration MUST have the value "no" if any external markup declarations contain declarations of:'
HT: what he might want, taken literally, is to treat standalone="no" as distinct from the standalone attribute being absent
HT: and that's contra the spec
HT: I think that's true
Alex: what's the default of standalone?
Norm: it's 'If there are no external markup declarations, the standalone document declaration has no meaning. If there are external markup declarations but there is no standalone document declaration, the value "no" is assumed.'
HT: we don't enforce any validity constraints in the profiles that don't involve validation
ACTION:A-237-04 Norm to draft response to 4.1 CMSCQ comments
ACTION:A-237-05 Henry to manage response to 4.3
ACTION:A-237-06 Henry to draft response to comment 4.4
HT: If CMSCQ wants a reference I will put it in
Norm: he wants us to do an analysis of existing processors and match up profiles
HT: we are OK to fill a small gap for those who are writing specs for XML applications
ACTION:A-237-07 Henry to respond to CMSCQ comment 4.5
HT gets back to standalone.
HT: the only way an error arises, the only document that violates the standalone document validity constraint, is one that says standalone="yes" but isn't (has something external that matters)
Alex: and you need a validating processor to tell this
HT: what that means is that with the default you never get an error
HT: it's the opposite, it's going to obscure errors where there are some
HT: for our purposes we are done
Norm: I think what he is asking for is different behaviours, which is somewhat invalid
HT: basic or full profile processors should produce an error in the presence of standalone="no" (Norm: only in the presence of DTD validation)
HT: What Michael is suggesting is that with standalone="no", a basic or id level processor should throw an error
HT: you've said 'I need external information' ... that's what standalone="no" means
HT: it's a kind of profile error
HT: the top level goal is to fix the XML spec, and we can't do that (requires a kind of profile level error)
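HT's point, that a non-validating processor records the standalone declaration but that it has no effect on the resulting infoset, can be observed with any DOM Level 3 parser, which merely surfaces the flag via getXmlStandalone(). A minimal Java sketch (the class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class StandaloneCheck {
    // Parse a small document with a plain (non-validating) JAXP parser.
    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        Document d = parse("<?xml version=\"1.0\" standalone=\"yes\"?><doc/>");
        // DOM Level 3 exposes the declaration, but the parsed content of
        // <doc/> is identical whether standalone is "yes", "no", or absent.
        System.out.println("standalone=" + d.getXmlStandalone());
    }
}
```

Without DTD validation there is nothing to check the declaration against, which is why the profiles can safely ignore it.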
Back to XProc and requirements and use cases
ACTION:A-237-08 Norm to editorialize usage of the term 'connection'
ACTION:A-237-09 Norm will send through 'we are dropping parameters' email
Working Group records its thanks to Henry and the University for magnificent hosting, Henry especially for the most excellent bacon rolls and cookies!
(IRC log: http://www.w3.org/2013/09/26-xproc-irc)
- 【Notice】According to the feedback received from users of our tactical pants, we have adjusted the size chart. Please see the detailed sizing information in the picture on the left instead of the default Amazon size chart. Choose the right hiking shorts according to Waist/Hips/Length in the size chart.
- 【Quick Dry and Breathable】Poly/Cotton Blend which is scratch-resistant, wear-resisting, breathable, wrinkle resistant. The coated ripstop fabric helps the cargo shorts for men effectively prevent stain, liquid, dust. This Men’s hiking shorts will keep you comfortable all day.
- 【Loose Fit】Designed with a loose fit through the seat and thigh, these men’s cargo shorts sit at the natural and comfortable waist. Double-layer hip that allows full range of movement, wear-resisting, durable, comfortable and flexible. These shorts stretch when moving! The innovative elastic waistband allows you to move freely and stay flexible in any position, maintain the comfort during the journey.
- 【Multi Pockets】Featuring multiple cross-over zipper and velcro pockets, there is plenty of room to securely store knives, multi-tools, keys, flashlights, and other essential gear! Keep your hands free.
- 【Occasion】The cargo shorts for men apply to heavy-duty operating, tactical, cycling, fishing, motorcycle, military, combat, hunting, climbing, hiking, working, outdoor adventure, etc. Also the perfect gifts for your husband, father and friends.
Details: Men’s Quick Dry Cargo Shorts Stretch Tactical Waterproof Sports Hiking Shorts with Zipper Pockets.
FEATURES:
- Stay comfortable and dry under the toughest conditions.
- Featuring multiple cross-over zipper and velcro pockets.
- These shorts are built to trek, but stylish enough to wear every day.
- Move freely and stay comfortable on-the-go with the innovative elastic waistband.
- Perfect for your next outdoor adventure or for daily activities, with durability you can rely on for years to come.
Specification:
Gender: Men/Male
Material: Breathable Poly/Cotton Blend
Sizes: Small/Medium/Large/X-Large/XX-Large/3X-Large/4X-Large/5X-Large
Package Includes: 1 x Tactical Shorts
Style: Camouflage Shorts / Cargo Shorts / Tactical Shorts
TIPS: The generic Amazon size chart is not our size chart. Please carefully refer to the detailed size chart in the picture on the left before you purchase. As they are tight-fitting shorts, we recommend ordering one size larger.
NOTES:
1. Due to differences in monitors and lighting, the actual color of the item might be slightly different from the color shown in the pictures. Thank you!
2. Please allow 1-3cm measuring deviation due to manual measurement.
**B Grade** GA-EX58-UD4P Intel X58 (Socket 1366) PCI-Express DDR3 Motherboard
Below is the original description for this product, any reference to warranty is to be ignored. Warranty for this item is 90 days as with all B Grade items.
B Grade items may have been used, have damaged packaging, missing accessories or a combination of these (Motherboards may be missing I/O shields).
Some items may have scuff marks or slight scratches but should otherwise be an operable product.
The EX58-UD4P was designed specifically to take advantage of the raw power of the next generation Intel® Core i7 processors and the Intel® X58 Express chipset, whose new evolution in computing architecture is able to deliver an amazing performance breakthrough from past processor generations. Replacing the Front Side Bus is the new Quick Path Interconnect, or QPI, whose 25.6 GB/sec transfer rate (double the bandwidth of the 1600MHz FSB) eliminates the communication bottleneck between the processor and chipset. The Intel® Core i7 processors also feature an integrated memory controller inside the processor die and support 192-bit 3-channel DDR3 memory that delivers a 50% memory bandwidth enhancement and lower memory latency for incredibly fast memory access. Additionally, the EX58-UD4P features Intel® Turbo Boost Technology, which is able to power down idle processor cores and dynamically reroute the power to the active cores for significant performance boosts, and at the same time, maintain greater energy efficiency.
- Support for an Intel® Core™ i7 series processor in the LGA 1366 package
- QPI 4.8GT/s / 6.4GT/s
- Chipset North Bridge: Intel® X58 Express Chipset
- South Bridge: Intel® ICH10R
- Memory 6 x 1.5V DDR3 DIMM sockets supporting up to 24 GB of system memory
- Dual/3 channel memory architecture
- Support for DDR3 2100+/1333/1066/800 MHz memory modules
- Audio Realtek ALC889A codec
- High Definition Audio
- Support for Dolby® Home Theater (Note 2)
- Support for S/PDIF In/Out
- Support for CD In
- LAN 1 x Realtek 8111D chip (10/100/1000 Mbit)
- Expansion Slots 2 x PCI Express x16 slots, running at x16
- 1 x PCI Express x8 slot, running at x8 (PCIEX8_1)
- 1 x PCI Express x4 slot
- 1 x PCI Express x1 slot
- 2 x PCI slots
- SLI Support with latest Gigabyte BIOS
More links for "**B Grade** GA-EX58-UD4P Intel X58 (Socket 1366) PCI-Express DDR3 Motherboard"
05-07-2020 01:07 AM
I am building PetaLinux fresh with my own hardware (Zynq US+). I have followed the steps in the PDF for this case and am hitting issues at the petalinux-build stage.
I am getting the following errors:
ERROR: fsbl-2019.2+gitAUTOINC+e8db5fb118-r0 do_compile: Function failed: do_compile (log file is located at /export/home/tking/bittware/build/tmp/work/plnx_zynqmp-xilinx-linux/fsbl/2019.2+gitAUTOINC+e8db5fb118-r0/temp/log.do_compile.2071
ERROR: Logfile of failure stored in: /export/home/tking/bittware/build/tmp/work/plnx_zynqmp-xilinx-linux/fsbl/2019.2+gitAUTOINC+e8db5fb118-r0/temp/log.do_compile.20711
ERROR: pmu-firmware-2019.2+gitAUTOINC+e8db5fb118-r0 do_compile: Function failed: do_compile (log file is located at /export/home/tking/bittware/build/tmp/work/plnx_zynqmp-xilinx-linux/pmu-firmware/2019.2+gitAUTOINC+e8db5fb118-r0/temp/log
ERROR: Logfile of failure stored in: /export/home/tking/bittware/build/tmp/work/plnx_zynqmp-xilinx-linux/pmu-firmware/2019.2+gitAUTOINC+e8db5fb118-r0/temp/log.do_compile.28201
(Logs are attached but renamed)
Has anyone seen this before and knows of a fix, or can help me get it fixed?
The other question I have: since these components can be built through Vitis, can I continue without them and build the rest of PetaLinux for my Vitis platform?
05-10-2020 05:38 AM
Could you please check whether I2C0 and I2C1 are enabled in your design? As you can see, the error message is coming from the FSBL, and I2C is needed for the board-specific configuration done in the FSBL. Or please share the HDF and I can try it out at my end.
You can read the wiki page below for more information.
Thanks & Regards,
05-15-2020 12:49 AM
Looking at Vitis, it says at least one of the I2Cs is enabled (I2C 1).
Here is my XSA for you to confirm (zipped, as the raw XSA failed to upload).
This issue seems to be with my XDMAPCIe block, which shows up in the Hardware Specification, but its defines are not generated in xparameters.h. This happens both in PetaLinux and when building in Vitis.
package com.adobe.dp.css;

import java.io.PrintWriter;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Vector;

public class CSSStylesheet {

    // Statements in document order; single-selector rules are also
    // indexed by their selector for fast lookup.
    Vector statements = new Vector();

    Hashtable rulesBySelector = new Hashtable();

    public void add(Object rule) {
        if (rule instanceof SelectorRule) {
            SelectorRule srule = (SelectorRule) rule;
            Selector[] selectors = srule.selectors;
            // Only single-selector rules can be indexed unambiguously.
            if (selectors.length == 1)
                rulesBySelector.put(selectors[0], rule);
        }
        statements.add(rule);
    }

    // Build "element", ".class", or "element.class" as a Selector tree.
    public Selector getSimpleSelector(String elementName, String className) {
        NamedElementSelector elementSelector = null;
        if (elementName != null)
            elementSelector = new NamedElementSelector(null, null, elementName);
        if (className == null)
            return elementSelector;
        Selector selector = new ClassSelector(className);
        if (elementSelector != null)
            selector = new AndSelector(elementSelector, selector);
        return selector;
    }

    // Look up the rule for a selector; optionally create and register
    // an empty rule if none exists yet.
    public SelectorRule getRuleForSelector(Selector selector, boolean create) {
        SelectorRule rule = (SelectorRule) rulesBySelector.get(selector);
        if (rule == null && create) {
            Selector[] selectors = { selector };
            rule = new SelectorRule(selectors);
            add(rule);
        }
        return rule;
    }

    // Write each supported top-level statement kind in document order.
    public void serialize(PrintWriter out) {
        Iterator list = statements.iterator();
        while (list.hasNext()) {
            Object stmt = list.next();
            if (stmt instanceof FontFaceRule) {
                ((FontFaceRule) stmt).serialize(out);
                out.println();
            } else if (stmt instanceof SelectorRule) {
                ((SelectorRule) stmt).serialize(out);
                out.println();
            } else if (stmt instanceof MediaRule) {
                ((MediaRule) stmt).serialize(out);
                out.println();
            } else if (stmt instanceof ImportRule) {
                ((ImportRule) stmt).serialize(out);
                out.println();
            } else if (stmt instanceof PageRule) {
                ((PageRule) stmt).serialize(out);
                out.println();
            }
        }
    }

    public Iterator statements() {
        return statements.iterator();
    }
}
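getRuleForSelector is a get-or-create lookup keyed on selectors, which therefore must implement consistent equals() and hashCode(). With modern Java collections the same pattern collapses to Map.computeIfAbsent. A self-contained sketch, with String standing in for Selector and a list of declarations standing in for SelectorRule (both stand-ins are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RuleIndex {
    // String stands in for Selector here; any key type must implement
    // equals() and hashCode() consistently for map lookups to work.
    private final Map<String, List<String>> rulesBySelector = new HashMap<>();

    // Get-or-create: the same contract as getRuleForSelector(selector, true).
    public List<String> ruleFor(String selector) {
        return rulesBySelector.computeIfAbsent(selector, s -> new ArrayList<>());
    }
}
```

Repeated calls with the same selector return the same rule object, so declarations accumulate in one place, which is exactly what the selector index in CSSStylesheet provides for single-selector rules.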