Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
using System;
using Liquid.NET.Constants;
using Liquid.NET.Utils;
namespace Liquid.NET.Filters.Array
{
public class LastFilter : FilterExpression<LiquidValue, ILiquidValue>
{
/// <summary>
/// TODO: Update to new structure
/// </summary>
/// <returns></returns>
public override LiquidExpressionResult Apply(ITemplateContext ctx, LiquidValue liquidExpression)
{
//return ApplyTo(ctx, (dynamic)liquidExpression);
var arr = liquidExpression as LiquidCollection;
if (arr != null)
{
return ApplyTo(ctx, arr);
}
var str = liquidExpression as LiquidString;
if (str != null)
{
return ApplyTo(ctx, str);
}
return ApplyTo(ctx, liquidExpression);
}
public override LiquidExpressionResult ApplyTo(ITemplateContext ctx, ILiquidValue liquidExpression)
{
return LiquidExpressionResult.Error("Can't ask for an element at that index. This is not an array or a string.");
}
public override LiquidExpressionResult ApplyTo(ITemplateContext ctx, LiquidCollection liquidArrayExpression)
{
if (liquidArrayExpression == null || liquidArrayExpression.Value == null)
{
return LiquidExpressionResult.Error("Array is nil");
}
var positionFilter = new PositionFilter(LiquidNumeric.Create(liquidArrayExpression.Count - 1));
return positionFilter.ApplyTo(ctx, liquidArrayExpression);
}
public override LiquidExpressionResult ApplyTo(ITemplateContext ctx, LiquidString liquidLiquidStringExpression)
{
if (liquidLiquidStringExpression == null || String.IsNullOrEmpty(liquidLiquidStringExpression.StringVal))
{
return LiquidExpressionResult.Error("String is nil");
}
var positionFilter = new PositionFilter(LiquidNumeric.Create(liquidLiquidStringExpression.StringVal.Length - 1));
return positionFilter.ApplyTo(ctx, liquidLiquidStringExpression);
}
}
}
|
STACK_EDU
|
M: Ask HN: a service that needs SMS - swah
I wanted to create a service that needs send SMS to people (Brazil). I know that Google Agenda, for example, has integration with some cellphone companies, so they provide these APIs to Google. But how could I, a no one with no BigCo behind me, do something like this?
R: hunterjrj
Take a look at CellTrust:
[http://www.celltrust.com/Products/SDKAPI/CellTrust-API-
Overv...](http://www.celltrust.com/Products/SDKAPI/CellTrust-API-
Overview.html)
I don't know if they have support for sending SMS to Brazil, but we did have
success with them here in Canada.
R: cmelbye
Twilio is great for that stuff. <http://twilio.com/>
R: jsgoecke
Give <http://tropo.com> a try. It also does voice, IM and Twitter along with
SMS.
R: swah
I wish there was some service that would allow me to send SMS for free, and
they could add a 30-char ad along...
R: vlbeta
<http://www.zeepmobile.com/> and <http://www.textmarks.com/>
Although I'm not sure if they support Brazilian carriers.
R: adrianwaj
<http://www.clickatell.com/>
R: swah
Surprisingly, this allows sending SMS to all operators in Brazil, and just
tested on mine. I wonder, how they did it? Did they have to contact each
operator and make a deal, or I'm missing something?
|
HACKER_NEWS
|
AzureFunctionApp@1 ERROR: Invalid connection string - The "AzureWebJobsStorage" connection string configured is invalid
Type: Bug
Task Name: AzureFunctionApp@1
Environment: Azure DevOps, Hosted Agent
The AzureFunctionApp@1 deployment task has upgraded itself today and we now get this ERROR logged as a warning:
##[warning]ERROR: Invalid connection string - The "AzureWebJobsStorage" connection string configured is invalid (e.g. missing some required elements). Please check the value of the app setting "AzureWebJobsStorage".
With task version: 1.211.1
We are using Key Vault References for the 'AzureWebJobsStorage' connection string.
e.g.
@Microsoft.KeyVault(SecretUri=https://blah)
The previous task version 1.209.0 did not report this issue and there has been no change on our end.
Replacing the Key Vault Reference with the actual connection string and the ERROR goes away.
e.g.
DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=yyyy==;EndpointSuffix=core.windows.net
There's also this other new weird error logged as a warning.
##[warning]ERROR: Function app is NOT VNet integrated.
Whoever said we needed our function app to be VNet integrated. Hardly an ERROR.
We have the same problem any update on this?
Facing similar issue with function app using keyvault reference.
The same problem seem to be here:
https://github.com/MicrosoftDocs/azure-docs/issues/27555
And it contains a workaround how ever it will give some downtime.
Having the exact same issue, everything works fine, just one giant error showing as a warning on the pipeline
The same problem seem to be here: MicrosoftDocs/azure-docs#27555 And it contains a workaround how ever it will give some downtime.
Don't think that is the same issue, our connection string resolves fine in the keyvault reference, only error is in the deployment pipeline. I did try set the specific version as they mentioned, but no joy
We are also not having any issue at runtime.
Our issue is that the deployment task reports phoney errors as warnings.
"Invalid connection string" -> The connection string is not invalid, as a key vault reference is perfectly valid.
"Function app is NOT VNet integrated" -> We don't have to do that. This is informational at best.
I am starting to think that the pipeline somehow needs permission to see the keyvault secret, or at least if its seeing a keyvault reference it should not try to see if its in a connection string format.
We do make use of vnets, but as soon as I set the actual connection string, it deploys perfect with no mention of the vnet issue
These phoney warnings\errors do not affect our deployments. The deployment succeeds with "2 warnings".
Yes, ours deploys fine as well, but seeing those warning/errors is not a very nice experience when viewing some of our larger pipelines, especially if there are actual errors that you need to concern yourself with
Had a look into the source now and I think I can see where the issue is coming from.
In AzureAppServiceUtility there is a method isAzureWebJobsStorageAccessible which has a call to AzureAppService method getConnectionStringValidation in there it calls of a service client with a connectionstringvalidation url.
Now I assume in that call it fails validation, returns a MalformedConnectionString status and then bubbles up that error.
So I am not sure if this issue would be solved in this repo or at the endpoint that is being called
Hello, we will look into this and will work on resolving this. @AltusBaard thanks for looking into this a bit further and this could be related to the Validation methods.
This issue is addressed in PR: https://github.com/microsoft/azure-pipelines-tasks/pull/17544 and it will be out in next release. Thank you for reporting.
This should be resolved in the 216 task version for both v1 and v2. Closing this and please re-open if this issue re-occurs.
|
GITHUB_ARCHIVE
|
July 11, 2020 — In this talk, we’ll discuss the importance of the SPF, DKIM, and DMARC email security protocols and how to use them to help prevent phishing and spam email. Without these protocols in place, other people may be sending emails under your domain name or even manipulating the content of your emails.
October 17, 2019 — As developers, we are constantly building tools to help make other people’s lives easier. How often have you stopped to focus on making your own job easier?
In this talk we will cover the three pillars of productivity, analyze our workflow for optimizations, and identify key leverage points that will allow us to exponentially increase our output.
August 8, 2019 — Caching can be tricky business. Fundamentally, the concept is simple: storing a temporary copy of data so future requests can be served faster. In reality, there are a number of different types of caching. Certain types of caching can effect our code in different ways. Data changes and the cache has to be invalidated. What happens if a caching layer runs out of memory or goes down?
This session will enlighten those new to caching as well as introduce some common misconceptions, pitfalls and strategies to caching in WordPress themes and plugins.
What caching is
What types of caching exist
What developers should know about caching when developing on WordPress
August 6, 2019 — Clients want to work with quality WordPress developers. They want to know that they can get their project done on time, in budget, and that the solution is going to be effective.
Project management aside, writing quality code is the key to delivering on all three of these expectations. If you write poor code, it is going to be harder to read and understand. As a result, it will take longer (and cost the client more money) to add new features. A lack of consistent quality also results in more bugs and a decreased effectiveness of the solution you are creating.
WordPress has a set of coding standards that are designed to provide a consistent level of quality for those who are contributing code to WordPress. However, it just got a lot easier to apply those standards to your own projects. Whether you are creating a theme or writing a plugin, you can leverage an “automated code mentor” to help you provide consistent quality to your clients.
Come learn how you can leverage the WordPress Coding Standards and automation to help you become a quality developer that clients can trust.
What coding standards are
Why you should care
A simple way to get started today
July 7, 2019 — Do you constantly feel like you are hacking code together? Do you find it difficult to maintain, adapt or even read code you’ve written previously? Chances are, you are not taking into consideration some of the basic principles of software architecture. Come learn how some simple habits and changes in perspective can transform you from a 1x developer to a 10x developer!
June 27, 2019 — The new WordPress block editing experience, code-named Gutenberg, was released the end of last year. At the time, some people chose to temporarily opt out of using the new editor. As the WordPress ecosystem introduces better and better integrations with the new editor, let’s take a closer look at what it looks like to use the block editor today.
This session will be a live demo with opportunities for interactive Q&A during the presentation.
January 17, 2019 — WordPress’ new text editor is awesome, but many people are still trying to learn how create their own blocks in Gutenberg. While there is certainly more to creating your own blocks than writing a simple shortcode, there are also some significant benefits. We’ll cover the basics of creating a Gutenberg block, how to use Gutenberg to create predefined page layouts, and how to make your blocks dynamic on the front end of the site!
January 9, 2019 — Do you constantly feel like you are hacking code together? Do you find it difficult to maintain, adapt or even read code you’ve written previously? Chances are, you are not taking into consideration some of the basic principles of software architecture. Come learn how some simple habits and changes in perspective can transform you from a 1x developer to a 10x developer!
How to keep your code simple and straightforward
How to reduce bugs in your code
How to become a more productive developer
February 20, 2018 — A friendly introduction to the WordPress REST API for both developers and non-developers. Learn what all the hype is about with the REST API, discover why you might want to use it and dig in with real-life examples and business scenarios.
A better understanding of what the REST API is, what the use cases are for it and the business reasons why it makes sense.
|
OPCFW_CODE
|
The Projection section allows the user to choose whether or not each newly created output file should either
use the same projection as the file that it was created from (Use Source File Projection option) or if each file
should use a projection specified by pressing the Projection button. If the user selects to specify an output
projection and they select a zoned projection like UTM or Gauss Krueger, they will have the option on the
projection selection dialog to have the best zone for the center of the output map automatically used if they
select the Automatically Select Best Zone option in the Zone box on the projection selection dialog.
The Setup Gridding (i.e. Tiling) button displays a dialog allowing the user to specify if and how to break up
each file being converted into multiple new files. This option is only available when converting to a raster or
gridded elevation format.
The Setup Sample Spacing button displays a dialog allowing the user to choose to either keep the sample
spacing of source raster and elevation files when converting the selected or to specify a new sample spacing to
resample all of the selected input files at when performing the conversion. This option is only available when
converting to a raster or gridded elevation format.
The Horz Datum selection allows the user to choose the horizontal datum that each newly created output file
should use. By default, each output file will use the same horizontal datum as the source file that it was
created off of. Alternately, the user can specify to have all output files created in NAD27, NAD83, WGS72,
or WGS84 with the appropriate offset being applied automatically.
The Vertical Units selection is present only for some conversions and controls the output elevation units.
The Palette selection is present only for some conversions and controls the palette, if any, used in the output
files. The palette values are defined as follows:
Image Optimized Palette - The palette generated will be an optimal mix of up to 256 colors that will
closely represent the full blend of colors in the source images. This option will generate the best
results, but can more than double the export time required if any high color images are present in the
export set. If all of the input data is palette-based and the combined palette of those files has 256
colors or less, then the combined files of the input file will just be used with no additional export time
Grayscale Palette - This palette consists of 256 scales of gray ranging from black to white.
DRG Optimized Palette - This palette is optimized for the exporting USGS DRG data. The palette
consists of only the standard DRG colors.
DRG/DOQ Optimized Palette - As the name suggests, this palette is optimized for exporting a mixture
of USGS DRG data and grayscale satellite photos (i.e. USGS DOQs). The palette consists of the 14
standard DRG colors with the remaining 242 colors being a range of gray values ranging from black
Halftone Palette - The palette consists of a blend of 256 colors evenly covering the color spectrum.
This palette is the best choice when exporting anything but DRGs and grayscale satellite photos.
Custom Palette from File - This option allows the user to choose a .pal file describing the palette to
use for the export. A .pal file should be a text file with one line per color, with the red, green, and blue
color components for each color in the palette separated by a comma. You can save a .pal file for an
existing palette-based file by opening the Overlay Control Center, selecting the palette-based layer,
press Options, then the Transparent Color button, then selecting the option to save a color palette file.
24-bit RGB - Create a full 24-bit color image with no palette. This will create the best resulting image
but will also take the most space.
Keep Same as Source File - The new file will use the same color encoding as the source file, either
palette-based, 24-bit RGB, multi-band, or grayscale.
Global Mapper User's Manual
Map GPS Coordinates - GPS Map Coordinates - GPSCoordinates Map
|
OPCFW_CODE
|
import { Method } from 'store';
/**
* Basic encryption types and methods
*/
export const encryptionTypes = {
caesar: 'caesar',
monoalphabetic: 'monoalphabetic',
polyalphabetic: 'polyalphabetic',
bigram: 'bigram',
};
export const encryptionMethods: Method[] = [
{ type: encryptionTypes.caesar, name: 'Цезарь' },
{ type: encryptionTypes.monoalphabetic, name: 'Моноалфавитный шифр' },
{ type: encryptionTypes.polyalphabetic, name: 'Полиалфавитный шифр' },
{ type: encryptionTypes.bigram, name: 'Биграммный шифр' },
];
/**
* Block encryption types and methods
*/
export interface BlockMethod extends Method {
withIv: boolean;
}
export const blockEncryptionTypes = {
aes256ecb: 'aes256ecb',
aes256cbc: 'aes256cbc',
aes256ctr: 'aes256ctr',
aes256cfb: 'aes256cfb',
aes256ofb: 'aes256ofb',
};
export const blockEncryptionMethods: BlockMethod[] = [
{ type: blockEncryptionTypes.aes256ecb, name: 'AES-256/ECB', withIv: false },
{ type: blockEncryptionTypes.aes256cbc, name: 'AES-256/CBC', withIv: true },
{ type: blockEncryptionTypes.aes256ctr, name: 'AES-256/CTR', withIv: false },
{ type: blockEncryptionTypes.aes256cfb, name: 'AES-256/CFB', withIv: true },
{ type: blockEncryptionTypes.aes256ofb, name: 'AES-256/OFB', withIv: true },
];
/**
* Asymmetric encryption types and methods
*/
export const asymmetricEncryptionTypes = {
rsa512: 'rsa512',
rsa1024: 'rsa1024',
// rsa2048: 'rsa2048',
// rsa4096: 'rsa4096',
// rsa8192: 'rsa8192',
};
export const asymmetricEncryptionMethods: Method[] = [
{ type: asymmetricEncryptionTypes.rsa512, name: 'RSA-512' },
{ type: asymmetricEncryptionTypes.rsa1024, name: 'RSA-1024' },
// { type: asymmetricEncryptionTypes.rsa2048, name: 'RSA-2048' },
// { type: asymmetricEncryptionTypes.rsa4096, name: 'RSA-4096' },
// { type: asymmetricEncryptionTypes.rsa8192, name: 'RSA-8192' },
];
/**
* Checksum types and methods
*/
export const checksumTypes = {
crc16: 'crc16',
crc24: 'crc24',
crc32: 'crc32',
fletcher: 'fletcher',
};
export const checksumMethods: Method[] = [
{ type: checksumTypes.crc16, name: 'CRC-16' },
{ type: checksumTypes.crc24, name: 'CRC-24' },
{ type: checksumTypes.crc32, name: 'CRC-32' },
{ type: checksumTypes.fletcher, name: 'Флетчер' },
];
/**
* Hashing types and methods
*/
export const hashingTypes = {
sha256: 'sha256',
sha512: 'sha512',
sha3: 'sha3',
};
export const hashingMethods: Method[] = [
{ type: hashingTypes.sha256, name: 'SHA-256' },
{ type: hashingTypes.sha512, name: 'SHA-512' },
{ type: hashingTypes.sha3, name: 'SHA-3' },
];
/**
* Since Javascript has two-character representation for Emoji
* then a method for obtaining real unicode codes is needed
*
* Reason: Javascript function String.prototype.charCodeAt() works fine only with UCS-2 format
* which is is a strict subset of UTF-16 - it can encode characters in the Basic Multilingual
* Plane (i.e., from U+0000 til U+FFFF) only. If you need to express characters in the
* supplementary planes (which includes some relatively rare Chinese characters), they must be
* encoded using pairs of two 16 bit code units ("surrogates"), and if so your data will not
* be valid UCS-2 but must be declared as UTF-16
*
* @param emoji symbol for getting the code
* @returns return the correct unicode code under UNICODE_RING_SIZE
*/
export function getUnicodeCode(emoji: string): number {
let comp;
if (emoji.length === 1) {
comp = emoji.charCodeAt(0);
} else {
comp = (emoji.charCodeAt(0) - 0xd800) * 0x400 + (emoji.charCodeAt(1) - 0xdc00) + 0x10000;
if (comp < 0) {
comp = emoji.charCodeAt(0);
}
}
return isNaN(comp) ? 0 : comp; // Extra check
}
|
STACK_EDU
|
Interfaces and hierarchy in Go (migrating from Java)
I'm rewriting some old Java app to Go. This application is built on JAXB technology and has classes hierarchy projectable to XML. I rewrote most of app without any problems but one feature seems for me is implemented not in a proper way. I'll give you example.
There is a basic abstract class Command with method toXML and a lot of descendants (really more than 1 level deep)
class Command {
String id;
String toXML() {
// every descendant of Command use this method to marshall its structure to XML
}
}
class SendCommand extends Command {
String sendData;
}
In Go I implemented this code below.
type ToXMLInterface interface {
ToXML()
}
type Command struct {
id string
}
func (Command) ToXML() {}
type SendCommand struct {
Command
sendData string
}
func (SendCommand) ToXML() {}
func ToXML(value ToXMLInterface) string {
m, _ := xml.MarshalIndent(value, "", " ")
return string(m)
}
But for me it looks a little weird and I would want to simplify it. Is there some way to do it more simple? (Maybe without implementing empty interface for each class?)
Go doesn’t have inheritance, don’t try to model types like you would in Java.
To complement JimB's comment, forget about type hierarchy and focus instead of the behaviour(s) that types exhibit.
Your approach doesn't look that off to me. A couple comments: there's already a way to encode in xml via encoding/xml.Marshaler. You're already using xml.MarshalIndent so it's not clear what value this added interface has.
Interface has no own value. It is some kind of "marker" interface, just to specify which kind of structs can be passed to marhalling function.
The major fact there is about interfaces in Go is that they work "backwards" to what you may be used to in other mainstream languages. In Go, types implement interfaces implicitly, and a type potentially implements any number of interfaces. This sounds weird but here's a simple example:
type T struct {}
func (t T) Foo() int {
return 42
}
func (t T) Bar() string {
return "42"
}
type F interface {
Foo() int
}
type B interface {
Bar() string
}
var f F = T{}
var b B = T{}
var t T
b = t
f = t
This is valid Go code (it won't compile as is but not because of incorrectness).
As you can see, the type T does not embed anything declared as type ... interface ... nor does it use any other way to "implement" interfaces F and B, and still it's easy to see it implements both of them.
This works because the checks for whether a type implements a particular interface happen at compile time or at runtime when Go has a value of some non-interface type and needs to type-convert it to some interface type.
I would recommend to read this classic piece on interfaces (written by one of the Go core devs) to get the full picture.
In your particular case, you only need to embed "more basic" types into "more specialized" types if some of the methods of these basic types are actually useful to be promoted to become methods of the specialized types.
I mean, if Command would have toXML which could be used "as is" by all the other types it gets embedded into, that would be a valid reason for embedding; otherwise there isn't.
Note that it could be guessed toXML is precisely the method which each "specialized" type needs to define diffetently, but suppose Command would also provide something like getStringID() which would be the same across all "specialized" types. Then it'd be a good idea to embed Command into all those types: they would get that method implemented automagically.
And in the end, when programming in Go, the rule of thumb is what JimB and jub0bs said: don't get drowned in designing type hierarchies; instead, implement types which perform particular tasks. Doing this, you might eventually detect that some set of types would benefit from embedding some common type or types; only then go on and do that.
Thanks for a complete explanation. As for remarks of JimB and jub0bs - this structs have no own behaviour, it's only used for interaction with other remote system
@ArtemZyuzko, then you really can just embed Command into them. Just do not think about this as inheritance; it's not some sort of egghead academic advice, it really helps to have a correct mental model for a given PL being used.
|
STACK_EXCHANGE
|
how can you adjust the current for charging from the network?
how can you adjust the current for charging from the network? there is no such setting
read and write
reading, it is still necessary to record, change
@heckmic Please subscribe this issue. It's about making
Maximum mains charging current 0.1A Uint 333 1 R/W
writable.
when can you add? a very useful function
My notes to simulate and test the feature locally.
[19:53:13][D][uart_debug:114]: >>> 01:03:01:4B:00:07:75:E2
# 01:4B -> 331
# 00:07 -> 7
[19:53:13][D][modbus_controller.select:014]: New select value 1 from payload
[19:53:13][D][select:015]: 'pow-hvm battery charging priority': Sending state PV priority (index 1)
[19:53:13][D][modbus_controller.sensor:025]: Sensor new state: 400.00
[19:53:13][D][sensor:127]: 'pow-hvm maximum charging current': Sending state 40.00000 A with 1 decimals of accuracy
[19:53:13][D][modbus_controller.sensor:025]: Sensor new state: 400.00
[19:53:13][D][sensor:127]: 'pow-hvm maximum mains charging current': Sending state 40.00000 A with 1 decimals of accuracy
[19:53:13][D][modbus_controller.sensor:025]: Sensor new state: 584.00
[19:53:13][D][sensor:127]: 'pow-hvm Eq Charging voltage': Sending state 58.40000 V with 1 decimals of accuracy
[19:53:13][D][modbus.number:023]: Number new state : 60.00
[19:53:13][D][number:012]: 'pow-hvm battery equalization time': Sending state 60.000000
[19:53:13][D][modbus.number:023]: Number new state : 120.00
[19:53:13][D][number:012]: 'pow-hvm equalization Timeout exit': Sending state 120.000000
[19:53:13][D][modbus.number:023]: Number new state : 30.00
[19:53:13][D][number:012]: 'pow-hvm two equalization charging intervals': Sending state 30.000000
[19:53:13][D][uart_debug:114]: <<< 01:03:0E:00:01:01:90:01:90:02:48:00:3C:00:78:00:1E:09:5F
0x01 0x03
0x0E # 14 bytes
0x00 0x01 # 331 (1) charging priority
0x01 0x90 # 332 (400) maximum charging current
0x01 0x90 # 333 (400) maximum mains charging current
0x02 0x48 # 334 (584) eq charging voltage
0x00 0x3C # 335 (60) battery equalization time
0x00 0x78 # 336 (120) equalization timeout exit
0x00 0x1E # 337 (30) two equalization charging intervals
0x09 0x5F # CRC
Please add these two number entities to your YAML and give it a try!
https://github.com/syssi/esphome-smg-ii/pull/42/files
work good, thenks)))
|
GITHUB_ARCHIVE
|
Recursive ORM models with overridden from_orm/model_validate behaviour change
Initial Checks
[X] I confirm that I'm using Pydantic V2
Description
In V1, An orm_mode=True model would call the from_orm method of referenced models. This doesn't seem to happen in V2.
The example attached is a modified version of the example from the V1 docs: https://docs.pydantic.dev/1.10/usage/models/#recursive-orm-models
The error produced is:
pydantic_core._pydantic_core.ValidationError: 2 validation errors for Person
pets.0.foo
Input should be a valid string [type=string_type, input_value=1, input_type=int]
For further information visit https://errors.pydantic.dev/2.1/v/string_type
pets.1.foo
Input should be a valid string [type=string_type, input_value=2, input_type=int]
For further information visit https://errors.pydantic.dev/2.1/v/string_type
In my real world code I store an integer in the database that is serialised to a string of a prefix + hash (based off the int) rather than just a trivial int -> str conversion.
Was this behaviour change intentional?
Example Code
# v1 version
from typing import Any, Self
from pydantic import BaseModel
class PetCls:
def __init__(self, *, name: str, species: str, foo:int):
self.name = name
self.species = species
self.foo = foo
class PersonCls:
def __init__(self, *, name: str, age: float = None, pets: list[PetCls]):
self.name = name
self.age = age
self.pets = pets
class Pet(BaseModel):
name: str
species: str
foo: str
class Config:
orm_mode = True
@classmethod
def from_orm(cls, pet: PersonCls) -> Self:
return cls(
name=pet.name,
species=pet.species,
foo=str(pet.foo)
)
class Person(BaseModel):
name: str
age: float = None
pets: list[Pet]
class Config:
orm_mode = True
bones = PetCls(name='Bones', species='dog', foo=1)
orion = PetCls(name='Orion', species='cat', foo=2)
anna = PersonCls(name='Anna', age=20, pets=[bones, orion])
anna_model = Person.from_orm(anna)
print(anna_model)
### V2 version
from pydantic import BaseModel, ConfigDict
from typing import Self
class PetCls:
def __init__(self, *, name: str, species: str, foo:int):
self.name = name
self.species = species
self.foo = foo
class PersonCls:
def __init__(self, *, name: str, age: float = None, pets: list[PetCls]):
self.name = name
self.age = age
self.pets = pets
class Pet(BaseModel):
name: str
species: str
foo: str
model_config = ConfigDict(from_attributes=True)
@classmethod
def model_validate(cls, pet: PetCls) -> Self:
return cls(
name=pet.name,
species=pet.species,
foo=str(pet.foo)
)
class Person(BaseModel):
name: str
age: float = None
pets: list[Pet]
model_config = ConfigDict(from_attributes=True)
bones = PetCls(name='Bones', species='dog', foo=1)
orion = PetCls(name='Orion', species='cat', foo=2)
anna = PersonCls(name='Anna', age=20, pets=[bones, orion])
anna_model = Person.model_validate(anna)
print(anna_model)
Python, Pydantic & OS Version
pydantic version: 1.10.11
pydantic compiled: True
install path: /Users/reubengow/Library/Caches/pypoetry/virtualenvs/acme-3p-C9Q-Z5i8-py3.11/lib/python3.11/site-packages/pydantic
python version: 3.11.4 (main, Jun 20 2023, 17:23:00) [Clang 14.0.3 (clang-14<IP_ADDRESS>.1)]
platform: macOS-13.4.1-arm64-arm-64bit
optional deps. installed: ['dotenv', 'typing-extensions']
---
pydantic version: 2.1.1
pydantic-core version: 2.4.0
pydantic-core build: profile=release pgo=false mimalloc=true
install path: /Users/reubengow/Library/Caches/pypoetry/virtualenvs/acme-3p-C9Q-Z5i8-py3.11/lib/python3.11/site-packages/pydantic
python version: 3.11.4 (main, Jun 20 2023, 17:23:00) [Clang 14.0.3 (clang-14<IP_ADDRESS>.1)]
platform: macOS-13.4.1-arm64-arm-64bit
optional deps. installed: ['typing-extensions']
Selected Assignee: @Kludex
This change was on purpose, but I think the issue isn't that from_attributes isn't working — it's that you can't override model_validate and have that affect nested validation.
So in particular, it's not using that method to parse foo by doing str(pet.foo).
Because of the use of the ORM classes it's not super easy for me to reproduce the issue using your code, but I suspect there's a way to use a field validator to get the behavior you want.
In particular, if you change:
class Pet(BaseModel):
name: str
species: str
foo: str
model_config = ConfigDict(from_attributes=True)
@classmethod
def model_validate(cls, pet: PetCls) -> Self:
return cls(
name=pet.name,
species=pet.species,
foo=str(pet.foo)
)
to
from typing_extensions import Annotated
from pydantic import PlainValidator
class Pet(BaseModel):
name: str
species: str
foo: Annotated[str, PlainValidator(str)]
model_config = ConfigDict(from_attributes=True)
it might work?
If you want different validation behavior, anything should be possible. I am not sure if we expose whether something is being validated from attributes on the ValidationInfo optional argument to validators, but if you need it and we don't, we could probably expose that as well.
Oh I realized you aren't actually using ORM classes, and it's fully self-contained. Making the change I suggested above does indeed get it to behave as desired, I think:
from pydantic import BaseModel, ConfigDict, PlainValidator
from typing import Annotated
class PetCls:
def __init__(self, *, name: str, species: str, foo: int):
self.name = name
self.species = species
self.foo = foo
class PersonCls:
def __init__(self, *, name: str, age: float = None, pets: list[PetCls]):
self.name = name
self.age = age
self.pets = pets
class Pet(BaseModel):
name: str
species: str
foo: Annotated[str, PlainValidator(str)]
model_config = ConfigDict(from_attributes=True)
class Person(BaseModel):
name: str
age: float = None
pets: list[Pet]
model_config = ConfigDict(from_attributes=True)
bones = PetCls(name='Bones', species='dog', foo=1)
orion = PetCls(name='Orion', species='cat', foo=2)
anna = PersonCls(name='Anna', age=20, pets=[bones, orion])
anna_model = Person.model_validate(anna)
print(anna_model)
#> name='Anna' age=20.0 pets=[Pet(name='Bones', species='dog', foo='1'), Pet(name='Orion', species='cat', foo='2')]
|
GITHUB_ARCHIVE
|
Refer to the HP MC/ServiceGuard Manual for more detailed information
/etc/cmcluster/cmclconf.ascii: Defines members of the cluster, lan interfaces, shared disks, and all other global resources for the cluster. To apply modifications, halt all packages, halt the cluster, and run cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii.
/etc/cmcluster/packagedir/package.conf: defines package-specific parameters. Includes SUBNET directive, which should be set to the network where the "floating" package IP address resides. Do not edit this file while the package is running. To apply changes, halt the cluster and run cmapplyconf -v -P /etc/cmcluster/packagedir/package.conf. NOTE: The package.conf files must be identical on all servers in the cluster.
/etc/cmcluster/packagedir/package.cntl: control script for the package. Do not edit this file while the package is running. To make changes, halt the package, edit the file, and restart the package. NOTE: The package.cntl files must be identical on all servers in the cluster.
Commands: (note that all commands should be run with -v option for verbose
cmviewcl -v : shows status of the cluster, including lan interfaces, status of all nodes, current location of packages, critical subnet monitoring, and whether failover is enabled for each package.
cmapplyconf -v -[C|P] filename: used to create/modify Serviceguard binary files. Required after changes to cmclconf.ascii (-C option) and package.conf files (-P option). Should not be run while cluster is active.
cmhaltpkg -v package_name : halts package.
cmrunpkg -v package_name : runs package on the node where command is executed. Use cmrunpkg -n node_name -v package_name to specify a different node.
cmmodpkg -v -e package_name : enables failover for the specified package. You can see if failover is enabled using the cmviewcl -v command.
cmhaltcl -v : halts the cluster. Use cmhaltpkg first to halt each package.
cmruncl -v : starts the cluster and all packages
Working with shared disks
In some situations, you will need to access shared cluster disks without the associated package running. This is common with Oracle database setups, where stopping Oracle will trigger a package shutdown and leave the package disk(s) inaccessible. Follow these steps:
· Identify the shared volume group for the package - grep VG /etc/cmcluster/package_name/package_name.cntl.
· Identify the lvols for each shared VG and their mount points
· Halt the package
· For each shared VG:
· vgchange -c n /dev/vgXX
· vgchange -a y /dev/vgXX
· Mount each lvol to appropriate mount point
Once the work is done, the VGs must be returned to cluster mode:
· umount each lvol from the shared VG
· vgchange -a n /dev/vgXX
· vgchange -c y /dev/vgXX
· cmrunpkg -v package_name
If the VGs are reluctant to enter cluster mode, you can also halt the cluster, run cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii, and restart the cluster. This will reset the clustering flag on all VGs.
|
OPCFW_CODE
|
Update badge
As an admin of ZITADEL with a self hosted instance I want to see when I have to update ZITADEL, so I do not have old code running.
Acceptance Criteria
[ ] I can see which version I am running
[ ] I can see an update badge so I know I need to update
[ ] I can click the update badge and it starts updating?
Additional Context
We should create a update badge in the admin section of console
More or less like gitlab does this (Screen Shot).
Hi @fforootd and @mffap, how about this?
https://github.com/zitadel/zitadel/assets/30386061/7a81b126-b90e-4ebd-b007-7fe0d87016a0
In this video, I'm pretending that 2.30.0 is not the latest version, so the 'update asap' message is shown. If you click on the badge you're redirected to Zitadel's releases pages, I'm not sure if I can implement a click to update action so as a workaround I think it's enough to point the user to Github.
My proposed code is developed in the frontend, if @peintnermax agrees. I fetch (HttpClient) Github's API URL: https://api.github.com/repos/zitadel/zitadel/releases/latest to know which is Zitadel's latest release and get its tag_name and html_url
I've thought about translating 'update asap' but asap is hard to translate so I personally would prefer to keep that badge untranslated.
Any thought, changes?
Thanks!
Nice work again, thank you!
I think we can keep this untranslated for the moment.
Just a thought that occurred to me while thinking about this.
There might be two "update states"
Update ASAP - when we need to advise people to pull an update fast due to security/performance reasons
Update - when people just should update
I had not yet a great idea how to solve this 😁 but I will give it a thought today
Maybe it is easier that we deploy a /update.json on our marketing page to read where we could read off versions with a tag (critical = true/false) or something similar.
Hi @fforootd,
I agree with your proposal with two states and if you could provide a update.json it would be even better as right now I'm requesting GitHub's API using its "free" tier and if you reach the requests limit it would affect how the badge work (low probability, of course).
I'll wait till you have time to grasp the best solution for the update.json endpoint.
Thanks!
Ok I think I got an idea
We could do an API with zitadel.com/update?version={version}
The response can then be something like this:
{
"version": "v2.29.0",
"critical": true,
"message": "please update your app to the latest version",
"url": "a url to a message we want to show the user"
}
Thanks @fforootd, your suggested json response looks perfect, let's see if I get how to use it.
As we're using a badge then we would query the API passing my current version and we have two possible messages for the badge: update (if critical is false) or update asap (if critical is true), right? Maybe we can set different background colors for these two states so ASAP is in red.
Now, if we click ok the badge we would be redirected to the URL set in json response, don't we? Showing the release note for example.
But the message that comes in the API response would be used un a tooltip for the badge, or where should we use it?
Thanks @fforootd, your suggested json response looks perfect, let's see if I get how to use it.
As we're using a badge then we would query the API passing my current version and we have two possible messages for the badge: update (if critical is false) or update asap (if critical is true), right? Maybe we can set different background colors for these two states so ASAP is in red.
Now, if we click ok the badge we would be redirected to the URL set in json response, don't we? Showing the release note for example.
But the message that comes in the API response would be used un a tooltip for the badge, or where should we use it?
Yeah you got this right, redirect should point to infos like release notes or security warnings and the message should be for a tooltip.
Cool, thanks @fforootd. I'll wait until the endpoint is ready as I can't code and endpoint for zitadel.com. Time to move to another issue 😄
Hehe let me cook up something ASAP
Hey I created a poc for this
I added an additional field "latest_version" for the gui to render.
https://website-git-updates-zitadel.vercel.app/updates?version=v2.29.0
It can react in three ways currently:
No version found (version pre 2.29.0 will not be found ATM)
No version provided
Version found
Hi @fforootd, thanks for the POC.
I've added the POC endpoint to my code. Here are two screenshots. The first one shows current situation: instance version 2.30 and endpoint response only suggests to update:
The second is simulating that 2.31.1 is a critical update:
The tooltip has been added with a default 'please update your app to the latest version' if no message is in the response.
What do you think? I've added the suggested version to the badge but I may leave it to update or update asap
Nice work.
You bring up a good point. I currently only have the latest version inside the response but not a update to version >v2.31.1 or something similar. But looking at this such an information makes sense to me.
What do you think of this?
As we can see in this discussion https://gitlab.com/gitlab-org/gitlab/-/issues/344682#note_1233579698 alerting users is somewhat complicated but I like simple things.
I think we should keep the badge simple: "update" or "update asap", that's enough for a sensible administrator and more is useless to careless admins.
I'd prefer the API to tell me which is the latest version (latest_version), if I'm running a version that has critical security updates (critical) and which url has information about this critical situation (url that points me to zitadel.com information and how I should update). Those are covered with your proposed endpoint.
A more complete message in the response would be useful if, for example, we decide to include a warning when you log in the console and the API says we have critical situation! But in that example (screenshot) how could we filter which users should see that message in the console? If we only use the message response for a tooltip for the badge then I don't think a complex message is needed.
As the badge is shown only when you visit your instance, we should warn somewhere else in the console if something critical affects us. I run Matomo for my blog analysis and it shows me a badge e when I do need to update as soon as I log in.
@fforootd this issue in crypto/tls (https://github.com/golang/go/issues/61460) would be a good example for this badge
@fforootd this issue in crypto/tls (golang/go#61460) would be a good example for this badge
Yes, true that make absolut sense. Let me work out the last few things on the API 😁
@fforootd this issue in crypto/tls (golang/go#61460) would be a good example for this badge
Yes, true that make absolut sense. Let me work out the last few things on the API 😁
@fforootd what's the state of this work?
This feature does (not yet) have an ETA since I did not prioritize it to high.
What we have is a POC on both ends. The UI as well as an API.
I would recommend we move this to the regular sprint cycle and discuss priority.
cc @hifabienne
This feature does (not yet) have an ETA since I did not prioritize it to high.
What we have is a POC on both ends. The UI as well as an API.
I would recommend we move this to the regular sprint cycle and discuss priority.
cc @hifabienne
From my perspective this feature is not really urgent and doesn't need high priorisation. Thats why it is in the backlog right now.
If you do see that differently I can take the necessary steps to estimate and plan the feature.
We should also consider showing a message in the logs, f.e. when an important update is due. Not sure what's better: showing this on startup only or also logging an info when we receive that information.
With that notifications could be automated by operators. Maybe there is also a more standardized way of doing this?
We should also consider showing a message in the logs, f.e. when an important update is due. Not sure what's better: showing this on startup only or also logging an info when we receive that information. With that notifications could be automated by operators. Maybe there is also a more standardized way of doing this?
Hm, yeah we could integrate an upgrade check and output a stdout message for example daily.
|
GITHUB_ARCHIVE
|
Brilliantfiction Beauty and the Beasts update – Chapter 1272 – TeaChapter a Lesson measly deceive suggest-p1
Novel–Beauty and the Beasts–Beauty and the Beasts
Chapter 1272 – TeaChapter a Lesson unadvised annoy
Finding the entire family members obtaining along amicably, the fox tribal brain was amazed. Anything similar possessed taken place to his family, even so the consequence was totally different.
Muir’s entire body shook, and this man subconsciously looked toward Bai Qingqing.
Morrigan’s Cross – Circle Trilogy 1
Finding the full loved ones finding along amicably, the fox tribal travel was astonished. A little something equivalent acquired happened to his family, though the end result was different.
the hot zone hulu
“Parker!” Bai Qingqing furiously bellowed and walked in excess of swiftly.
Winston discontinued and walked approximately him for shut down eye-to-eye contact. Even though he was an individual levels weaker, his frame of mind was no weaker than his. Rather, his disposition of an individual within a exceptional place, devising methods and options. He was quoted saying each word very definitely, “Your protection isn’t an item that issues only by yourself, both.”
Muir’s human body shook, in which he subconsciously searched toward Bai Qingqing.
Princess Of Passyunk
Chapter 1272: Educate a Class
Winston questioned, “Do you do have a communication you will need us to pa.s.s through to Arthur?”
In comparison, not merely was his mate a whole lot worse off, but even he couldn’t do a comparison of with Winston.
Winston promptly bundled up their bags. Just like they had been going to keep, he was discontinued by the fox tribal brain.
Parker smacked the fox in the surface with one come to, tiny bit at his shoulder blades, and tugged fiercely. The fox’s distinct scream immediately rang out.
Winston requested, “Do you do have a information you may need us to pa.s.s onto Arthur?”
Winston replied using an “en”. He walked in to the bedroom and stuffed up their suitcase. While doing so, he explained to Muir, “You stay in this article to relax. Following you’re carried out relaxing, go straight back to the area of Beastmen to manage the youngsters.”
Viewing the full family members finding along amicably, the fox tribal head was astonished. A little something very similar obtained took place to his spouse and children, but the consequence was very different.
“Right is our family’s kid! This isn’t just your task!”
Malcolm sensed lose faith. Within the beastman king’s horrifying strain, he instinctively changed into his monster shape. That has a bang, he swelled up and tore his best wildlife skin area skirt he especially have on so as to match Bai Qingqing. Then he fled easily.
Winston inquired, “Do you have a content you need us to pa.s.s onto Arthur?”
Arthur was reproached and driven out even though this female’s partner continued producing considerations for the men who hadn’t taken good care of these little ones. Her other buddies didn’t have issues, frequently.
Muir experienced a pang in his coronary heart almost like it had been remaining filled up by some thing. This made him uncontrollably get onto his heart and soul.
The City Who Fought
Malcolm was elated. He suppressed the frustration within his coronary heart, forced himself to remain composed, then walked up.
She, far too, checked through that has a worried term. “You can just remain on this page. If that continues, you are going to collapse.”
Winston asked, “Do you do have a content you need us to pa.s.s to Arthur?”
Winston speedily crammed up their luggage. Quite as they had been getting ready to depart, he was discontinued via the fox tribal go.
It turned out another four-striped beastman…
Parker smacked the fox into the flooring with just one attack, little at his arm, and tugged fiercely. The fox’s razor-sharp scream immediately rang out.
“Let’s fixed out of straight away!” Bai Qingqing said to Winston decisively.
Chapter 1272: Educate a Class
Chapter 1272: Show a Idea
Experiencing the whole family acquiring along amicably, the fox tribal mind was amazed. A thing identical acquired occurred to his household, though the effect was totally different.
“Cough!” Wasn’t this… also strong? Bai Qingqing couldn’t carry it and choked in her individual saliva.
|
OPCFW_CODE
|
const initViewMarkup = (wrap: HTMLElement, vertical: boolean) => {
const controlWrap = wrap;
const contolMarkup: string = `<div class="ts-slider__container">
<div class="ts-slider__bar">
<div class="ts-slider__toggle ts-slider__toggle--min">
<div class="ts-slider__toggle-value ts-slider__toggle-value--min"></div>
</div>
<div class="ts-slider__toggle ts-slider__toggle--max">
<div class="ts-slider__toggle-value ts-slider__toggle-value--max"></div>
</div>
<div class="ts-slider__range"></div>
</div>
</div>`;
controlWrap.innerHTML = contolMarkup;
const div = controlWrap.querySelectorAll('div');
if (vertical) {
div.forEach((item) => {
const firstClass: string = item.classList[0];
const verticalClass: string = `${firstClass}--vertical`;
item.classList.add(verticalClass);
});
}
return controlWrap;
};
const markup = (slider: HTMLElement) => {
interface Markup {
min: HTMLElement;
max: HTMLElement;
range: HTMLElement;
bar: HTMLElement;
minTab: HTMLElement;
maxTab: HTMLElement;
}
const markupSlider: Markup = {
min: slider.querySelector('.ts-slider__toggle--min'),
max: slider.querySelector('.ts-slider__toggle--max'),
range: slider.querySelector('.ts-slider__range'),
bar: slider.querySelector('.ts-slider__bar'),
minTab: slider.querySelector('.ts-slider__toggle-value--min'),
maxTab: slider.querySelector('.ts-slider__toggle-value--max'),
};
return markupSlider;
};
export {
initViewMarkup,
markup,
};
|
STACK_EDU
|
const readCsv = require("../src/readCsv");
test("Reads a CSV file", async () => {
const data = await readCsv(`${__dirname}/../sample-data/simple.csv`);
expect(data).toBe("name,age\nJohn,30");
});
test("Throws an error on non-existent file", async () => {
await expect(
readCsv(`${__dirname}/../sample-data/non-existant.csv`)
).rejects.toThrow("Cannot find the specified CSV file.");
});
test("Throws an error when passed a non-string file path", async () => {
await expect(readCsv(false)).rejects.toThrow(
'The "path" argument must be one of type string, Buffer, or URL. Received type boolean'
);
});
|
STACK_EDU
|
When to close for product recommendations
I do a lot of closures on the main Stack Overflow site so I've seen more than a few recommendation requests. While DIY is a different subject set, I've been increasingly baffled by people wanting to close questions for product recommendations for what is arguably an open request for a DIY solution.
Case in point: Central supports for bed (Close queue)
It even has this comment
VTC - Questions seeking product recommendations are off topic here
The problem is I don't see a product recommendation request. In fact, the accepted answer mentions no specific products at all. Compare it to this question, where the product recommendation is explicit.
We need to have a better standard of closure here. I am proposing we have this as our standard
Questions directly asking for a service or product should be closed
Questions describing a problem should be left open
I'm basing this on Community Manager guidance for Stack Overflow
"Recommendation question" is shorthand for "you didn't describe a problem, you just asked for a list of things."
Most DIY questions need a push in the right direction, and mentioning and linking products to a general question. If we're going to close all the DIY questions where a product would solve the problem we're not going to have many questions left to answer.
I almost completely agree. The issue is when the question is: "I need to solve XYZ; is there a product for that?", which is problematic when the OP suggests that they have no interest in "doing something" to solve it, they just want to buy something. I agree with you on the bed example. Maybe the test in those cases is whether the question can be tweaked to "how can I accomplish that" without changing the intent of the question.
I just tweaked the bed question wording so that it doesn't appear to be a product request.
My personal opinion is that product recommendations should be allowed on all SE sites because it's usually useful to others. The real test should be "is this question/answer set useful to more people than the OP". Sometimes the solution to a problem is to buy a product.
@Nick it's without a doubt useful for another, but until when? In 1 or 2 years, the product might have gone and replaced by different product rendering the answer obsolete. That's the problem with product recommendation question. Instead of recommending specific products, explain which kind of product with specific criteria that could help solve the problem. After that, mentioning a product example as a bonus is acceptable.
Questions on the site should be of the form
How do I solve this problem?
with sufficient detail about the problem of course.
Then the answer is how to solve that problem, be it
Do this procedure
or
Buy this product
etc.
This should be the same across the network, the only problem being that on some sites people abuse the system so questions that are clearly product recommendations tend to get shut down quite quickly.
I think we should be fine here as long as the questions can be rephrased as above.
My personal dividing line is if the question invites products as a potential answer, making users unfamiliar with the site likely to be accidentally labeled as a spammer. The site feels more inviting if new users aren't encouraged to post content that would get them banned from the site after a single mistake.
Even the first linked question I personally see as a product recommendation request (holding off on posting my own binding close vote in case I'm way off base). While the problem was stated, the insufficient solutions are two off the shelf products, and the question at the end was looking for an "option" which feels like a synonym for "product" in the context of the question.
|
STACK_EXCHANGE
|
How to use xpcall with a function which has parameters?
On this website there is an example on how to use xpcall on a function without parameters. But how can i use xpcall on a function like this:
function add (a, b)
return a + b
end
And it should get the return value.
This is my attempt (does not work, i get: false, error in error handling,
nil):
function f (a,b)
return a + b
end
function err (x)
print ("err called", x)
return "oh no!"
end
status, err, ret = xpcall (f, 1,2, err)
print (status)
print (err)
print (ret)
Rolled back to remove answer from question for reasons explained by @EtanReisner.
If you are using Lua 5.1, then you need to wrap your desired function call in another function (that takes no arguments) and use that in the call to xpcall.
local function f (a,b)
return a + b
end
local function err (x)
print ("err called", x)
return "oh no!"
end
local function pcallfun()
return f(1,2)
end
status, ret = xpcall (pcallfun, err)
print (status)
print (ret)
In Lua 5.2, 5.3, and 5.4 xpcall now accepts function arguments directly:
xpcall (f, msgh [, arg1, ···])
This function is similar to pcall, except that it sets a new message handler msgh.
So the call would be:
status, ret = xpcall (f, err, 1, 2)
in your sample code.
As a comment points out, LuaJIT also accepts optional arguments after the error-handling function. (LuaJIT essentially targets Lua 5.1, but it extends stock Lua 5.1 in several ways.)
Thank you for your solution! I also post an alternative solution on my main post in a sec.
Also note that LuaJIT, despite supporting 5.1, also allows you to pass arguments via xpcall.
@EdwardBlack Your "alternate" solution is what I pointed out in the 5.2 and 5.3 section of my answer. And you shouldn't put answers in your post. If you answer your own question then add an answer to your question. That's perfectly acceptable.
Your answer had no example. If i add an answer to my question then i can't format the code in the answer.
I didn't think I needed a literal code example given the snippet of example in the manual and the clearly applicable concept there (you almost had it in your test snippet for example).
You can always ask for one if you don't understand what the manual is saying. But that's not the point. The point is my answer covered it and even if it didn' you shouldn't edit it into your post. You should add your own answer.
I ofc understanded it, but provided an extra code example for others who don't... i don't know why you are so angry about another helpfull example, this site exists to help others, but i think most people only help to get points...
I'm not angry. Nothing in my response was at all angry. I was confused and wanted to make sure you'd understood that my answer had covered the solution you listed as "alternate" (in case you hadn't understood that). I also wanted to explain to you how one goes about correctly answering a question that they themselves has asked (by posting an answer not by updating the question).
Note: the following example works for Lua 5.2, 5.3, and 5.4. It also works for LuaJIT. However, as Etan Reiser explains in his answer, this code does not work for Lua 5.1. In Lua 5.1, xpcall accepts only two arguments: the function to call and an error handler. (LuaJIT supports Lua 5.1, but it extends stock Lua in several ways.) See Etan Reiser's answer if you want to support stock Lua 5.1.
function f (a,b)
return a + b
end
status, ret = xpcall (f, debug.traceback, 1, 5)
print (status)
print (ret)
|
STACK_EXCHANGE
|
ffmpeg draw a line from point to point
I'm needing to draw a line from point to point in ffmpeg. I don't see a drawline filter in ffmpeg, so I'd assume drawbox would need to be used (see command below). How could I adapt this to draw a diagonal line from, say, 10,10 to 500,500?
I've used pythagorean theorem to calculate how 'wide' the line needs to be, but that is as far as I've gotten:
ffmpeg -i input.mp4 -vf drawbox=x=10:y=10:w=692:h=1:color=red output.mp4
Thank you
In this specific case, since the line is at 45%, we can use the method given below.
ffmpeg -i in.mp4 -filter_complex
"color=red:s=490x490,geq=lum='p(X,Y))':a='if(eq(X,Y),255,0)'[c];
[0][c]overlay=10:10:shortest=1"
out.mp4
The GEQ filters allows one to manipulate individual pixels using expressions. If a line is at 45 deg, that means all points are on the line X = Y or X = -Y. The latter case is irrelevant here.
So, first a blank canvas is created. Its size is the coverage needed to draw the entire line (W = 500-10; H = 500-10). Then the GEQ sets all pixels with X = Y to opaque but all others to transparent. (The lum expression is needed due to a quirk of the filter design; all it does is retain the existing value of the three planes - luma & two chroma).
Then this output is overlaid with an offset of (10,10). The shortest is needed because the color/geq input never terminates.
For the general case of a line at an arbitrary degree, you would draw a straight line i.e. keep alpha of 255 for a single row i.e. 'if(eq(Y,100),255,0)', then use the rotate filter to get it to the correct angle. (The rotate padding should be fillcolor=anycolor@0). Then overlay that.
This works but seems to cause ffmpeg to burn a lot of CPU. Is there a more efficient way? I assume this is so slow because the geq expression is being evaluated for every pixel for every frame.
That seems particularly inefficient, especially in this case when there's no time element. drawtext or drawbox take up almost no CPU in comparison. Any ideas?
|
STACK_EXCHANGE
|
Project Handover
I'm afraid it is unlikely I'll be able to get back into this - I have several other projects I'm also struggling to find time for. Unfortunately, I'm not using this in my work either. I have neither an excuse / a need to improve it nor the time. :/
It should be clear to anyone that the rework branch work was arriving at a point where I was reinventing a loader system in order to account for Aurelia (at the time) having 3 separate ways to load dependencies (webpack, systemjs, requirejs). I came to the conclusion that the best solution would be to simply have all of this linting stuff done as a webpack plugin - that way the dependency resolution would be handled for me and this project could just focus on linting. A bridge too far and I lost the enthusiasm.
I suspect the vscode plugin is in far better shape / more useful than this project anyway. In the unlikely case that any of this code is still useful, I'll be happy to transfer it and/or transfer ownership of the npm package names if they are wanted.
@EisenbergEffect
Sad news, but understandable.
When i was doing FE development, i found it to be most useful Aurelia related tool - at that time VS code plugin wasn't even available, so i can't really compare these two (never used VS Code either).
@eriklieben, perhaps you could shed some light on the common and distinct linting related features of aurelia-template-lint vs Aurelia VSCode Extension. It would be fantastic, if VSCode extension could be refactored into "framework service" (like TypeScript language service), that could be consumed by Aurelia plugins for any IDE (or build tool for only linting purposes), not just for VSCode.
@MeirionHughes We're greatful for all the work you put into this. Let's hear from @eriklieben to see how he sees this with respect to the VS Code plugin. We'll probably end up accepting ownership of this in one way or another. I'd like to get Erik's thoughts.
Thank you for all the work you put into this!
To answer the questions above:
The visual studio code extension has a language server and a language client part; this is something which the vs code team created to create decoupled components that can be reused. So, in theory, we should be able to, for example, create a new client in SublimeText and reuse the same server language server. Except for some parts that are specific to the vs code editor that lives in the language server client now.
I've recently moved over the validations that were in the client language server part of the extension to the server part and did some refactoring to make creating validations easier. What we can do is migrate the current code to the vs code validations. And if there are any typescript validations move them to a TypeScript language service extension, so they can directly be reused in every editor that talks to TypeScript.
Or we can leave it as it is, and try to integrate it, but that might require parsing files once in the vs code extension and once in the lint tool.
I am currently also fighting a bit to find the time and first want to finish the intellisence/ smart autocomplete first. After that, I want to dive into this and get it better integrated or migrated.
Maybe we can fork or move the lint tool to the Aurelia GitHub, so we can find ways to provide support for it.
@MeirionHughes Do you want to just transfer this to the aurelia GitHub org? We can take the repo and then update the readme to indicate that we're going to work most of these ideas into the VS Code plugin. Thoughts?
@EisenbergEffect can do; seems I can't transfer unless you give me permission on Aurelia to make repositories; or I transfer directly to you?
I'll take it into my personal account and then move it to Aurelia. Thanks!
can be closed
|
GITHUB_ARCHIVE
|
But that doesn't mean that Haskell has a nice way to do absolutely everything that you might choose to do in Lisp.
For instance, let's take a look at data-directed programming. One of the sections in SICP discusses dynamically building a dispatch table for functions using the following calls:
(put <type> <op> <item>) installs <item> in the table entry indexed by <type> and <op>
(get <type> <op>) looks up the <type>, <op> entry in the table and returns the item found there. If no item is found, get returns nilBy using these calls we can create polymorphic functions. The first step is to define function real-part-rect and real-part-polar that will return the real part of a complex number (from rectangular form and polar form, respectively). The put function defined above is then used to place these into the dispatch table.
(defun real-part-rect ...If we have an object that responds to (type obj) with one of polar or rectangular and responds to (contents obj) with a complex value in either polar or rectangular form, then
(defun real-part-polar ...
(put 'polar 'real-part real-part-polar)
(put 'rectangular 'real-part real-part-rect)
(define (operate op obj)creates an environment where real-part is a polymorphic function that gets the real part of the complex number.
(let ((proc (get (type obj) op)))
(if (not (null? proc))
(proc (contents obj))
(error "Undefined operator for this type"))))
(define (real-part obj) (operate 'real-part obj))
I think it's pretty easy to see how this could be extended to dispatch on the type of more than one argument -- in other words, to create multi-methods. In fact, with a trie structure this could be used to dispatch on a variable number of types. That could be useful in handling default values for function arguments, or in creating a polymorphic lambda. The latter example points out that the name of the function might well be viewed as just another argument with some syntactic sugar on top.
This is an example of where Lisp dynamic typing shines, and Haskell's typing restrictions disallow some types of functionality...if there's a way to do this in Haskell, I am unaware of it.
As an aside, one of the things that has always rubbed me the wrong way about many OO languages is the need to attach every procedure to one and only one data object. If you want a method to multiply a complex matrix by a complex number, the method naturally belongs at the intersection of two different classes. Making it belong to one class or the other seems like forcing a square peg into a round hole.
*To the best of my knowledge. I'm definitely still learning.
|
OPCFW_CODE
|
Render different tabs based on condition (isLogged)
Version
Tell us which versions you are using:
react-native-router-flux v3.36.0
react-native v0.36.0
Can I render different tabs based on application state condition?
I have an application that has sections that are only visible to logged in users, I wish that some tabs appear only if the user is logged in. I tried to do this with the component Switch:
selectRouterTab(props) {
console.log('select tab', props);
return props.isLogged ? 'authenticated' : 'notAuthenticated';
}
render() {
const ConnectedSwitch =
connect(state => ({ isLogged: state.user.isLogged }))(Switch);
return (
<RouterWithRedux getSceneStyle={getSceneStyle}>
<Scene
key="root"
hideNavBar
hideTabBar
>
<Scene key="tabbar" tabs tabBarStyle={styles.tabBarStyle}
component={ConnectedSwitch}
unmountScenes={true}
selector={this.selectRouterTab}
>
<Scene key="notAuthenticated" tabs>
<Scene key="superHome" title="Home" icon={TabIcon}
iconTitle="home"
navigationBarStyle={styles.navigationBarStyle}
leftButtonStyle={styles.backButton}
titleStyle={styles.titleStyle}>
<Scene
key="home"
component={Home}
renderRightButton={this.renderIcon}
/>
<Scene
key="postDetail"
component={Detail}
title=""
navigationBarStyle={styles.navigationBarStyleDetail}
hideTabBar
/>
</Scene>
<Scene key="settings" title="" icon={TabIcon}
iconTitle="user">
<Scene
key="tab1_1"
component={Progress}
title="Settings"
/>
</Scene>
</Scene>
<Scene key="authenticated" tabs>
<Scene key="bookmark" component={Spinner} title="" icon={TabIcon}
iconTitle="bookmark">
</Scene>
<Scene key="notification" component={Examples} title="Notifiche" icon={TabIcon}
iconTitle="notification"/>
<Scene key="settings" title="" icon={TabIcon}
iconTitle="user">
<Scene
key="tab1_1"
component={Bar}
title="Settings"
/>
</Scene>
</Scene>
</Scene>
</Scene>
</RouterWithRedux>
);
}
however, I encountered a problem with this approach:
I had to repeat a scene settings and tab1_1, and this causes me the error console Key settings is already defined!
Is there a way to solve only show the tabs you want depending on a condition?
Even a different approach to achieve the desired result!
@dragfire thanks for the quick answer.
But it seems not to be the same situation.
When the application is launched the initial route is already within the TabBar. Also launching Action.home({type: 'REMOVE_TWO_TABS'}) in componentDidMount they remain displayed.
Only when I press another tab, children are removed.
Another problem is that when I go to another tab, Do I login and back of the home tab, the tab children no longer appear. It seems that this will be permanently erased.
Update ..
By using the method of removal of children through reducer I think definitely is not the correct solution.
The situation is this:
const reducerCreate = (params) => {
const defaultReducer = new Reducer(params);
return (state, action) => {
switch(action.type) {
case 'REMOVE_TWO_TABS':
const children = _.merge([], state.children);
if(children[0].children.length > 2)
children[0].children.splice(1, 2);
return _.assign({}, state, { children });
default:
return defaultReducer(state, action);
}
};
};
class AppNavigator extends Component {
render() {
<RouterWithRedux createReducer={reducerCreate} getSceneStyle={getSceneStyle}
navigationBarStyle={styles.navigationBarStyle}
leftButtonStyle={styles.backButton}
titleStyle={styles.titleStyle}>
<Scene
key="root"
hideNavBar
hideTabBar
>
<Scene key="tabbar" tabs tabBarStyle={styles.tabBarStyle} initial>
<Scene key="superHome" title="Home" icon={TabIcon}
iconTitle="home">
<Scene
key="home"
component={Home}
title="HOME"
renderRightButton={this.renderIcon}
/>
<Scene
key="postDetail"
component={Detail}
title=""
navigationBarStyle={styles.navigationBarStyleDetail}
hideTabBar
/>
</Scene>
<Scene key="bookmark" component={Bookmark} title="" icon={TabIcon}
iconTitle="bookmark">
</Scene>
<Scene key="notification" component={Notification} title="" icon={TabIcon}
iconTitle="notification"/>
<Scene key="settings" title="" icon={TabIcon}
iconTitle="user">
<Scene
key="tab1_1"
component={Settings}
title="Settings"
/>
</Scene>
</Scene>
</Scene>
</RouterWithRedux>
}
}
class Home extends Component {
componentWillMount() {
if(!this.props.isLogged) {
setTimeout(() => {
Actions.tabbar({ type: 'REMOVE_TWO_TABS' });
}, 0);
}
}
// ...
}
Even calling the action that removes type in componentDidMount nothing happens, unless use a setTimeout to delay the call to action REMOVE_TWO_TABS.
But this means that for a moment I see all tabs.
The second issue is that if the user logs in, I have to show all tabs, which means that when I remove the tabs I have to save them somewhere and when the user logs I have to put them ..
This does not seem a good behavior ..
So, there is a right way to hide / show tab depending on a condition?
I tried using the "switch" component and with this solution but I can not find answer.
I think what I want to achieve is not a strange thing!
Could you please tell me the solution to the problem?
Looking for a solution for this too.
You don't have to dispatch your action REMOVE_TWO_TABS in a component.
You can dispatch it before your LOGIN_SUCCESS_ACTION for instance, so you can remove the tab before to navigate to home scene.
When the user logout, you can re-initialise the scenes to restore all the tabs that you've previously deleted.
|
GITHUB_ARCHIVE
|
[Python-Dev] Version fields [was Re: Goodbye]
nad at acm.org
Fri Sep 24 12:40:10 CEST 2010
In article <4C9C6A6F.6010202 at netwok.org>,
Éric Araujo <merwok at netwok.org> wrote:
> How about revamping the type/versions fields?
> Issue type
> () Feature request (blocked by moratorium: () yes () no)
> () Bug (found in: 2.7 3.1 py3k)
> () Security bug (found in: 2.5 2.6 2.7 3.1 py3k)
> I’m getting tired of explaining the meaning of the versions field again
> and again, let’s put this information directly under the eyes of the bug
I believe there is another separate use case for the versions field that
hasn't been mentioned yet: for issues with "release blocker" priority,
the versions field is often used to identify the upcoming release for
which a resolution is deemed mandatory. However, having a single
versions field is not totally satisfactory for this, particularly when -
as has happened in the recent past - two different release cycles
overlap (say, Python 2.7.x and 3.2.y). A given issue may or may not
apply to both, it may be a "release blocker" for one or both, and, if
both, a fix may be checked-in for one branch but not the other. The
release managers for the two releases may end up using the one priority
field and the one set of version fields for conflicting purposes. It
certainly makes it more difficult to automate a tracking report of
exactly what are the release blocker issues for a specific release.
Besides adding fields to the database for an issue, there are other ways
to handle the ambiguity, of course. The simplest might be to just open
a separate duplicate issue for each additional release blocked.
Presumably that level of detail might only be needed in the endgame of a
release, beta or rc stages. It still places a restriction on the use of
the version field and possibly other fields. In issue workflow
documentation, there should be some description of how "release blocker"
should work, perhaps including something along the lines of "once a
release enters stage <x>, 'release blocker' priority should only be
changed with the approval of the release manager".
nad at acm.org
More information about the Python-Dev
|
OPCFW_CODE
|
Combined with DNS-over-TLS and DNS-over-HTTPS, Technitium DNS Server provides a good level security and privacy from network level DNS attacks and from adware. This makes it a must have tool if you are a privacy and security conscious person.
Technitium DNS Server is cross platform and works on Windows, Linux or macOS.
|Technitium DNS Server v2.0
How Does It Work?
The Ad blocking feature works using the DNS Sinkhole method. With this feature enabled, for all the blocked domain names, the DNS Server will respond with 0.0.0.0 IPv4 address and :: for IPv6 address making the Ads fail to load making the website you visit free from Ads. This can not only block Ads but also adware, malware, social networks, porn etc. based on the block lists you configure in settings.
On your computer, you need to install the DNS Server and configure your network adapter's DNS settings to use the locally hosted DNS server. Once this is done, you need to configure the Block List URL settings to start blocking Ads. Once the DNS Server loads the block lists, it would respond with 0.0.0.0 IP address for the blocked websites making them fail to load.
You may also install the DNS Server on any spare computer on your network and configure your home or office router with IP address of this spare computer as DNS server in DHCP settings. With this setup, all your computers and devices like mobile phones would use the installed DNS Server blocking Ads and malware domains on all devices without installing any additional software on them.
Configuring Block Lists
To enable Ad blocking, you need to configure Block List URLs in the settings. Known and popular block lists are already listed in the Quick Add drop down list from where you can just click and add those URLs.
|Technitium DNS Server Block List Configuration
If you are not sure, just select the Default option from the Quick Add drop down list and a default set of block list URLs would get configured.
Once done, click the Save Settings button at the bottom of the page to save the changes and start the block list download background process. These configured block lists are automatically downloaded every 24 hours to keep the DNS Server blocked zone updated.
If you have the DNS server installed directly on your computer then don't forget to configure your network adapter's DNS server settings to 127.0.0.1 (for IPv4) and ::1 (for IPv6). Without these network configuration changes, the DNS Server wont get any queries to respond to and things wont work as intended.
If you setup the DNS server to be used on the network by all devices then do configure your router's DHCP config and set the IP address of the computer running the DNS server as the DNS for your network. By configuring the router's DHCP, you don't need to manually configure any of your devices on the network.
|IPv4 DNS Server Network Configuration
|IPv6 DNS Server Network Configuration
Once the configuration is done, just check the Dashboard on the web console after a couple of minutes to see the number of blocked domains in the Blocked Zones widget. If there are too many block list URLs configured, it may take few more minutes for all of them to get downloaded and loaded.
If you have any further queries, do write them below as comments or send an email to email@example.com.
|
OPCFW_CODE
|
The following types of questions are representative of those that will appear on the February 16, 2006 exam. This is not an exhaustive list, merely a representative list.
- Definitions: You may be asked to define technical terms; conversely, you may be given technical definitions and asked to supply the corresponding terms.
Categorization of routing-problem instances: You may be given an instance (solved or unsolved) of the interdomain-routing problem and asked whether it falls into a category that was covered in class and in the assigned reading. For example, you could be given a solved instance similar to the one on slide 25 of http://www.cs.yale.edu/homes/jf/FRS.ppt and asked whether it satisfies the Gao-Rexford conditions. The answer in that case is yes. If node 1 had assigned a value to 17245d, the answer would be no: Export of provider route 7245d by 7 to its provider 1 would violate Gao-Rexford scoping constraints.
Similarly, you could be given the following unsolved instance and asked whether it is an instance of next-hop policy routing:
In this case, the answer is no, because node a values abd differently from the way it values abcd, even though these two routes have the same next hop.
You may be asked to explain the main contributions of various papers or results that are identified by authors' names. For example, you may be asked to explain how the Feigenbaum-Papadimitriou-Sami-Shenker formulation of the LCP routing problem improves on the Nisan-Ronen formulation or why the Parkes-Shneidman formulation improves on the Feigenbaum-Papadimitriou-Sami-Shenker formulation.
Complexity results and their relevance: You may be asked to state the computational complexity of one or more of the categories of interdomain routing that we studied and explain its practical relevance. The question may also ask you to define this category of instances. For example, you may be asked to define the forbidden-set routing problem (see http://www.cs.yale.edu/homes/jf/FKMS.pdf for the definition), to say what its computational complexity is (NP-hard to approximate within any factor - see Theorem 2 of http://www.cs.yale.edu/homes/jf/FKMS.pdf and think about why it applies), and to explain the practical relevance of this complexity result (forbidden-set routing policies, although they appear to be simple and natural, are not a realistic option for BGP-compatible computation of routes and VCG prices; if, as this NP-hardness result tells us, approximately optimal routing trees cannot even be found efficiently in a standard, centralized computational model, where communication complexity and incentive compatibility are not potential obstacles, then a fortiori they cannot be found efficiently in the BGP-computational model, where these potential obstacles exist).
You may be asked to apply and/or generalize the results or principles covered in class. For example, you may be asked to specify a VCG mechanism for a well-known problem (see, e.g., Problem 6 in http://zoo.cs.yale.edu/classes/cs455/2002/hw1.pdf and its solution in http://zoo.cs.yale.edu/classes/cs455/2002/hw1sol.pdf).
You may be given one or more interdomain-routing instances and asked to compute the results of running one or more of the algorithms we studied (e.g., the FPSS algorithm or the FRS algorithm) on these instances.
|
OPCFW_CODE
|
React Native is a powerful technology that enables you to build adaptable, low-cost solutions that are simple to update, maintain, and support. Continue reading if you’re contemplating React Native as the core technology for your app and need to find a React Native developer.
By level of seniority, a React Native developer should have the following hard skills. Before you hire a React Native developer, there are a few things you should look into. Models for collaborating with React Native developers, as well as where to look for them.
Let’s get this party started.
Hard to find qualities in a React Native developer
You can categorize candidates into three categories based on their abilities, experience, and expertise when looking for a React Native developer for your project: junior, medium, and senior.
Let’s take a look at what expertise junior, medium, and senior React Native developers should have to figure out which type of developer you’ll need to complete your project.
Junior React Native developer
A junior React Native developer should be able to perform the following tasks:
• Work with the React Native framework. The React Native framework, including its core components, APIs, and libraries, should be well-understood by a junior developer. They should also be aware of the fundamental React assumptions.
• Make navigation a priority. A junior developer should be familiar with the many types of app navigation, such as push, modal, and so on.
• Make use of Redux. Developers may use this state management solution to manage shared states across components easily and construct predictable programs that work in various situations. Different state management solutions are available to developers. Redux, on the other hand, is one of the most user-friendly and robust.
• Debug and test your program. Junior developers should understand the fundamentals of debugging and testing, as well as the tools that are utilized for these tasks.
Middle React Native Developer
A middle-level React Native developer should be able to do the following:
• Enhance performance. They should be familiar with the most prevalent ways of app performance optimization and provide suggestions on how to improve the performance of various types of apps.
• Create features that are specialized to mobile devices. For example, middle React Native experts must be able to work with cameras, microphones, GPS sensors, and gyroscopes, among other things.
• They must also understand how to incorporate these hardware characteristics into an app and deal with the difficulties that may arise when doing so.
• Offline data storage and caching A middle developer should be familiar with various database types (relational and non-relational) and the libraries used to work with them.
• They must grasp all of the complexities of offline data caching and be able to distinguish between when it is appropriate to use pre-written code and when it is ideal for writing caching logic from scratch.
• Connect to external services.
• Middle React Native developers should be knowledgeable with the intricacies of integrating third-party services (Google, Facebook, PayPal) and how to use their APIs to increase the functionality of mobile apps.
• Apps are signed and deployed. Therefore, a middle developer should be knowledgeable about signing an app and where and how to deploy an app to Google Play and the App Store.
Senior React Native Developer
A senior React Native developer should be able to do the following:
• Create native applications. A developer must be familiar with the languages used for native app development, such as Objective-C, Java, Swift, and Kotlin, to properly migrate an existing Android or iOS project to React Native.
• To create full-featured, rich products, integrate native libraries and frameworks into React Native apps.
• Create a continuous integration and delivery system. CI/CD improves code quality, enables early detection and resolution of bugs, and improves the transparency and visibility of the development process.
• Mentoring and establishing a product development team workflow
• A senior developer should be able to explain the differences between multiple software development methodologies, justify the use of one for a specific project, and assist the team in transitioning to a new and more efficient workflow as appropriate.
• Participate in research and conversations, as well as provide architectural ideas for future application development.
• The architecture of an app must be planned ahead of time to meet the project’s business and technological objectives. The architecture of the app is the responsibility of a senior developer, as it is one of the most important decisions for a project.
A few things to keep in mind when hiring a React Native developer:
Soft talents are just as significant as hard skills in describing a candidate’s personality. Moreover, soft skills, according to some HR managers, are even more crucial than hard skills for project success.
Curiosity, helpfulness, communication, teamwork, problem-solving, and accountability are some of the soft qualities a software engineer should have.
Finding someone who is a cultural fit for your organization entails finding someone who shares your firm’s values and has a similar perspective.
A developer’s portfolio represents their knowledge and skills. Even inexperienced developers should create at least a few apps to polish their skills. To demonstrate their abilities, many professionals include a link to their GitHub account in their CV. Some may even brag about creating libraries that other programmers utilize in their work.
The perfect candidate for your project is someone who possesses all of the necessary talents. You can recruit a react native developer if your project is growing and you already have a team of developers that can help and mentor a newbie. However, if you don’t have any React Native experts on your team, you’ll need someone with more seniority.
We described three working models with React Native developers and showed you where to find these specialists in this article. We explained how the expected skills, experiences, and responsibilities differ for junior, middle, and senior React Native developers, described three models of working with React Native developers and showed you where to find these specialists.
|
OPCFW_CODE
|
MPI allows to create a new communicator by splitting an existing one into a sub-communicator, which can make our program dynamically select a subset of computing nodes to involve in the collective communication operations, such as all-reduce and all-gather operations. NCCL also has a similar feature, but it is not well-documented yet.
Since NCCL relies on MPI to run on multiple nodes, the following example code is based on MPI Programming Model. Assume there are 4 CUDA GPUs and 4 corresponding MPI ranks. This code performs all-reduce operation within the first two and the last two ranks simultaneously.
This code can be compiled and run on my machine with these commands,
nvcc -ccbin mpic++ test.cu -o test -L/usr/local/cuda/lib -lnccl
nvccto compile MPI code is not a common practice. It is recommended to compile it with
mpic++from a CUDA-Aware MPI variant.
The output of this program should be,
[rank0] result: 1 # 0 + 1 = 1
The key is
ncclCommInitRank. Suppose only a subset of ranks initializes the communicator with the same unique ID belonging to one of them. In that case, this communicator will ignore other ranks that are not in this subset.
Usage of ncclCommInitRank
Official API explanation:
1 ncclResult_t ncclCommInitRank(ncclComm_t *comm, int nranks, ncclUniqueId commId, int rank)
Creates a new communicator (multi thread/process version). rank must be between
nranks-1and unique within a communicator clique. Each rank is associated to a CUDA device, which has to be set before calling
ncclCommInitRankimplicitly synchronizes with other ranks, so it must be called by different threads/processes or use
In addition to the official instructions, we should also know,
- Each unique ID should only be used once.
ncclGetUniqueIdcan be invoked multiple times, and it will return a different unique ID each time. Meanwhile, the unique ID generated before is still working.
- It is safe to communicate within disjoint subsets of nodes simultaneously.
- Using NCCL to perform inter-GPU communication concurrently with CUDA-aware MPI may create deadlocks.
Moreover, I also evaluate the influence on performance bring by sub-grouping.
The testbed is,
g4dn.metalinstance with 8x NVIDIA Tesla T4 GPUs.
- Shipped with AWS Deep Learning AMI
- OS: Ubuntu 18.04 (Kernel Version: Linux 5.4)
- CUDA Toolkit: 11.0 (Driver Version: 450.119.03 )
First of all, I would like to emphasize the GPU topology of this bare-metal machine.
Note: We should extract the topology information from physical machines instead of virtual machines since the hypervisor may fuzz the result due to security reasons.
# nvidia-smi topo -m
It looks like a balanced tree topology. We could expect two neighbor GPUs will have higher communication efficiency.
The result below is measured on the root rank, and each experiment is repeated 5 times. Meanwhile, the environment
CUDA_VISIBLE_DEVICES was set to reorder GPUs binded to MPI ranks. CPU binding remains unset.
And the meaning of the notations on communicators is,
0/1: Only one communicator performing all-reduce on physical GPU 0/1.
0/1 + 2/3: Two communicators are working at the same time, and each of them perform all-reduce on two GPUs independently.
0-7: Equivalent to
From the result above, we can conclude that,
- GPUs are working at PCIe Gen3 x8 mode as the PCIe Switch splits one PCIe x16 slot into two x8 slots.
- Double checked by
nvidia-smi --query-gpu=pcie.link.gen.current --format=csvand
sudo lspci -vvv
- Double checked by
- The GPU Topology will significantly affect the performance of all-reduce.
- The topology that NVIDIA DGX adopt should obviously accelerate collective communication operations.
- The interference between two concurrent communicators is not quite noticeable.
- UPI bus is not a bottleneck when two PCIe Gen3 x16 devices (PCIe Switches) transmit a large data chunk over UPI bus.
|
OPCFW_CODE
|
By Alexandre Gonfalonieri, AI Advisor.
After a number of AI tasks, I spotted that deploying Machine Studying (ML) fashions at scale is among the most necessary challenges for corporations prepared to create worth by means of AI, and as fashions get extra advanced it’s solely getting tougher.
Primarily based on my expertise as a guide, solely a really small proportion of ML tasks make it to manufacturing. An AI undertaking can fail for a lot of causes, amongst which is deployment. It’s key for each decision-maker to totally perceive how deployment works and tips on how to scale back the chance of failure when reaching this important step.
A deployed mannequin may be outlined as any unit of code that’s seamlessly built-in right into a manufacturing setting and might soak up an enter and return an output.
I’ve seen that so as to get their work into manufacturing, knowledge scientists sometimes should hand over his or her knowledge mannequin to engineering for implementation. And it’s throughout this step that a few of the commonest knowledge science issues seem.
Machine Studying has a number of distinctive options that make deploying it at scale tougher. That is a few of the points we’re coping with (others exist):
Managing Information Science Languages
As it’s possible you’ll know, ML functions typically comprise of parts written in several programming languages that don’t at all times work together properly with one another. I’ve seen, many instances, an ML pipeline that begins in R, continues in Python and ends in one other language.
Normally, Python and R are by far the preferred languages for ML functions, however I seen that manufacturing fashions are not often deployed in these languages for numerous causes together with velocity. Porting a Python or R mannequin right into a manufacturing language like C++ or Java is sophisticated, and infrequently leads to lowered efficiency (velocity, accuracy, and so on.) of the unique mannequin.
R packages can break when new variations of the software program come out). As well as, R is gradual, and it’s not going to churn by means of large knowledge effectively.
It’s an awesome language for prototyping, because it permits for straightforward interactions and problem-solving, nevertheless it must be translated to Python or C++ or Java for manufacturing.
Containerization applied sciences, equivalent to Docker, can remedy incompatibility and portability challenges launched by the multitude of instruments. Nonetheless, automated dependency checking, error checking, testing, and construct instruments won’t be able to sort out issues throughout the language barrier.
Reproducibility can also be a problem. Certainly, knowledge scientists could construct many variations of a mannequin, every utilizing completely different programming languages, libraries or completely different variations of the identical library. It’s troublesome to trace these dependencies manually. To unravel these challenges, an ML lifecycle instrument is required that may mechanically monitor and log these dependencies in the course of the coaching part as configuration as code and later bundle them together with the skilled mannequin in a ready-to-deploy artifact.
I might advocate you depend on a instrument or platform that may immediately translate code from one language to a different or permit your knowledge science staff to deploy fashions behind an API to allow them to be built-in anyplace.
Compute Energy and GPU’s
Neural nets are sometimes very deep, which signifies that coaching and utilizing them for inference takes up quite a lot of compute energy. Often, we would like our algorithms to run quick for lots of customers, and that may be an impediment.
Furthermore, many manufacturing ML right now depends on GPUs. Nonetheless, they’re scarce and costly, which simply provides one other layer of complexity to the duty of scaling ML.
One other attention-grabbing problem of mannequin deployment is the dearth of portability. I seen that it’s typically an issue with legacy analytics methods. Missing the aptitude to simply migrate a software program part to a different host setting and run it there, organizations can develop into locked into a selected platform. This could create obstacles for knowledge scientists when creating fashions and deploying them.
Scalability is an actual concern for a lot of AI tasks. Certainly, it is advisable to make it possible for your fashions will be capable to scale and meet will increase in efficiency and software demand in manufacturing. Originally of a undertaking, we normally depend on comparatively static knowledge on a manageable scale. Because the mannequin strikes ahead to manufacturing, it’s sometimes uncovered to bigger volumes of information and knowledge transport modes. Your staff will want a number of instruments to each monitor and remedy the efficiency and scalability challenges that can present up over time.
I consider that problems with scalability may be solved by adopting a constant, microservices-based method to manufacturing analytics. Groups ought to be capable to shortly migrate fashions from batch to on-demand to streaming by way of easy configuration modifications. Equally, groups ought to have choices to scale compute and reminiscence footprints to help extra advanced workloads.
Machine Studying Compute Works In Spikes
As soon as your algorithms are skilled, they’re not at all times used — your customers will solely name them once they want them.
That may imply that you just’re solely supporting 100 API calls at 8:00 AM, however 10,000 at 8:30 AM.
From expertise, I can let you know that scaling up and down whereas ensuring to not pay for servers you don’t want is a problem.
For all these causes, only some knowledge science tasks find yourself truly making it into manufacturing methods.
Robustify to Operationalize
We at all times spend quite a lot of time making an attempt to make our mannequin prepared. Robustifying a mannequin consists of taking a prototype and getting ready it in order that it could possibly truly serve the variety of customers in query, which frequently requires appreciable quantities of labor.
In lots of instances, all the mannequin must be re-coded in a language appropriate for the structure in place. That time alone may be very typically a supply of huge and painful work, which ends up in many months’ value of delays in deployment. As soon as achieved, it needs to be built-in into the corporate’s IT structure, with all of the libraries points beforehand mentioned. Add to that the usually troublesome process of accessing knowledge the place it sits in manufacturing, typically burdened with technical and/or organizational knowledge silos.
Throughout my tasks, I additionally seen the next points:
- If we have now an enter characteristic that we modify, then the significance, weights or use of the remaining options could all change as properly or not. ML methods should be designed in order that characteristic engineering and choice modifications are simply tracked.
- When fashions are consistently iterated on and subtly modified, monitoring config updates whereas sustaining config readability and suppleness turns into a further burden.
- Some knowledge inputs can change over time. We’d like a technique to perceive and monitor these modifications to have the ability to perceive our system totally.
- A number of issues can go fallacious in ML functions that won’t be recognized by conventional unit/integration assessments. Deploying the fallacious model of a mannequin, forgetting a characteristic, and coaching on an outdated dataset are only a few examples.
Testing & Validation Points
As it’s possible you’ll already know, fashions evolve repeatedly as a consequence of knowledge modifications, new strategies, and so on. As a consequence, each time such a change occurs, we should re-validate the mannequin efficiency. These validations steps introduce a number of challenges:
Aside from the validation of fashions in offline assessments, assessing the efficiency of fashions in manufacturing is essential. Often, we plan this within the deployment technique and monitoring sections.
ML fashions must be up to date extra continuously than common software program functions.
Automated ML Platform
A few of you might need heard about automated machine studying platforms. It may very well be a great answer to supply fashions quicker. Moreover, the platform can help the event and comparability of a number of fashions, so the enterprise can select the one mannequin that most closely fits their necessities for predictive accuracy, latency, and compute sources.
As many as 90% of all enterprise ML fashions may be developed mechanically. Information scientists may be engaged to work with enterprise individuals to develop the small proportion of fashions at present past the attain of automation.
Many fashions expertise drift (degrading in efficiency over time). As such, deployed fashions must be monitored. Every deployed mannequin ought to log all inputs, outputs, and exceptions. A mannequin deployment platform wants to supply for log storage and mannequin efficiency visualization. Maintaining a detailed eye on mannequin efficiency is essential to successfully managing the lifecycle of a machine studying mannequin.
Key parts that should be monitored by means of a deployment platform.
Discover the various alternative ways to deploy your software program (this is a great long read on the topic), with “shadow mode” and “Canary” deployments being notably helpful for ML functions. In “Shadow Mode,” you seize the inputs and predictions of a brand new mannequin in manufacturing with out truly serving these predictions. As an alternative, you’re free to research the outcomes, with no vital penalties if a bug is detected.
As your structure matures, look to allow gradual or “Canary” releases. Such a follow is when you’ll be able to launch to a small fraction of shoppers, reasonably than “all or nothing.” This requires extra mature tooling, nevertheless it minimizes errors once they occur.
Machine studying remains to be in its early phases. Certainly, each software program and hardware parts are consistently evolving to satisfy the present calls for of ML.
Docker/Kubernetes and micro-services structure may be employed to resolve the heterogeneity and infrastructure challenges. Current instruments can drastically remedy some issues individually. I consider that bringing all these instruments collectively to operationalize ML is the most important problem right now.
Deploying Machine Studying is and can proceed to be troublesome, and that’s only a actuality that organizations are going to want to cope with. Fortunately although, a number of new architectures and merchandise are serving to knowledge scientists. Furthermore, as extra corporations are scaling knowledge science operations, they’re additionally implementing instruments that make mannequin deployment simpler.
Original. Reposted with permission.
Bio: Alexandre Gonfalonieri is an AI guide and writes extensively about AI.
|
OPCFW_CODE
|
Statsig’s dashboards are the most effective way to consume, share, and save the insights that matter most for your product. Metrics Explorer is how you start creating insights; dashboards are where those insights can live in an ongoing capacity to be absorbed by the entire team.
Creating a Dashboard
There are two ways to create a dashboard.
- Navigate to the Dashboards tab and click Create.
- You can also create a Dashboard directly from Metrics Explorer. To do this, once you have finished building a chart:
- Click the “…” button and choose “Export to Dashboard”.
- Click the Dashboard Destination button, name the Chart
- Select “Create New Dashboard”.
- Finally give your new Dashboard a name.
- Note, charts created with a “Last x days” date-range will continue to update on the dashboard as time goes on.
Adding Charts, Feature Gates, and Experiments to a Dashboard
Dashboards are designed to help teams share and absorb product insights of all types. To that end, it is possible to add both Metric Charts to a dashboard as well as keep track of ongoing A/B tests & Experiments, as well as the impact of feature launches.
There are several types of dashboard widgets you can add or create including:
- Charts Create a new chart directly from a dashboard or export a chart created in Metrics Explorer to a dashboard. Supported charts include:
- Drilldown Charts
- Retention Charts
- Funnel Charts
- User Journey Charts
- Text: Annotate dashboards with context or create section headers for better readability.
- Single Value: Highlight a hero metric with clarity by adding a single value to the dashboard.
- Experiment, Feature Gate: Get a quick snapshot of an experiment or feature gate.
- Funnel Metrics: Visualize custom funnel metrics.
To add a widget to a dashboard:
- Click the “Add Widget” button
- Select the type of widget you would like to add
- Configure the widget. E.g. select a chart type and then select events and metrics you want to track.
- Save the widget to the dashboard.
Organizing a Dashboard
A well-made dashboard helps easily convey a narrative around what information is most important and the relationship between items on the dashboard. To facilitate this, we make it easy to move and resize dashboard widgets in place.
Each dashboard is constructed as a grid over which you you can place, move, and resize dashboard widgets. Move dashboard items around the grid by placing your mouse on empty space in the widget, and then click and hold to drag the widget around.
Resize the the widget by clicking and holding the the bottom right edge of the widget dragging to the desire size.
Viewing and Exploring a Chart in a Dashboard
All of the charts we support in Metrics Explorer can be added to a dashboard. In addition, dashboard charts are not static.
To dive into a chart on the dashboard, click [ ] icon. Once expanded, you have the full power of Metrics Explorer. Charts can be modified just as you would in Metrics Explorer for further exploration without making a persisted change to the dashboard itself.
Dashboard Date Ranges
The charts and widgets on a dashboard are by default synced with the date range of the dashboard itself. You can change the date range of the entire dashboard by modifying the date range. You can view widgets in custom date ranges by expanding the chart / widget and modifying its date range.
Editing a Chart in Dashboard
If you want to persist a change to a chart on a Dashboard, you can click the pencil icon. You can then make any changes you would like to the chart, including changing the overall query, the date range, or updating metadata such as the chart title or description. Once finished, click the Save button to persist the changes.
Once you’ve created a chart you may want to quickly find the charts that matter to you. Heading to the Dashboards tab will give you several ways to find a dashboard.
- Find dashboards you’ve created quickly by navigating to the Dashboard tab, clicking into the search box, and selecting “My dashboards”.
- Navigate to the Dashboard tab and click the filter icon to scope to Dashboards with specific tags or created by specific team members.
- Navigate to the Dashboard tab and simply search for the name of the dashboard you would like to find.
- Anywhere within Statsig you can bring up global search with “cmd+k” and type in the name of the dashboard.
Automatically Generating Dashboards
To make creating dashboards easier, we provide the ability to automatically generate dashboards based on certain entity metadata. For example, you can create a dashboard that syncs all metrics, experiments, or feature gates with a specific tag. This will automatically create the dashboard and add any new entities created with that tag to the dashboard.
Dashboards that are generated via syncing entities are indicated with sync icon.
|
OPCFW_CODE
|
Yes, it is an odd name. It is also the coolest release of Ubuntu yet. I have opened a terminal a total of 0 times and I have a running system with most of my setup tasks satisfied.
- In a controversial move, Mr. Shuttleworth (founder of Ubuntu) has decided to actually make it easy to play multimedia. On a fresh installation, Firefox defaults to a webpage stored on your computer with links to official documentation, community docs, and the web forum. If you click on either documentation, you are taken to a page with information about multimedia support. You will be told the official Ubuntu repositories has a meta package for all of the “gstreamer” codecs, flash, and java packages. Check its box, click “Apply” and enjoy. Optionally, Windows codecs are also available. Why the don’t put them in the meta package, I have no idea.
- Alternately, you could not read these instructions, jump the gun, and just double-click a multimedia file. In the past, Totem would happily report that it has no idea how to play the file. Now it catches itself, and asks for permission to install the missing codec packages.
- NTFS volumes can now be mounted with write permissions in just a few clicks. The tool is called “ntfs-3g configuration”. I hope that this will be scheduled for inclusion by default, however it currently is not.
- A new indexing application named Tracker is installed by default. It is so lightweight, I didn’t even realize that it was running. It is less exhaustive than Beagle, but the performance is worth it for some applications. This is also integrated in with the Deskbar applet.
- Network manager is now the configuration applet by default, and with one package has Microsoft VPN support.
- Significant work has been done on 3D acceleration. Although short of being enabled out of the box with the ATI binary driver, installation was moderately easy. There is a “fglrx” package in the repositories. You can find it by searching for “ATI Radeon”. You can install this, but it doesn’t complete the process required for direct rendering. You still will have to manually invoke “depmod -a”, and “aticonfig …”, then restart your machine. IMO, this should be handled by the package installation.
What else is scheduled for inclusion?
Many other specifications look promising for Feisty. Of particular note is the “bullet-proof-x” specification, that will allow the Xserver to gracefully fall back to more and more generic configurations. The goal is to prevent the Xserver from ever crashing – even on exotic hardware. The current print configuration system is going to be completely replaced by a port of printerdrake – originally for Mandrake. This promises better automatic discovery, and plug-and-print. Currently gnome-cups-manager is a mess in Ubuntu. Thanks to patches in the gnome-vfs packages, printers are no longer even displayed in Nautilus.
So its all good right?
There is still much work to be done. I want my printers back damn it! I am not sure what rubbed the maintainer the wrong way about displaying non-file objects in the Nautilus file browser, but he just cut off the best way to add a printer. I have gone so far as to receive a patch upon request from Frederick from the OpenSuSE project to apply against the gnome-vfs source packages to change this back.
Also, I was expecting fglrx to be installed and running by default. It is logical to assume that most people want 3D acceleration, and this will be one of the first things they will fight with. Just take care of this. I know its a crappy binary from an unresponsive company, but if the community can get it in the repository as a .deb, they can have it installed by default. With bullet-proof-x, the chances of this screwing up a system installation should be non-existent.
Its looks as though Ubuntu will be moving to the Slab menu for Feisty. This integrates in nicely with the new “Control Center” that has been added in the latest version of Gnome. Oddly though, Tracker, the default indexing client does not take advantage of Slab in the same manner as Beagle. Hopefully this will change in th near future.
|
OPCFW_CODE
|
Managed to unbrick it:
1) Ran/flashed the 4.4.2 update here, with Nand erase all and Phone Bootloader Update (I think).
2) Restarted, and went back to download mode. Ran the "AT&T I317 ODIN 4.4.2 UCUCNE5 Bootloader & tz.img" bootloader. (Second post in thread.) Got stuck on the AT&T circular logo.
3) Restarted and went successfully into RECOVERY MODE!
(Model: AT&T SGH-I317)
I was trying to reset my phone yesterday and today to stock so that I could update to 4.4.2. (had "accidentally" deleted email program, so no longer stock and OTA wouldn't work)
I finally managed to flash from sammobile's 4.1.1 stock rom, then start an upgrade via OTA. The upgrade downloaded and started installing... it didn't kick me out at 25%, but it went to 100% and then got stuck on the first boot screen ("Samsung Galaxy Note II"). (edit: subsequent posts on http://forum.xda-developers.com/show....php?t=2618447 say how to handle this..) Since then I have tried flashing with 4.1.1 again, 4.1.2, 4.3, 4.4.2, clockwork mod, philz's mod, etcetc: nothing works. Factory reset doesn't work, safe mode doesn't work, phone is stuck on boot screen.
Of course I tried Kies, also. Kies 1 doesn't work, 2.6 tells me it's the wrong version after getting past all the warning screens (or was that Kies 1?), and Kies 3 greys out and hangs after getting past the warning screens.
The only thing I can do is get to the Odin download mode.
Funny thing that may help diagnose: when I plug the phone in to the PC via USB, it automatically boots up -- to the first loading screen.
I was thinking of trying the Nand erase all method, in conjunction with this:
(edit: just realized all these bootchains apparently useless for Note 2s)
I downloaded and tried both the VRALEC bootchain and the VRALF bootchain on Odin 3.09:
* The VRALEC bootchain runs and gives me a nice full blue bar, but gets stuck on "NAND Write Start!". (is this normal?)
* The VRALF bootchain gives me "md5 error! Binary is invalid".
Edit: I actually got the VRALF one to work by renaming it "BOOTLOADER-I535VRALF2-618049-REV09-user-low-ship.tar.md5". Here is a log of what happened. This is similar to the VRALEC one:
<OSM> Enter CS for MD5..
<OSM> Check MD5.. Do not unplug the cable..
<OSM> Please wait..
<OSM> BOOTLOADER-I535VRALF2-618049-REV09-user-low-ship.tar.md5 is valid.
<OSM> Checking MD5 finished Sucessfully..
<OSM> Leave CS..
<ID:0/003> Odin v.3 engine (ID:3)..
<ID:0/003> File analysis..
<ID:0/003> Get PIT for mapping..
<ID:0/003> Firmware update start..
<ID:0/003> NAND Write Start!!
I don't know what to do next. Any ideas appreciated.
|
OPCFW_CODE
|
Crash in GIDSignInButton, iOS 9.x
Hi all!
We integrate GoogleSignIn (v. 4.x) into our app and all work well, but sometimes it is crashed in GoogleSignIn part. We only receive crash-reports (fabric) about this crash and unfortunately can not reproduce it by ourself. May be somebody faced with this issue... any help would be appreciated! Thank you in advance!
Stacktrace of the crash:
Fatal Exception: NSInternalInconsistencyException
0 CoreFoundation 0x182a0ee38 __exceptionPreprocess
1 libobjc.A.dylib 0x182073f80 objc_exception_throw
2 CoreFoundation 0x182a0ed08 +[NSException raise:format:]
3 Foundation 0x183394124 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:]
4 OwnApp 0x1003903b8 -[GIDSignIn presentViewController:] (GIDSignIn.m:1410)
5 OwnApp 0x10038c230 -[GIDSignIn authenticateWithOptions:appSwitchConfig:] (GIDSignIn.m:669)
6 OwnApp 0x10038d7cc __50-[GIDSignIn authenticateInteractivelyWithOptions:]_block_invoke (GIDSignIn.m:938)
7 OwnApp 0x10038b288 __43-[GIDSignIn fetchRuntimeConfigWithHandler:]_block_invoke (GIDSignIn.m:475)
8 OwnApp 0x1003899b4 __54-[GIDRuntimeConfigFetcher fetchWithURLString:history:]_block_invoke (GIDRuntimeConfigFetcher.m:94)
9 OwnApp 0x10036e288 __76-[GSDK_GTMSessionFetcher invokeFetchCallbacksOnCallbackQueueWithData:error:]_block_invoke
10 libdispatch.dylib 0x1824594bc _dispatch_call_block_and_release
11 libdispatch.dylib 0x18245947c _dispatch_client_callout
12 libdispatch.dylib 0x18245eb84 _dispatch_main_queue_callback_4CF
13 CoreFoundation 0x1829c4dd8 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
14 CoreFoundation 0x1829c2c40 __CFRunLoopRun
15 CoreFoundation 0x1828ecd10 CFRunLoopRunSpecific
16 GraphicsServices 0x1841d4088 GSEventRunModal
17 UIKit 0x187bc1f70 UIApplicationMain
18 OwnApp 0x1000bd484 main (main.m:13)
19 libdispatch.dylib 0x18248a8b8 (Missing)
That assert is that it was unable to prevent the view controller. The stack trace there looks like its being triggered after the config was fetched remotely, and by the time the completion block was run I am guessing the uidelegate was not valid.
To speculate a bit: It feels like a possible scenario here is something like poor network connectivity - e.g. someone hits sign in and goes in to a tunnel, the connection takes forever, and the code asserts when the response comes back as the user has navigated elsewhere in the app.
How frequently do you see these?
Hi Ian!
Thank you for the answer and help!
Seems that the crash is caused by a bug in our code - we set [GIDSignIn sharedInstance].uiDelegate in a view controller, but don't unset it when this view controller is released. Just our fault :(
Ian, thank you again and good luck!
@apozdeyev I am not sure why you will have to unset since it is a weak reference. I have the same crash happening in my application with not much info available.
Here is a link for Crashlytics: http://crashes.to/s/89a9f34b252
@apozdeyev Can you provide more information how you solve this problem. We face the same problem now. And I think @Ankit8946 is right that we dont need to unset the uidelegate when the controller disappeared.
Hi, @hijamoya! I just unset uiDelegate and by some magic this crash has not reproduced from that time.
@apozdeyev do you mean set it to nil?
@hijamoya, yes.
|
GITHUB_ARCHIVE
|
Include the standard header <mutex> to define the classes
recursive_timed_mutex; the templates
unique_lock; and supporting types and functions that define mutual-exclusion code regions.
Beginning in Visual Studio 2015, the C++ Standard Library synchronization types are based on Windows synchronization primitives and no longer use ConcRT (except when the target platform is Windows XP). The types defined in <mutex> should not be used with any ConcRT types or functions.
In code that is compiled by using /clr, this header is blocked.
recursive_mutex are mutex types. A mutex type has a default constructor and a destructor that does not throw exceptions. These objects have methods that provide mutual exclusion when multiple threads try to lock the same object. Specifically, a mutex type contains the methods
lockmethod blocks the calling thread until the thread obtains ownership of the mutex. Its return value is ignored.
try_lockmethod tries to obtain ownership of the mutex without blocking. Its return type is convertible to bool and is true if the method obtains ownership, but is otherwise false.
unlockmethod releases the ownership of the mutex from the calling thread.
You can use mutex types as type arguments to instantiate the templates
unique_lock. You can use objects of these types as the
Lock argument to the wait member functions in the template condition_variable_any.
A timed mutex type satisfies the requirements for a mutex type. In addition, it has the
try_lock_until methods that must be callable by using one argument and must return a type that is convertible to bool. A timed mutex type can define these functions by using additional arguments, provided that those additional arguments all have default values.
try_lock_formethod must be callable by using one argument,
Rel_time, whose type is an instantiation of chrono::duration. The method tries to obtain ownership of the mutex, but returns within the time that is designated by
Rel_time, regardless of success. The return value converts to true if the method obtains ownership; otherwise, the return value converts to false.
try_lock_untilmethod must be callable by using one argument,
Abs_time, whose type is an instantiation of chrono::time_point. The method tries to obtain ownership of the mutex, but returns no later than the time that is designated by
Abs_time, regardless of success. The return value converts to true if the method obtains ownership; otherwise, the return value converts to false.
A mutex type is also known as a lockable type. If it does not provide the member function
try_lock, it is a basic lockable type. A timed mutex type is also known as a timed lockable type.
|lock_guard Class||Represents a template that can be instantiated to create an object whose destructor unlocks a
|mutex Class (C++ Standard Library)||Represents a mutex type. Use objects of this type to enforce mutual exclusion within a program.|
|recursive_mutex Class||Represents a mutex type. In constrast to the
|recursive_timed_mutex Class||Represents a timed mutex type. Use objects of this type to enforce mutual exclusion that has time-limited blocking within a program. Unlike objects of type
|timed_mutex Class||Represents a timed mutex type. Use objects of this type to enforce mutual exclusion that has time-limited blocking within a program.|
|unique_lock Class||Represents a template that can be instantiated to create objects that manage the locking and unlocking of a
|call_once||Provides a mechanism for calling a specified callable object exactly once during execution.|
|lock||Attempts to lock all arguments without deadlock.|
|adopt_lock_t Structure||Represents a type that is used to define an
|defer_lock_t Structure||Represents a type that defines a
|once_flag Structure||Represents a struct that is used with the template function
|try_to_lock_t Structure||Represents a struct that defines a
|adopt_lock||Represents an object that can be passed to constructors for
|defer_lock||Represents an object that can be passed to the constructor for
|try_to_lock||Represents an object that can be passed to the constructor for
|
OPCFW_CODE
|
Ever wanted to run your own Linux server? If you’re already using a Raspberry Pi, then this is the perfect project for you.
The “anydesk not working on raspberry pi” is a problem that has been present for a while. There are many solutions to this issue, but the one I recommend is to use the instructions in the link below.
I have previously tried numerous remote access options on this site, but AnyDesk wasn’t really a thing at the time. It’s time to put it on the to-do list. AnyDesk enables you to remotely access your computer or other devices from across the room or across the nation. The Raspberry Pi is one of the operating systems that is supported.
To install AnyDesk on a Raspberry Pi, follow these steps:
- On the official website, download the Raspberry Pi AnyDesk package file.
- To begin the installation, double-click on the package.
- To complete the installation, enter the raspberry pi password.
In the menu bar, a new icon will appear. AnyDesk is now up and running!
There are a few options to consider after the software is installed. Continue reading for more details and detailed instructions!
If you want to get started with the Raspberry Pi fast, check out my e-book. It’s a 30-day challenge in which you learn something new every day until you’ve mastered the Raspberry Pi. The first third of the book covers the fundamentals, but the latter chapters feature tasks that you may do on your own.
Step-by-Step Installation of AnyDesk
The Pi Glossary is available for download. If you’re confused by all the new terms and acronyms, get my free Raspberry Pi dictionary (PDF format)!
The Graphical User Interface (GUI) is a user interface that allows you to interact with your
- Double-click the downloaded file to open it. It should ask whether you want to install and then prompt you for a password to verify your identity.
- You’ll have a new icon on your menu bar, as well as a new entry in the Internet part of the menu, in no time.
Today’s Raspberry Pi Bootcamp Sale is 10% off. Take it a step farther. I’m here to assist you in getting started with the Raspberry Pi. Learn all of the necessary abilities in the proper sequence.
Using the Command Line to Install AnyDesk
You may quickly install AnyDesk using the online URL above, but if you want, you can also use the command line.
- Type wget https://download.anydesk.com/rpi/anydesk 18.104.22.168-1armhf.deb in the terminal. The file will be downloaded to the current directory.
- After that, sudo dpkg -I anydesk 22.214.171.124-1armhf.deb (if it’s still the current version) will install it.
- Finally, double-check that you have all of the essential files, including: apt install -f sudo
The software is now ready to use after it has been completed. It’s accessible through the menu bar icon or the internet group in the main menu.
Are you having trouble navigating the Linux command line? Check out this post for the most critical commands to memorize, as well as a free cheat sheet you can download to keep the commands close to hand.
Install AnyDesk on a PC or a Mac.
Two computers must be linked in order to utilize AnyDesk. You can utilize two Raspberry Pis, but for simplicity, you’ll probably want to connect to or from a Windows or Mac computer. I’ll show you how to do it immediately.
Install AnyDesk on your Windows computer.
Visit the AnyDesk Windows download link at Remote Desktop Software for Windows – AnyDesk to install AnyDesk on Windows. Version 6 is required for Windows, not the MSI version. Run the installer when you’ve downloaded it, and then start AnyDesk.
AnyDesk starts in a temporary or uninstalled mode. To finish the installation, go to the top right of the window and select Install AnyDesk on this Device.
On a Mac, install AnyDesk.
Visit the AnyDesk Macintosh download site at Remote Desktop Software for macOS – AnyDesk and click the download now button to install AnyDesk on your Mac.
Double-click the.img file to launch a finder window after it has been downloaded. To install the software, drag it to the Applications folder.
After that, you may start AnyDesk from the programs folder. When you initially run it, it will inform you that it requires system rights to function:
In the system settings window that appears, click the Configure button, then check the box next to AnyDesk in both the Accessibility and Screen Recording sections.
From a Raspberry Pi, connect to AnyDesk.
We may utilize AnyDesk to connect the two machines now that they are both up and running. Let’s get the Raspberry Pi connected to your other PC. You’ll need the “This Desk” number from the other computer, and that machine must be running AnyDesk.
Configuration of the Basics
To begin, click the menu bar icon, which will open the New Session box. The “This Desk” number, which you’ll need to access your Pi from another computer, can be found in the top left corner. Press return after entering the “This Desk” number from the other computer into the “Remote Desk” field. Someone is attempting to connect to the remote computer, and the remote computer will get an alert. The session will begin when you accept.
At that time, the distant computer is under the control of the connection’s initiator.
In only 30 days, you’ll have mastered the Raspberry Pi. Today’s sale is 10% off. Get the eBook here. In a 30-day challenge, uncover the mysteries of the Raspberry Pi. Learn important Linux skills and put them to work on a variety of projects.
For a totally remote connection, create a password.
It’s also feasible to set up a remote access password so that the remote computer doesn’t have to ask for permission every time it connects. This must be done on each remote computer, and the passwords must be accessible on the controlling machine.
There is a link to “Set password for unattended access” under the “This Desk” number. This will bring up the security settings when you click it. The security settings at the top of the window must be opened before anything may be modified. Once opened, go halfway down the page and click the box labeled “Enable unattended access.” Then click to create an access password.
Once that’s done, this machine may be accessed from any remote computer that has AnyDesk installed and has the Desk Number and password.
When a connection is established, you have the option of entering the password if you have one or waiting for the other computer to authorize the connection. If you have the password, there is no need to wait; the connection will begin as soon as the network can be reached. You must, however, leave the second computer turned on, as I learned by mistake.
When compared to other remote access software I’ve tried, one of the unique features of AnyDesk is the utilization of a distinct window to transmit data. It is regarded as though it were a distinct link. Thankfully, both can be linked at the same time.
Put the number for the remote desk into the box (if it isn’t already there from a previous connection) and then click the first icon on the right to start a file transfer. If the password is not set or accessible, a popup will appear asking for it, or you may wait for the remote machine to accept the connection.
Once connected, it appears as a series of File Transfer Protocol displays, with the local computer on the left and distant files on the right. Like an FTP connection, files may be sent in either way.
Take a look at my cheat sheet! Get your free PDF file with all the Raspberry Pi instructions you’ll ever need!
How much does AnyDesk set you back?
AnyDesk is available for personal use at no cost. When utilizing it for business reasons, a license is necessary, which costs between $10 and $50 per month.
We’ll presume you can continue to use the Pi for free since most of us use it for pleasure.
When compared to the built-in VNC, why would I choose AnyDesk?
If you’ve spent any time with your Raspberry Pi, you’re presumably aware that VNC, a remote access software, comes pre-installed on the operating system. VNC is a great way to connect to your Raspberry Pi from another computer (versions are available for Windows and Mac, as well as several varieties for Linux and Unix). Unfortunately, unless you are licensed, it does not work the other way around.
AnyDesk works in both ways and offers apps for both iOS and Android devices, as well as Chrome OS. This enables you to operate (or at least see and transfer data) your computer from your iPhone or Android mobile.
There is a comprehensive help system accessible at http://support.anydesk.com/Help Center if you need additional information or assistance with a particular function.
If you’d want to test some more options for remote access to your Raspberry Pi, check out this post with five more options.
Do you want to be a member of the group? Join us here to receive behind-the-scenes information, my ideas, and more while also helping me to keep this website going.
Resources for the Raspberry Pi
Don’t know where to begin? Learn all there is to know about the Raspberry Pi, stop looking for assistance all the time, and start enjoying your projects. Now you may watch the Raspberry Pi Bootcamp course.
In only 30 days, you’ll have mastered the Raspberry Pi. You don’t want just the basics? This book is for you if you want to learn the best ways to become a Raspberry Pi expert. With step-by-step instructions, you may learn important Linux skills and perform a variety of tasks. Get the e-book here.
VIP Members’ Club You may also join the Patreon community if you simply want to hang out with me and show your support. I provide you early access to my stuff and share behind-the-scenes information there. When you join, you’ll also receive a shoutout. More information may be found here. Do you need assistance building anything using Python? Any Python script for your Raspberry Pi may be created, understood, and improved. Learn the basics in a step-by-step manner, rather than wasting time on irrelevant ideas. Now is the time to get the e-book.
This website also contains all of my tool and hardware suggestions.
The “anydesk raspberry pi ubuntu” is a tutorial for installing AnyDesk on Raspberry Pi.
Frequently Asked Questions
How do I install AnyDesk on Raspberry Pi?
A: You will need to use the following commands, sudo apt-get install git python-pip Next you can type this command if its your first time using pip on Raspberry Pi. sudo pip download AnyDesk Run anydsk with the following command. python anydsk /path/to/installation
Can AnyDesk run on Raspberry Pi?
A: Unfortunately, Raspberry Pi cannot run any desktop or window OS and AnyDesk requires a Windows OS.
How do I uninstall AnyDesk from Raspberry Pi?
A: There are two ways to uninstall AnyDesk from your Raspberry Pi.
- anydesk raspberry pi command line
- anydesk arm64 ubuntu
- install anydesk ubuntu arm64
- anydesk download
- uninstall anydesk raspberry pi
|
OPCFW_CODE
|
Source maps?
Forgive me if this is a repeat but I didn't find any issues about this.
Does LiveScript compiler support generating source maps?
The reason I ask is because both coffeescript and TypeScript seem to support it.
+1
+1
+1
"+1"s are not helping, really.
GitHub already have a "+1 button". It's on the right side of the Star/Unstar button.
Well with the lack of a voting system for issues, there's no other way to express community interest in a particular issue. Forking is not a solution.
Regardless, I'm going to take a stab at implementing this, since it's about time LiveScript got sourcemaps...
If there's no developer interest in a specific feature, then community interest is a bit moot.
I believe this feature has been road mapped already. #260
I think part of the hold up is that LiveScript is/was meant to be re-written on top of CoffeeScript-Redux compiler which has source map support already. If anyone is interested in helping with source maps, looking into that would be helpful.
f CoffeeScript-Redux compiler which has source map support already
If you have the energy to work on both, do it
I thinks thats fair to feel that way, and I hear where you're coming from.
So what would be the best way for me to help the project?
We all keep asking for this feature, but there's all this underlining work that needs to be done before that can happen. What areas would ideally be addressed first before Source Map support?
I would be willing to work on various things if it meant we would eventually be able to add this.
My two cents here: Sweet.js is another superset language of JavaScript, which makes use of an npm module called source-map, which has all the mechanics to generate source-maps. If you want to consume to keep concerns separated, you would probably just need to hook it with your complier internals.
Sweet.js: https://github.com/mozilla/sweet.js.
source-map: https://github.com/mozilla/source-map.
#481
I don't mind the +1s.
I don't have the time to commit the working on this complicated feature, however, I think I'm pretty good at merging in pull requests and if anyone would like to work on it and submit a pull request that would be great. The +1 just signify to any potential contributor that their work in creating a pull request would greatly be appreciated.
Isn't it about time to start on LS2 now that CSR is somewhat stable and ES6 is coming up?
Many breaking changes could be made. I am thinking about es6 modules, let and native const support.
Would it be clever to make such a heavy investment in 1.3 now?
As you can see, I've just submitted a PR which adds basic source-map support. It could do with a lot more testing though (hint hint, everyone who +1ed this!) and it's only fairly rough at the moment.
:+1: Suddenly this is all happening so fast! :D
:+1: Suddenly this is all happening so fast! :D
That's why I think a fork is useful while +1s are just noise: They get things moving.
Bitching about +1s is noise...
@demux No. Explaining why people should contribute more, with no fear, instead of just asking for things. You've missed my whole point.
@blvz So you think that people are afraid to contribute? If that's the case at least some of the time, then obviously they shouldn't be afraid.
But I'm saying; asking for something isn't a bad thing. I believe that many of the people asking for features are them selves active, contributing members of the open source community.
@demux I think most people are. But maybe "fear" wasn't the right word to describe it. Most people in user-land don't have the motivation to give it a go. While I agree that asking isn't a bad thing, there are better ways to do it.
But hey, that's just my opinion, which I've thought it was worth sharing (while I do "bitch" about every +1s, in silence). Anyone's welcome to disagree and even gkz said he's ok with the +1s, so this discussion already did more noise than the +1s themselves.
|
GITHUB_ARCHIVE
|
SAN FRANCISCO -- Microsoft's not the only big tech player taking a gamble on a new direction. Mozilla made an aggressive argument for Firefox OS to Web and app developers Monday night at its confusingly named Mobile Monday Mixer -- confusing because the company held the event last night at its San Francisco office.
Firefox OS is Mozilla's stab at providing an open-source mobile OS alternative to Apple's closed iOS and Google's semi-open Android -- not unlike what Firefox did for Web browsers when it debuted in 2004. But Mozilla is also recycling software it's creating for Firefox OS in its desktop and Android browsers to improve them. At its heart, Firefox OS is an attempt to keep Mozilla relevant as the rest of the world goes mobile.
Sullivan told the crowd last night that Mozilla has created a way -- technically, a "payments API" -- for regular Web browsers to handle payments the way that native mobile apps do. He said that Mozilla expects this development to drive the creation of multiple app stores -- the exact opposite of Apple's single-store format -- and will allow developers to distribute their apps directly to customers.
"You can't do A/B testing in the iOS app store," Sullivan said, referring to the developer method of testing two different options with their users.
But that payments API alone won't cause app makers will flock to Firefox OS. Sullivan argued that the learning curve for any operating system that wants to compete with iOS and Android -- and, by implication, Microsoft's Windows Phone 8 -- must be easy for developers to pick up.
Mozilla's director of research Andreas Gal didn't mince his words when asked what he thought of Windows Phone 8, Apple's iOS, and Google's Android at a demo of the Firefox OS in September. "Microsoft will be the last to experiment with proprietary native code. It will either fail and make Apple and Google the only two to have successful private systems, or it will succeed and it will take longer for them to go away."
Firefox OS's reliance on the Web is also its biggest weakness. "One of the great things about the Web is that it's open, scrappy, and you can build your own business model," Sullivan noted. But, he said, with a chuckle, "what's bad about the Web is that it's open, scrappy, and you have to build your own business model."
Others at Mozilla are also cautiously enthusiastic about Firefox OS's chances of success. CEO Gary Kovacs told me back in September that it was simply the next phase of the non-profit organization's goals. "It gets back to the overriding mission that Mozilla began 15 years ago. We did it with the browser, we've now seen the explosion of Web pages and value for the world."
Kovacs noted that there are 2.5 billion people connected to the Web now, and pegged Mozilla's success towards getting the next 2.5 billion online. Firefox OS, he said, will be "as disruptive as when we put out Firefox."
Gal made it clear that that even when the operating system is ready for the public, developers still will have to put in some effort to make their Web sites ready to use as apps. This includes icon creation, but he predicted that, overall, creating an app won't be that different from building a Web site.
"It's not so much about convincing people to build HTML5 apps, but to make content better optimized to run on mobile," he said. And while Kovacs conceded that Mozilla didn't do enough in the past to promote better performance standards, he contended that today is different.
The web stack is strong enough as a programming model to provide really rich applications. Last year when we started Boot to Gecko on Github, we realized that we would have a thinner stack than the dominant operating systems. We also realized that the Web stack is never perfect, it's never complete. But the Web always catches up, and it has the broadest reach.
Last night, Sullivan described the birth of Firefox OS as highly unusual. "Andreas Gal started Boot to Gecko [the original name for Firefox OS] with nothing but a readme file. I've never seen such interest in nothing but a readme file."
Along with the appeal of coding in a language most developers already are familiar with, Carlos Domingo, CEO of Mozilla's first carrier partner Telefonica Digital, said in September that they are working on developer incentives. What those are and if they involve cash incentives, though, he refused to say.
We've known since the deal with Telefonica was announced earlier this year at Mobile World Congress that Firefox OS is intended to run on lower-powered phones, about as much juice as is required to run Android 2.2 Froyo -- an 800 MHz Qualcomm chip, with only 256 MB of RAM -- and Domingo said in September that the goal is to price them around $100 unsubsidized, the higher end for feature phones.
Gal reiterated in September that Firefox OS is more than Mozilla's mobile incursion. "This isn't the Mozilla platform, it's the HTML5 platform. The long term strategic goal is to have competing better implementations of HTML5, not just one."
Eich agreed that it's about upgrading the Web so that the browser can power a smartphone. "One of the brain twisters about Firefox OS is [what happens] when you have all the top Web engines able to run the same sites [the same way]. Hopefully, we won't be like the original Star Trek and get cancelled after the third season."
|
OPCFW_CODE
|
The Atbashy River is a left tributary to the Naryn located in the Naryn catchment in Kyrgyzstan (Figure 1). The river is suitable for small scale hydropower development and has been chosen as a case study under the EU-supported Hydro4U Project (www.hydro4u.eu). At a demonstration site in the basin, the project implements a shaft power plant which is a low-head, run-of-river power system with fish friendly intake. Figure 2 shows the site.
Mean discharge over the observation record was 16.6 m3/s, which corresponds to 350 mm mean specific discharge. The discharge regime is nivo-glacial and strongly seasonal with minimum flows during the cold winter months (October through March, mean discharge of 8.25 m3/s) and peak flows during the warm season from April through September (mean discharge of 24.8 m3/s). In winter, the river is normally covered with a significant ice cover (> 30 cm).
As part of the plant’s feasibility study, hydrosolutions GmbH conducted a detailed flood frequency analysis to study return levels for 2-year, 20-year, 50-year, and 100-year events under current and future climate. Insights of such detailed risk analysis of hydrological extremes can inform structure design, influence operation and management, and help quantify climate impacts on infrastructure and operations.
To robustly estimate return levels, we need long time series. In reality, we do not have such a long observation record. Hence,we use a deep neural net (DoppelGANger model, see https://arxiv.org/abs/1909.13403) with Google’s Tensorflow library and Colab pro (https://colab.research.google.com/) for training a deep neural network on past daily observed and future CMIP6 climate data. With the trained network, we can generate synthetic 1’000-year climate time series and run the hydrological model with this forcing. The resulting daily 1’000-year discharge data can then be used for robust flood frequency analysis (Figure3).
Our results suggest that under the high emission scenario SSP3-7.0, we will see more extreme hydrological extremes in the future in Central Asia. In other words, for a given return period, we expect higher return levels, except for return periods 5 years and flows smaller than 120 m3/s. Similarly, for a given return levels, we expect shorter return periods, except for smaller return levels and ranges as mentioned before. Whereas the current 50-year mean return level is 158 m3/s, it is expected in increase to 174 m3/s under the SSP3-7.0 scenario, as an example (Figure 3).
In conclusion, the zone of runoff formation in high mountain Central Asia will experience more severe hydrological extremes more frequently in the future. This requires targeted mitigation and adaptation measures for the management of the increasing risks to ensure the safety of livelihoods and infrastructure locally.
|
OPCFW_CODE
|
FeatureHub’s original and primary architecture reflects its focus on it being a streaming update platform, one where updates to feature values are streamed out to listening clients in near-realtime.
Since its release, other patterns for use have emerged and we are, with the 1.5.0 release, making a few additions and alterations to match these expectations.
With the 1.5.9 release we have moved to using Cloud Events and supporting an increased number of async layers - NATS (preferred), Google PubSub (Release), AWS Kinesis (beta).
FeatureHub is available as a bundle (for streaming this is the Party Server) or as individual pieces for better scalability and isolation of your Administration side from your applications that require their features.
The extended streaming deployment is designed to scale to millions, even billions of requests, while mostly isolating your backend system from being overwhelmed by requests. With release 1.5.9 we have moved to the preferred "Dacha2" Lazy Cache system, and deployed it conceptually looks like this:
|Communication between Edge and Dacha(Cache) is shown via REST on this image, which can be configured optionally. By default, it is via NATS.
The non-streaming platform (Party-Server-Ish) is designed to scale to less - tens of thousands, possibly more if you have a limited number of environments, or a larger number of read replicas. It is also designed to be much simpler and cheaper to deploy on environments like Google Cloud Run or Azure Container Instances. Deployed, conceptually it looks like this:
The way that FeatureHub is architected is designed for various different implementation sizes and scales, but fundamentally there is a separation of concerns of all the main components, so they can be scaled independently as and when needed.
We discuss the main deployment options of FeatureHub in the installation section and what each part is for.
This is the main admin server and is the source of truth for the application. All users login here (via local, OAuth2 or SAML), all portfolios, applications, environments, groups, features, etc are all controlled via this. This is always bundled with a UI and backend server and is configured to talk to some external database.
If MR server goes down, it won’t affect the operation of end-user clients, all their data is in the cache (or in the database if you use party-server-ish or edge-rest).
The "Admin" API is defined in an OpenAPI schema and can be generated for a wide variety of platforms. We currently include generated clients for Dart, Typescript, C# and Java, but it is not limited to these.
NATS is the Cloud Native Open Source messaging platform that has been around for a very long time, is very fast and is very adept at scaling to huge volume in a hugely distributed fashion. We use it for FeatureHub to transfer environments, features and service accounts around the network to feed Dacha and Edge.
Dacha is where the data that is required by every SDK is cached, and you need at least one of these for an operational FeatureHub system. It can be run in-process (using the Party Server design), or separately. Edge always talks to Dacha which holds permissions, environments, features, and pre-calculated etags for appropriate requests.
There are two choice for Dacha: Dacha1 and Dacha2 (Dacha2 is available from v1.5.9).
It must use NATS as it relies on features only NATS has
When it starts it completely fills its internal cache, either from another NATS or via the MR. This makes it completely isolate your servers from MR, no deliberate "miss" traffic can impact your Management Repository
Edge is able to talk to Dacha ove NATS
Filling its internal cache can take some time with hundreds or thousands of environments, and MR must be available for it to do so, so it can lead to a complicated start for a new k8s cluster or rollout. This can delay it from being healthy depending on how fast it can fill its cache, which can lead to operational complexity.
Dacha2 is introduced in 1.5.9 and exists to support multiple async layers. It is a lazy cache:
it supports multiple async layers (NATS, Google Pub/Sub, AWS Kinesis (beta), we are looking at others)
it is Cloud Events first
it caches misses as well as hits to ensure consistent misses do not make it to MR
it automatically updates itself as new environments, features, and service account changes are broadcast from MR, so a newly created environment will be a "cache hit" by default.
Edge is intended to be where the communication with the SDKs live. It is intended to be high volume endpoint but retain little data - only who is connected to it and which environment they are listening to for feature updates. Access to Edge is given by a combination of Service Account and Environment IDs (the API key). That combination is given a permission structure back in MR, and is usually simply READ. For test accounts, a service account can also have the ability to change features as it may need to while doing end-to-end tests.
It does not attempt to retain the active feature list for each Service Account + Environment. It is highly multithreaded and concentrates requests to Dacha.
It is expected that you will normally run at least two of these in any kind of environment.
Edge-REST provides only GET and PUT (for updating features for tests) API options. It allows the SDK to poll for updates but not get realtime updates, and will talk directly to the database. It can be deployed on its own or as part of party-server-ish.
If you would like to serve features faster globally or would like to cache feature flags on CDN, FeatureHub has affiliated with Fastly - real-time content delivery network. Integration with Fastly can save costs on deployment infrastructure and make the FeatureHub application stack considerably faster around the world.
We can provide you with the environment variables, and the configuration steps necessary to integrate Fastly with FeatureHub. Pulumi configuration, which can be translated easily into Terraform is also available on demand. Please contact us on email@example.com for further information.
|
OPCFW_CODE
|
[05:11] <lotuspsychje> good morning to all
[05:19] <lotuspsychje> schlerpM: hello
[05:19] <schlerpM> Hi
[05:19] <lotuspsychje> schlerpM: you can check the ircd version loaded in the MOTD from a network you connect i think
[05:20] <lotuspsychje> schlerpM: most ircd's build their own custom ircd for safety reasons
[05:20] <schlerpM> Ahh I see!
[05:20] <schlerpM> I have never used ircd until yesterday
[05:20] <schlerpM> *irc
[05:20] <lotuspsychje> schlerpM: wich one did you test?
[05:21] <schlerpM> None yet!
[05:21] <schlerpM> Was think ircd-hybrid and kiwiirc as a front end that I could just give less tech savvy colleagues the link to
[05:21] <lotuspsychje> schlerpM: its pretty complicated with leaf or server layout
[05:22] <lotuspsychje> you need to compile an ircd as server first
[05:22] <schlerpM> Yeah would you recommend and starting points for me?
[05:22] <lotuspsychje> then compile few leaf ircd's
[05:23] <lotuspsychje> because if one server splits, you can rely on the other ircd's
[05:23] <schlerpM> Ahhh I see!
[05:23] <lotuspsychje> schlerpM: are you gonna host yourself, or get an ircd hosting?
[05:23] <schlerpM> It's all going to be internal to our network and hosted from a vps
[05:23] <schlerpM> That's my plan anyway
[05:23] <lotuspsychje> schlerpM: from your own vps, or a vps rental?
[05:24] <schlerpM> I rent a vps
[05:24] <schlerpM> Linode
[05:24] <EriC^^> morning lotuspsychje
[05:24] <lotuspsychje> right
[05:24] <lotuspsychje> EriC^^: hello mate :p
[05:24] <lotuspsychje> schlerpM: i strongly advise to rend specialized ircd hosting company
[05:24] <lotuspsychje> schlerpM: their cheap, got support and anti-ddos
[05:25] <lotuspsychje> schlerpM: with own vps, you will get bottlenecks for sure
[05:26] <schlerpM> Ahh I see!
[05:26] <lotuspsychje> schlerpM: because other data will flow on your vps right
[05:26] <schlerpM> I'm talking about setting up a network for about 30 people max just for internal discussions in out office tbh
[05:26] <schlerpM> Nah this is my spare vps I was talking about
[05:27] <lotuspsychje> schlerpM: hmm okay
[05:27] <lotuspsychje> schlerpM: for 30 users, you can do this
[05:27] <lotuspsychje> server-leaf
[05:27] <lotuspsychje> 2 ircd's
[05:28] <lotuspsychje> linked to each other
[05:28] <schlerpM> Yep
[05:28] <schlerpM> And what ircd software would you suggest lotuspsychje?
[05:30] <lotuspsychje> schlerpM: unrealircd would do the trick for less users
[05:30] <lotuspsychje> https://www.unrealircd.org/
[05:30] <schlerpM> Ahh I was recommended that yesterday
[05:33] <schlerpM> Thank you lotuspsychje !
[05:33] <lotuspsychje> no sweat
[11:30] <BluesKaj> Howdy folks
[13:22] <lotuspsychje> good afternoon to all
[13:22] <EriC^^> afternoon lotuspsychje
[13:22] <EriC^^> how's the house hunting going?
[13:22] <lotuspsychje> EriC^^: hello mate, your following me at 15h22?
[13:23] <EriC^^> nah i just logged on
[13:23] <lotuspsychje> EriC^^: we found a small house for hire
[13:23] <lotuspsychje> me too
[13:23] <EriC^^> haven't put it in the favorites yet
[13:23] <EriC^^> cause it autoconnects sometimes and it starts disconnecting
[13:23] <lotuspsychje> the owner needs to aprove first
[13:24] <EriC^^> cool
[13:24] <EriC^^> how much?
[13:24] <lotuspsychje> EriC^^: yeah ive seen you ping timeout sometimes
[13:24] <EriC^^> yeah it does the max send thing
[13:24] <lotuspsychje> EriC^^: what kind of isp speeds in lebanon?
[13:25] <EriC^^> it's ok, it's 23mbps i think
[13:25] <lotuspsychje> nice
[13:25] <EriC^^> i use another connection sometimes which is a local guy who gives wireless net
[13:25] <EriC^^> he uses wep so it was too easy
[13:25] <EriC^^> :D
[13:25] <lotuspsychje> thank you neighbour :p
[13:26] <EriC^^> not my neighbor, it's a local distributor
[13:26] <lotuspsychje> lol
[13:26] <EriC^^> wouldn't use my neighbor's
[13:26] <EriC^^> i'm serious, he distributes to the area, and has his phone number you call he gives you the pass i guess
[13:26] <EriC^^> but it's slow as heck
[13:27] <lotuspsychje> just good to irc :p
[13:27] <EriC^^> so it's my backup connection :P
[13:27] <EriC^^> yeah, i use it when i'm out of download traffic on the other one, between recharges or if i wanna download something huge and leave it overnight or something
[13:28] <lotuspsychje> i cant complain here: vdsl2 50mbit without data limit
[13:28] <EriC^^> there's another one which i think is faster, he has like 4-5 ssid's and i've seen his shop in the area
[13:29] <EriC^^> but he uses wpa2, i gave it a crack once but it'd take i think 1 week to go over all numbers 10digits
[13:29] <EriC^^> so i said screw it
[13:29] <lotuspsychje> lol
[13:29] <EriC^^> wep takes like 10mins to crack
[13:29] <lotuspsychje> EriC^^, the lebanon h4cker
[13:29] <EriC^^> lol
[13:30] <EriC^^> this ain't h4cking :P
[13:30] <lotuspsychje> its debatable :p
[13:31] <EriC^^> anyways no harm no foul, he isn't going to miss the traffic
[13:31] <EriC^^> it baffles me that he still uses wep though
[13:31] <lotuspsychje> http://linux.softpedia.com/blog/introducing-the-unofficial-whatsapp-client-for-linux-mac-and-windows-485195.shtml
[13:31] <lotuspsychje> wep is a pain indeed
[13:32] <EriC^^> i think he gave the customers the password a long time ago, and never switched to wpa2 when wep got easy to crack
[13:46] <lordievader> Hehe...
|
UBUNTU_IRC
|
Install the tree dependency in case you don’t have it so we can view our directory structure (sudo apt install tree).
Looks like we have two folders which contain images of cells which are infected and healthy.
We can get further detail of the total number of images using the following code.
Looks like we have a balanced dataset of 13779 malaria and non-malaria (uninfected) cell images.
Let’s build a dataframe from this which will be of use to us shortly as we start building our datasets.
Build and Explore Image DatasetsTo build deep learning models we need training data but we also need to test the model’s performance on unseen data.
We will use a 60:10:30 split for train, validatation and test datasets respectively.
We will leverage the train and validation datasets during training and check the performance of the model on the test dataset.
Now obviously the images will not be of equal dimensions given blood smears and cell images will vary based on the human, the test method and the orientation in which the photo was taken.
Let’s get some summary statistics of our training dataset to decide optimal image dimensions (remember we don’t touch the test dataset at all!).
We apply parallel processing to speed up the image read operations and based on the summary statistics, we have decided to resize each image to 125×125 pixels.
Let’s load up all our images and resize them to these fixed dimensions.
We leverage parallel processing again to speed up computations pertaining to image load and resizing.
Finally we get our image tensors of desired dimensions as depicted in the preceding output.
We can now view some sample cell images to get an idea of how our data looks like.
Based on the sample images above, we can notice some subtle differences between malaria and healthy cell images.
We will basically make our deep learning models try and learn these patterns during model training.
We setup some basic configuration settings before we start training our models.
We fix our image dimensions, batch size, epochs and encode our categorical class labels.
The alpha version of TensorFlow 2.
0 was released on March, 2019 just a couple of weeks before this article was written and it gives us a perfect excuse to try it out!Deep Learning Model Training PhaseIn the model training phase, we will build several deep learning models and train them on our training data and compare their performance on the validation data.
We will then save these models and use them later on again in the model evaluation phase.
Model 1: CNN from ScratchOur first malaria detection model will be building and training a basic convolutional neural network (CNN) from scratch.
First let’s define our model architecture.
Based on the architecture in the preceding code, our CNN model has three convolution and pooling layers followed by two dense layers and dropout for regularization.
Let’s train our model now!We get a validation accuracy of 95.
6% which is pretty good, though our model looks to be overfitting slightly looking at our training accuracy which is 99.
We can get a clear perspective on this by plotting the training and validation accuracy and loss curves.
Learning Curves for Basic CNNThus we can see after the fifth epoch, things don’t seem to improve a whole lot overall.
Let’s save this model for future evaluation.
h5')Deep Transfer LearningJust like humans have an inherent capability of being able to transfer knowledge across tasks, transfer learning enables us to utilize knowledge from previously learned tasks and apply them to newer, related ones even in the context of machine learning or deep learning.
A comprehensive coverage of transfer learning is available in my article and my book for readers interested in doing a deep-dive.
Ideas for deep transfer learningFor the purpose of this article, the idea is, can we leverage a pre-trained deep learning model (which was trained on a large dataset — like ImageNet) to solve the problem of malaria detection by applying and transferring its knowledge in the context of our problem?We will apply the two most popular strategies for deep transfer learning.
Pre-trained Model as a Feature ExtractorPre-trained Model with Fine-tuningWe will be using the pre-trained VGG-19 deep learning model, developed by the Visual Geometry Group (VGG) at the University of Oxford, for our experiments.
A pre-trained model like the VGG-19 is an already pre-trained model on a huge dataset (ImageNet) with a lot of diverse image categories.
Considering this fact, the model should have learned a robust hierarchy of features, which are spatial, rotation, and translation invariant with regard to features learned by CNN models.
Hence, the model, having learned a good representation of features for over a million images, can act as a good feature extractor for new images suitable for computer vision problems just like malaria detection!.Let’s briefly discuss the VGG-19 model architecture before unleashing the power of transfer learning on our problem.
Understanding the VGG-19 modelThe VGG-19 model is a 19-layer (convolution and fully connected) deep learning network built on the ImageNet database, which is built for the purpose of image recognition and classification.
This model was built by Karen Simonyan and Andrew Zisserman and is mentioned in their paper titled ‘Very Deep Convolutional Networks for Large-Scale Image Recognition’.
I recommend all interested readers to go and read up on the excellent literature in this paper.
The architecture of the VGG-19 model is depicted in the following figure.
VGG-19 Model ArchitectureYou can clearly see that we have a total of 16 convolution layers using 3 x 3convolution filters along with max pooling layers for downsampling and a total of two fully connected hidden layers of 4096 units in each layer followed by a dense layer of 1000 units, where each unit represents one of the image categories in the ImageNet database.
We do not need the last three layers since we will be using our own fully connected dense layers to predict malaria.
We are more concerned with the first five blocks, so that we can leverage the VGG model as an effective feature extractor.
For one of the models, we will use it as a simple feature extractor by freezing all the five convolution blocks to make sure their weights don’t get updated after each epoch.
For the last model, we will apply fine-tuning to the VGG model, where we will unfreeze the last two blocks (Block 4 and Block 5) so that their weights get updated in each iteration (per batch of data) as we train our own model.
Model 2: Pre-trained Model as a Feature ExtractorFor building this model, we will leverage TensorFlow to load up the VGG-19 model, and freeze the convolution blocks so that we can use it as an image feature extractor.
We will plugin our own dense layers at the end for performing the classification task.
Thus it is quite evident from the preceding output that we have a lot of layers in our model and we will be using the frozen layers of the VGG-19 model as feature extractors only.
You can use the following code to verify how many layers in our model are indeed trainable and how many total layers are present in our network.
We will now train our model using similar configurations and callbacks which we used in our previous model.
Refer to my GitHub repository for the complete code to train the model.
We observe the following plots showing the model’s accuracy and loss.
Learning Curves for frozen pre-trained CNNThis shows us that our model is not overfitting as much as our basic CNN model but the performance is not really better and in fact is sligtly lesser than our basic CNN model.
Let’s save this model now for future evaluation.
h5')Model 3: Fine-tuned Pre-trained Model with Image AugmentationIn our final model, we will fine-tune the weights of the layers present in the last two blocks of our pre-trained VGG-19 model.
Besides that, we will also introduce the concept of image augmentation.
The idea behind image augmentation is exactly as the name sounds.
We load in existing images from our training dataset and apply some image transformation operations to them, such as rotation, shearing, translation, zooming, and so on, to produce new, altered versions of existing images.
Due to these random transformations, we don’t get the same images each time.
We will leverage an excellent utility called ImageDataGenerator in tf.
keras that can help us build image augmentors.
We do not apply any transformations on our validation dataset except scaling the images (which is mandatory), since we will be using it to evaluate our model performance per epoch.
For detailed explanation of image augmentation in the context of transfer learning feel free to check out my article if needed.
Let's take a look at some sample results from a batch of image augmentation transforms.
Sample Augmented ImagesYou can clearly see the slight variations of our images in the preceding output.
We will now build our deep learning model making sure the last two blocks of the VGG-19 model is trainable.
We reduce the learning rate in our model since we don’t want to make to large weight updates to the pre-trained layers when fine-tuning.
The training process of this model will be slightly different since we are using data generators and hence we will be leveraging the fit_generator(…) function.
This looks to be our best model yet giving us a validation accuracy of almost 96.
5% and based on the training accuracy, it doesn’t look like our model is overfitting as much as our first model.
This can be verified with the following learning curves.
Learning Curves for fine-tuned pre-trained CNNLet’s save this model now so that we can use it for model evaluation on our test dataset shortly.
h5')This completes our model training phase and we are now ready to test the performance of our models on the actual test dataset!Deep Learning Model Performance Evaluation PhaseWe will now evaluate the three different models that we just built in the training phase by making predictions with them on the data from our test dataset, because just validation is not enough!.We have also built a nifty utility module called model_evaluation_utils, which we will be using to evaluate the performance of our deep learning models with relevant classification metrics.
The first step here is to obviously scale our test data.
The next step involves loading up our saved deep learning models and making predictions on the test data.
The final step is to leverage our model_evaluation_utils module and check the performance of each model with relevant classification metrics.
Looks like our third model performs the best out of all our three models on the test dataset giving a model accuracy as well as f1-score of 96% which is pretty good and quite comparable to the more complex models mentioned in the research paper and articles we mentioned earlier!ConclusionWe looked at an interesting real-world medical imaging case study of malaria detection in this article.
Malaria detection by itself is not an easy procedure and the availability of the right personnel across the globe is also a serious concern.
We looked at easy to build open-source techniques leveraging AI which can give us state-of-the-art accuracy in detecting malaria thus enabling AI for social good.
I encourage everyone to check out the articles and research papers mentioned in this article, without which it would have been impossible for me to conceptualize and write this article.
Let’s hope for more adoption of open-source AI capabilities across healthcare making it cheaper and accessible for everyone across the world!This article has been adapted from my own article published previously in opensource.
comIf you are interested in running or adopting all the code used in this article, it is available on my GitHub repository.
Remember to download the data from the official website.
.. More details
|
OPCFW_CODE
|
Where to start
Members - Reputation: 202
Posted 13 June 2011 - 04:21 AM
I have just started learning the basics of c++ using books and the internet, so this is where my plan starts. First I want to begin learning Open Gl using the NeHe tutorials as they seem to be quite well recommend. Once i have a handle over this I will start some small projects like pong and tetras just to drill it all in as doing something yourself if my way of learning (pong already done ah well ). My last idea was to then move from the raw windows Open GL to GLUT so that my programming then becomes portable and can be used across multiple platforms.
Thx for any advice in advanced
Members - Reputation: 97
Posted 13 June 2011 - 04:37 AM
SDL is a 2D library that can have support from OpenGL, so it's a good progression phase and would be a better platform for learning how games work; your experience with Java and C# should give you an advantage here.
I tried learning a complicated API first with C++, the Win32 API, however this threw me off coding and came back after several months and started with SDL as it's a low level API and builds up the thinking of an API and then the more complicated ones like OpenGL and Win32 API can be learned fairly easily.
This was how I learnt OpenGL and large parts of Win32 API, but of course we all learn differently. Good luck!
I do not care for reputation, merely to help fellow users.
Members - Reputation: 202
Posted 13 June 2011 - 04:03 PM
update: I have tried the Lazy Foo website but their tutorials are for VS 2005 and after 2 hours of tinkering i still carnt get the SDL liberties to work with VS 2010
update: nvm fixed it
thanks again in advanced
Members - Reputation: 308
Posted 13 June 2011 - 08:53 PM
Progress came a lot faster after I decided to take a "step back" and work with SDL. IMO it's a perfect fit for games like Tetris, Pong, Pac-Man, and many other 2D games in that because you don't have to concern yourself so much with having the right code in the right places to make it work you can spend more time absorbing the "lessons" those games have to offer.
Members - Reputation: 120
Posted 14 June 2011 - 12:19 AM
Then I moved to coding in java since I m more comfortable with java and many post says that u should program in what you are comfortable with ..
So there are tutorials on you tube about java game programming that'll get you started but the technique isnt refined .. so you got to play around with it
Once you get the hang of the game loops .. search for dewitters game loops .. they provide very interesting details about game loops and the pros and corns ...
Then try developing a side scroller ...
If you feel comfortable with all of this then move to openGL .. if you are developing in java then you can use JOGL or LWJGL (my current choice) as a wrapper for openGL...
currently I m learning from NeHe .. but the tutorials use a deprecated library devIL ... so download the new library known as SLICK .. and then combine the tutorials to get u a good base in openGL API
I have currently reached this far ... if ne1 knows how to go further please let me know ...
Hope this helps you MaxFIre
Moderators - Reputation: 14084
Posted 14 June 2011 - 10:08 AM
"ever once" --> "everyone's"
i have taken ever once advice
Making games fun and getting them done.
Please do not PM me. My email address is easy to find, but note that I do not give private advice.
Members - Reputation: 778
Posted 14 June 2011 - 05:35 PM
When it comes down to C++ game programming, I started with SDL. I would go back to it any day too! But if you want to really learn 3D, then I would find a simple opensource graphics library that uses OpenGL and take it apart and learn from the people who already did what you want to do. If you just want to program games, and you want to do it fast, then get a graphics engine. You got to weigh how much you want to do something, whats more important to you? Making games or making graphic engines? It's possible to do both, but in my experience, better aim for one. Think hard, and write the pros and cons of making your own engine, VS using another's. Since I wanted to make games more then remake the wheel, i chose to use another's engine.
The choice is yours, but think hard. And if you decide to make a graphic engine, Personally i say take an opensource engine apart and see how they do it! After you get the basics down first
Jack of all trades
Master of none
Crossbones+ - Reputation: 4065
Posted 15 June 2011 - 12:26 PM
But, compared to SFML, I think it's weak, and it doesn't have near the power. The main advantages of SFML is the graphics engine is on top of OpenGL, which provides HW acceleration for many of the graphics abilities SDL does not provide. When I wanted to rotate an object, using SDL, it was SOOOO SLOW. Using SFML, it was no problem.
SFML has a nice interface for Audio as well, and provides for advanced audio functionality, like spatial positioning, among other things. Input, Network, and fonts are also provided. Also, SFML has a C++ interface, which I prefer.
Personally, if I was going to advise beginners on which rendering engines to use, I would always suggest SFML.
---(Old Blog, still has good info): 2dGameMaking
"No one ever posts on that message board; it's too crowded." - Yoga Berra (sorta)
|
OPCFW_CODE
|
Hmmm, let's see if I can put down a "schematics" of your idea in the way I may be able to understand it:
The USB stick (modified with Manufacturer's Tool) has two parts, LUN1 (CD-ROM) and LUN2 (HD-Like device).
Th PC is booted from the LUN1 (CD-ROM), which contains:grldr
as no-emulation bootsectormenu.lstWHAT ELSE?
The LUN2 (HD-Like device) contains:\I386WHAT ELSE?
The point you seem to have not fully considered/understood is the way NTLDR or SETUPLDR.BIN actually boot.
Until you are in "real mode" (read Text Mode, including BOOT.INI choices or the blue screen SETUP) the information from BIOS (and the ones "faked" by grub4dos) are trusted.
As soon as you "switch" to "protected mode" (read black screen/loading progress bar) ANY info from BIOS (and from grub4dos) is ignored, it simply vanishes in thin air, a new scan of the hardware is performed and unless a given hardware is found AND an appropriate driver for it is loaded, the booting will be aborted, possibly with a 0x0000007b BSOD STOP ERROR.
The exception being that of a driver that is loaded "forcibly".
The only known one to be working is the RAMDISK.SYS driver used in conjunction with the SETUPLDR.BIN coming from SERVER 2003 SP1 or R2 (NOT "gold", NOT SP2).
There may be other possible ways, but it seems like the topic is not of much interest:http://www.boot-land...?showtopic=1507http://www.boot-land...?...c=5512&st=7http://www.boot-land...?...c=5512&st=9
Would you be "game" for this?
The "trick" of the "XP kansas City Shuffle" is simply that of loading through grub4dos mapping a smaller image that appears to the NT booting IDENTICAL to the actual partition on hard disk.
During "real mode" the mapped image is accessed, during "protected mode" the real partition is loaded.
This allows for two newish things:
1) booting from USB on computers with no or defective booting from USB support
2) speed up the booting on computers with USB-2.0-chips-but-only-USB-1.1-speed-support-while-booting, since only the files in the smallish mapped image is loaded with the USB BIOS routines whilst the large number of files on the partition are loaded through the native NT drivers, at USB 2.0 speed
In the particular case you posed, it could have the "advantage"
of having a "fixed" LUN1 booting (but still with the "USB_multiboot" tricks) a "variable" part on LUN2, in other words you would only need to run the Manufacturer's Tool once and setup the DISK SIGNATURE of the LUN2 once, after which you would be free - within limits - to change the contents of LUN2 with ordinary file tools.
Edited by jaclaz, 21 December 2008 - 10:11 AM.
|
OPCFW_CODE
|
#ifndef FLATTENLAYER_CPP
#define FLATTENLAYER_CPP
#include"tensor.cpp"
class FlattenLayer{
public:
void forward(Tensor4D &input, Tensor4D &output);
void backward(Tensor4D &output_grad, Tensor4D &input_grad);
};
void FlattenLayer::forward(Tensor4D &input, Tensor4D &output){
for(int batch = 0; batch < input.size.len1; ++batch){
for(int c_in = 0; c_in < input.size.len2; ++c_in){
for(int h_in = 0; h_in < input.size.len3; ++h_in){
for(int w_in = 0; w_in < input.size.len4; ++w_in){
output.data[batch][c_in * input.size.len3 * input.size.len4 + h_in * input.size.len4+ w_in][0][0] = input.data[batch][c_in][h_in][w_in];
}
}
}
}
}
void FlattenLayer::backward(Tensor4D &output_grad, Tensor4D &input_grad){
int c_out, h_out, w_out;
for(int batch = 0; batch < output_grad.size.len1; ++batch){
for(int c_in = 0; c_in < output_grad.size.len2; ++c_in){
c_out = c_in / (input_grad.size.len3 * input_grad.size.len4);
h_out = c_in % (input_grad.size.len3 * input_grad.size.len4) / input_grad.size.len4;
w_out = c_in % (input_grad.size.len3 * input_grad.size.len4) % input_grad.size.len4;
input_grad.data[batch][c_out][h_out][w_out] = output_grad.data[batch][c_in][0][0];
}
}
}
#endif
|
STACK_EDU
|
In honor of the upcoming olympics, I figured I’d write a post highlighting something that JK, Bert, and I came up with in the process of writing our book on heavy tails.
One of the topics that is interwoven throughout the book is a connection between “extremal processes” and heavy-tails. In case you’re not familiar with extremal processes, the idea is that the process evolves as the max/min of a sequence of random variables. So, for example,
Of course, the canonical example of such processes is the evolution of world records. So, it felt like a good time to post about them here…
What’s the connection between heavy tails and world records?
The New York Times has a great visualization of world records that helps to make this clear… You should click on over to it to play around, but the following screenshot highlights the point I want to make. It shows the progression of track running world records. (On a related note, I’m very curious to see the results of the LetsRun.com clean/dirty poll for world records.) In any case, though, in the plot below shows the pace (sec/100m) that each record decreases for all running track events (each line corresponds to a different event, in order of distance except for the 100/200, which are on top of each other).
When you look at this, you can’t help but notice that the times between improvements look sort of heavy-tailed. Basically, there are lots of periods where the record changes frequently, but there are also others where the records are unchanged for long times.
In fact, this has been observed and tested empirically in a large number of contexts; for example, records for rainfall, earthquakes, and other extreme events. You can find lots of papers in the literature identifying such things… Very commonly, the times between “records” are, in some sense, heavy-tailed.
Coming across this, it is of course natural to try and come up with a theoretical “explanation” for the observations. We looked around and couldn’t find one, so we tried to derive one ourselves… Indeed, we did find a simple theoretical “explanation” for the connection, which I’ll prove for you in the following.
To formalize the setting, let’s consider the following. Suppose we observe a sequence of i.i.d. random variables, with distribution
Let denote the instant of the ‘th record, i.e., and for For let denote the time between the st record and the th record.
Then, we can prove the following theorem, which I think is new…
Theorem 1 Suppose that is continuous. For any is heavy-tailed, with
Note that this is a somewhat delicate situation that we’re trying to study, because is not stationary with respect to Indeed, you should expect that as the record gets bigger, the time it takes to break it gets larger.
Also, a fun observation about the theorem is that it isn’t even required that have infinite support for to be heavy-tailed! …so looking at records can create heavy tails from things that are extremely light-tailed.
I think the proof is kind of interesting, so I figure it’s nice to include the idea of it here…let me know if you can find a simpler argument!
The first part of the proof is to show that the distribution of is insensitive to the random variables we’re considering, i.e., to > In particular, the following shows that we may assume that is exponentially distributed with no loss of generality, which makes things a lot easier!
To see this, define the function Note that is simply the cumulative hazard function corresponding to the distribution The key of the argument is to show that the random variable is exponentially distributed with mean 1. This actually follows easily from the (standard) fact that is a uniform random variable over
Now, Let Clearly, is an i.i.d. sequence of exponential random variables. Moreover, since is non-decreasing, records of the sequence coincide with those of the sequence Thus, for the purpose of studying the time between records, we may assume without loss of generality that is an exponential distribution with mean 1.
Now, using the above, we can study the distribution of the records in a much simpler setting. What we’ll show is that the th record has an Erlang distribution with shape parameter and rate parameter 1 (i.e., is distributed as the sum of i.i.d. exponential random variables, each having mean 1). To do this, we’ll proceed inductively. Clearly, the above claim is true for since Assume that the claim is true for some Note that for
where is exponentially distributed with mean 1, and independent of From the memoryless property of the exponential distribution, it now follows that which implies that in distribution. This proves our claim that has an Erlang distribution.
We are now ready to analyze the tail of Once again, we proceed inductively, and first consider the case Note that conditioned on the value of is a geometrically distributed with
Therefore, unconditioning with respect to
Making the substitution we get
It therefore follows that
Next, we assume that for some and analyze the tail of Recall that in distribution, where is exponentially distributed with mean 1, and independent of Therefore, we can think of as the time until a new sample exceeds Note that the time until a new sample exceeds is distributed as Moreover, conditioned on a new sample exceeding the probability that it exceeds equals
Note that the above calculation exploits the memoryless property of the exponential distribution, and the fact that and are i.i.d. Thus, when a new sample exceeds it also exceeds (and thus sets a new record) with probability 1/2. Therefore, is simply distributed as a geometric random sum of i.i.d. random variables, each distributed as i.e.,
in distribution, where is an i.i.d. sequence of random variables with the same distribution as and is a geometric random variable independent of with success probability 1/2. Finally, since is assumed to be regularly varying (and therefore subexponential, which I’ve talked about in an earlier post), we have
which proves our desired induction step.
|
OPCFW_CODE
|
Working with plots
There are various methods defined on the
Plot type. We will cover a few of
them here, but consult the (forthcoming) API docs for more exhaustive coverage.
SyncPlot both have implementations of common Julia methods:
size: returns the
layoutattributes in the plot's layout
copy: create a shallow copy of all traces in the plot and the layout, but create a new
All exported functions from the plotly.js
API have been
exposed to Julia and operate on both
SyncPlot instances. Each of
these functions has semantics that match the semantics of plotly.js
In PlotlyJS.jl these functions are spelled:
restyle!: edit attributes on one or more traces
relayout!: edit attributes on the layout
addtraces!: add traces to a plot at specified indices
deletetraces!: delete specific traces from a plot
movetraces!: reorder traces in a plot
redraw!: for a redraw of an entire plot
There are also the two unexported (but still callable) methods from plotly.js
extendtraces!: Extend specific attributes of one or more traces with more data by appending to the end of the attribute
prependtraces!: Prepend additional data to specific attributes on one or more traces
When any of these routines is called on a
SyncPlot the underlying
object (in the
plot field on the
SyncPlot) is updated and the plotly.js
function is called. This is where
SyncPlot gets its name: when modifying a
plot, it keeps the Julia object and the display in sync.
For more details on which methods are available for each of the above functions consult the docstrings or (forthcoming) API documentation.
Be especially careful when trying to use
restyle! to set attributes that
are arrays. The semantics are a bit subtle. Check the docstring for details
A common task is to construct subpots, or plots with more than one set of axes. This is possible using the declarative plotly.js syntax, but can be tedious at best.
PlotlyJS.jl provides a conveient syntax for constructing what we will call regular grids of subplots. By regular we mean a square grid of plots.
To do this we will make a pun of the
hvcat functions from
Base and leverage the array construction syntax to build up our subplots.
Suppose we are working with the following plots:
p1 = Plot(scatter(;y=randn(3))) p2 = Plot(histogram(;x=randn(50), nbinsx=4)) p3 = Plot(scatter(;y=cumsum(randn(12)), name="Random Walk")) p4 = Plot([scatter(;x=1:4, y=[0, 2, 3, 5], fill="tozeroy"), scatter(;x=1:4, y=[3, 5, 1, 7], fill="tonexty")])
If we wanted to combine
p2 as subplots side-by-side, we would do
which would be displayed as
If instead we wanted two rows and one column we could
Finally, we can make a 2x2 grid of subplots:
[p1 p2 p3 p4]
Right now saving figures to a file only works when using the
The following syntaxes are currently supported without any other packages
# save to html. `js` is a keyword argument that specifies how plotly.js can be # included. Supported values are `:local`, `:remote`, `:embed`. See docstring # for more details savefig(sp::ElectronPlot, "output_filename.html"; js::Symbol) # save svg savefig(sp::ElectronPlot, "output_filename.svg")
Other routines are available for saving figures, but they require the independent installation of various Julia packages.
If you have installed Rsvg.jl
you can use the following routines:
PlotlyJS.savefig(sp::ElectronPlot, "output_filename.pdf") PlotlyJS.savefig(sp::ElectronPlot, "output_filename.png") PlotlyJS.savefig(sp::ElectronPlot, "output_filename.eps")
Note that the pdf and eps (not png) export from this function this case will be a true vector image. This allows you to have infinitely zoomable quality. This is the recommended way to obtain a pdf of your plot, but comes with the extra step of installing Rsvg.jl via:
If you have ImageMagick.jl properly installed in your Julia installation you can also do the following:
savefig_imagemagick(sp::ElectronPlot, "output_filename.png") savefig_imagemagick(sp::ElectronPlot, "output_filename.pdf") savefig_imagemagick(sp::ElectronPlot, "output_filename.jpeg")
Please note that the maximum DPI resolution for any of these formats is 96.
To get true vector quality pdf files, we recommend using the Rsvg backend.
There is one more routine you can use to save a figure to a file. This routine
requires that you have properly installed the cairosvg
python package and that the
cairosvg command is on your PATH.
cairosvg can be found, you can use the following routines:
PlotlyJS.savefig_cairosvg(sp::ElectronPlot, "output_filename.png") PlotlyJS.savefig_cairosvg(sp::ElectronPlot, "output_filename.pdf") PlotlyJS.savefig_cairosvg(sp::ElectronPlot, "output_filename.ps")
|
OPCFW_CODE
|
This site is designed to explore signal propagation of 20m via analog SSTV. Automated signal quality characterization can be performed via image analysis and multiple samples from the same station (over time and assuming consistent power/antenna orientation) can indicate propagation trends. This site classifies images as P5-P0 by evaulating both JPEG image compressibility and the number/frequency of discrete colors. Noisy images typically don't compress as well as clean images and tend to contain a significant number of infrequently used colors. While this algorithm produces decent results for its simplicity, it favors images with single color backgrounds and a semi-noisy P2 with a solid backround may be classified as P3 due to better JPEG compressibility of the background).
To mirror images to your own site, there are there static names available:
kd0ofn.qsl.net/last-any.jpg: the last image received, regardless of quality kd0ofn.qsl.net/last-good.jpg: the last P3-P5 image received kd0ofn.qsl.net/last-auth.jpg: last P3-P5 for 10 mins, then last anything
The receiving pipeline is an IC-7410 via built-in USB Audio device to a customized SSTVENG sample app that stores images as lossless PNGs with metadata added for:
Start Time (epoch seconds) Duration (seconds) SSTV Mode Callsign (FSKID encoded) USB Sampling Rate (hz) Correction to Sampling Rate (PPM) Auto-Slant Correction Applied (PPM)The PNGs are stored on the shared network drive of a Linux server. The server uses incrond (cronjob for inode/directory change) to trigger the processing pipeline. The file is converted from PNG to JPG at 85% quality and the P5-P0 analysis performed. The attributes above are re-encoded into the JPG as a single comment via comma separated key=value pairs. If you extract the comments from an images, it will look like:
FSKID works great when used. Have experimented with optical character recognition (OCR) to identify callsigns with limited results. Will likely include as a "callsign guess" in some future version. Eventual goal is capturing enough callsigns to use as a proxy for location to perform more sophisticated propagation trend analysis.
Lots of folks have strong feelings regarding auto-slant. Seems the general concern is that enabling auto-slant means those transmitting wont realize their frequency calibration is off. In reality, calibration issues occur on both the transmit and receive sides to some degree and no calibration is perfect. Thus, seems like a slanted image conveys limited actionable data on its own. Instead, this site displays all images auto-slant enabled along with the calibration data of my receiver/software (via WWV @ 10Mhz) and the auto-slant adjustment (in PPM) that was applied to each image.
As an experiment in including metadata with an sstv image, I have added QR codes to my QSO images. The QR code decodes via any standard reader to a URL that redirects here. The format is HTTP://SSTV.ME%23abcdef encoded with H-level ECC (up to 30% errors). SSTV.ME is an alias for kd0ofn.qsl.net, %23 is the hash-sign (HTML anchor) and ABCDEF is a 32-bit (base32) image reference. Following the QR link displays the original image in full PNG fidelity allowing comparison against remotely received images.
First successful validation of QR took place Feb 20 @ ~2p Central during a 50W QSO to K01E (Rhode Island) also received by VE6PW (Calgary, ON) and uploaded to World SSTV Cam. The QSO included two images each with the QR on the first being unreadible but the second being readable. Was using free iOS app Quick Scan (general QR reader) on an iPhone 6S.
|
OPCFW_CODE
|
I had a class last week and I was 10 minutes late. I missed part of the class, and I want to write an email to my professor to arrange a time to meet.
I prepared this email:
I’ve missed part of the previous session and this part is not clear for me. Could I meet up with you this week so you can explain it to me?
Is it appropriate?
I would like to take a different perspective than Buffy:
You were late (maybe even disrupted the course when entering the room) and missed something. Now you want additional time from your teacher to catch up something which happened in the first 10 minutes - which will take ~5-15 minutes of the working time of your teacher. Multiply this by 100 students and 5 courses per week, and you will spot the problem ;-).
Therefore, I would suggest to try everything you can do to catch up on your own. Ask other students. Use books. If you invested >3h without success, you can still write this e-mail explaining what you already understood, where you struggled and at which point you need specific help.
This will show your professor you are really engaged and makes it easy to answer your question within seconds. Maybe (s)he will ask you "just to talk a few minutes after the next lecture" which is also very time effective.
The behavior also depends a bit on your local student-teacher-relationship. In my course, I would not mind if you just approach me after class, you will receive a little (friendly but sarcastic) remark about being late, and get an answer (and I'm happy that someone is trying to learn something).
Perfect question for office hours! Go and wait for your turn to ask.
Yes, it is appropriate, though you will want to apologize for being late. Most professors, good ones anyway, value questions and the students who ask them. If the prof in question holds regular office hours that would be the most appropriate time to ask.
I don't think there is any "special" way to ask. What you suggest seems fine to me.
You can also try to get up to speed on the topic before you meet, using text books and the like. Or discussions with fellow students.
I was once thought to be very smart because I asked a lot of questions. On the other hand, my mother thought I was a "pain" because I asked a lot of questions. But she wasn't a professor.
No, as a general rule, such mail is not appropriate. I would not be happy to receive such email. There are reasons courses are taught in classes of multiple (many) students, and that is, the professor's time is much more valuable than students'.
There may be exceptions to the above rule. For example, if you have been late to some widely known reason that affected many people (e.g. snowstorm, public transport breakdown, etc.), then the professor may be more generous (but I would not be surprised if he asks for multiple students that missed the class to arrange a single meeting).
On the other hand, if a) people are habitually late to this professor's class, or b) you were late multiple times, don't even think about this, because you may get pretty bad reaction. Do the math, if there are 100 people in the class and only 5% are late and want extra time with professor, it can easily add up to a burden that is non-negligible.
|
OPCFW_CODE
|
Plethysm of $S^3(S^2V)$ as $\mathfrak{sl}_3(\mathbb{C})$-module
I have asked this question in MSE before, but have not got any answer. So here I am asking it again with some more detail.
I believe that the following sequence of $\mathfrak{sl}_3(\mathbb{C})$-modules is exact:
$$0\to\mathbb{C}\to V\otimes\wedge^2V\stackrel\phi\to S^2 V\otimes S^2(\wedge^2 V)\stackrel\psi\to S^3(S^2V)\stackrel{N}\to S^6V\to 0$$
where $V=\mathbb{C}^3$ and $S^2$ denotes the symmetric square. The exactness of this sequence can be shown easily using eigen value diagram. But I want to show the exactness explicitly by constructing the maps and showing exactness of those maps.
Let's call the third and fourth map $\phi$ and $\psi$ respectively, the fifth map is the natural map (call it $N$). I also know what $\psi$ is, it is given by the followig $$\psi ((s\cdot t)\otimes (u\wedge v)\cdot(w\wedge z))=((u\cdot w)\cdot(v\cdot z)-(u\cdot z)\cdot(v\cdot w))\cdot (s\cdot t)$$
and I have checked that $\operatorname{im}\psi=\ker N$. But I am not sure what $\phi$ is.
So my question is, how to define $\phi$ so that $\ker\phi$ is isomorphic to $\mathbb{C}$ and $\psi\circ\phi$ is zero i.e. $\operatorname{im}\phi\subset\ker\psi$ (the other direction $\ker\psi\subset\operatorname{im}\phi$ follows by computing the dimensions which turns out to be equal) and of course $\phi$ should be $\mathfrak{sl}_3(\mathbb{C})$-linear.
Under the action of $\mathfrak{sl}_3(\mathbb{C})$, $\Lambda^2V\simeq V^\ast$. This isomorphism can be written explicitly in terms of the invariant volume form $\omega\in\Lambda^3V\simeq \mathbb{C} $, by $(v_1,v_2\wedge v_3)=v_1\wedge v_2\wedge v_3=\lambda \omega$ (in other words, for $\theta\in V^\ast$, $\theta\mapsto \iota_\theta\omega\in\Lambda^2V$).
Thus $V\otimes\Lambda^2V\simeq V\otimes V^\ast = \mathfrak{gl}_3(\mathbb{C}) = \mathfrak{sl}_3(\mathbb{C})\oplus \mathbb{C}$. Moreover, $S^2V\otimes S^2(\Lambda^2 V)\simeq S^2V\otimes S^2 V^\ast = \mathfrak{gl}(S^2V)$. The latter space contains a subalgebra isomorphic to $\mathfrak{sl}_3(\mathbb{C})$, which is spanned by the representation matrices of the induced action on $S^2V$.
Your map $\phi$ is then given by composing the following operations: first use the isomorphism $\Lambda^2V\simeq V^\ast$, then remove trace of the resulting operator by $A\mapsto A-\text{Tr}(A)/3 I$, then send the resulting matrix to its induced matrix on $S^2V$, and finally use the isomorphism $\Lambda^2V\simeq V^\ast$ to obtain an element of $S^2V\otimes S^2(\Lambda^2 V)$.
The kernel property is clear, because $S^3S^2V$ contains no submodule isomorpic to $\mathfrak{sl}_3(\mathbb{C})$. (This follows from highest weight calculations).
Thanks. I was wondering, is it possible to prove the last line without using highest weight calculations (which I was trying to avoid)?
|
STACK_EXCHANGE
|
|04-24-06, 10:20 PM||#1|
Join Date: Apr 2006
Trouble installing audio driver on FC5_64
Sorry if this is not the correct place for this - I couldn't find a place to specifically ask about the nForce audio drivers for Linux.
I am having trouble installing the audio driver that is part of the nForce drivers installer.
I'm running Fedora Core 5 (64-bit) updated to kernel version 2.6.16-1.2096_FC5 on an ASUS A8n-SLI Premium with an AMD Athlon64 X2.
I successfully installed the LAN driver from the nForce drivers, which had to be rebuilt due to the kernel update. I also successfully installed an
NVidia graphics driver for my XFX GeForce 6800Xtreme from the Livna repository (not from the NVidia installer.)
The driver compiles successfully, but I get the following messages near the end of the install log:
-> Kernel module compilation complete.
-> Testing kernel module:
-> Copying test module ./nvsound/main/nvsound.ko to
ERROR: Unable to load the kernel module 'nvsound.ko'. This is most likely
because the kernel module was built using the wrong kernel source files.
Please make sure you have installed the kernel source files for your
kernel; on Red Hat Linux systems, for example, be sure you have the
'kernel-source' rpm installed. If you know the correct kernel source
files are installed, you may specify the kernel source path with the
'--kernel-source-path' commandline option.
-> Kernel module load error: FATAL: Error inserting nvsound
(/lib/modules/2.6.16-1.2096_FC5/kernel/sound/oss/nvsound.ko): Unknown symbol
in module, or unknown parameter (see dmesg)
-> Testing completed.
Any idea what is going wrong here? Alternately, is there a source of precompiled drivers for the NForce drivers? Livna only has the graphics
|Thread||Thread Starter||Forum||Replies||Last Post|
|[9800gt] Xorg black screen after installing nvidia driver||HacKurx||NVIDIA Linux||1||06-07-12 01:29 PM|
|WINDOWS 8 RELEASE PREVIEW ACCEPTS INSTALLING WINDOWS 7 DRIVER 301.42||News||Archived News Items||0||06-01-12 05:30 AM|
|Need Help Installing NVIDIA Tesla M2070Q in Linux RHEL5||Ferianto85||NVIDIA Linux||0||05-18-12 08:35 PM|
|Getting the proprietary nvidia driver to run with Debian 3.0 r0 (woody)||Katchina404||NVIDIA Linux||9||01-12-03 08:49 AM|
|Trouble with SuSE 8 & NVIDIA latest driver||bughill||NVIDIA Linux||3||08-20-02 12:31 PM|
|
OPCFW_CODE
|
brajeshwar.com: If you belong to the “Linux Community” bandwagon, you may just have a different opinion on the fact that there are lessons to be learned from Windows and Mac.
the-gay-bar.com: The way software is handled is one of the aspects where the three major operating systems differ and it is somewhat of a religious war (but that happens a lot when it comes to operating systems).
engadget.com: If you've been waiting for that Minority Report-style interface to really come to fruition, you can finally exhale. One of the science advisors from the Steven Spielberg film -- along with a team of other zany visionaries -- has created an honest-to-goodness, real-world implementation of the computer systems seen in the movie.
codeghar.wordpress: Sun makes its own Unix operating system, called Solaris. In an effort to be more open source oriented, Sun is releasing parts of Solaris as OpenSolaris. Here I will try to find reasons why OpenSolaris would be a good choice.
Aaron Aseigo: Microsoft recently conceded the Vista struggle and is now refocusing it's market on the yet-to-materialize Windows 7. Sure, Vista is slow and piggish and has some rather cute ideas of what "bling" means. People now see both Linux and Mac as viable options and no longer feel beholden to trudge along besides Microsoft and their pace.
zdnet.com/Murphy: All three of the main OS candidates: MacOS X, Novell’s “Sousa Linicks” and Microsoft’s Windows Vista run on pretty much the same hardware and run broadly comparable applications suites, so the decision must ultimately come down to which one best balances cost versus productivity in your applications area.
c0t0d0s0.eu: In the last few weeks i´ve heard one sentence quite often: "Why you you still develop Solaris? You should contribute to Linux!" from people administering Linux systems. And you could read at other places, that Solaris is irrelevant, that there is nothing worth of mentioning it or even for an integration to Linux. Just think about the Zemlin quotations!
phoronix.com: OpenSolaris 2008.05 had given a new face to Solaris through a vastly improved desktop experience. While OpenSolaris 2008.05 was not perfect, it was quite pleasant and a very nice first step. Sun Microsystems is now preparing for the release of OpenSolaris 2008.11 to incorporate their latest set of changes.
royal.pingdom.com: This post is about the desktop operating systems that fly under the radar of most people. We are definitely not talking about Windows, Mac OS X or Linux, or even BSD or Solaris. There are much less mainstream options out there for the OS-curious.
techradar.com: While you're cursing the slow boot times of your modern PC or wondering why you can't have 50 applications open at once without the system taking a hit, cast your mind back to the operating systems of old. Here are five operating systems we fondly remember.
|
OPCFW_CODE
|
Book a date in your diaries folks, if you’re in Belfast, or can get to Belfast on the 16th March, you now have somewhere to be: foss means business, big thanks to Ciaran O’Riordan & co for setting this up. Others have also blogged on this, so here’s some linkage to help google 🙂 DW, Paul Gregg, Indymedia Ireland, Bagel Belly Blog, eWeek
I have an old laptop (PII-400Mhz, 196Mb RAM, it’s small & light, which is why I still keep it around), it used to run Debian Woody just fine, I wiped it recently and dropped Debian Sarge on to it, it just seemed sluggish, I reckoned it couldn’t cope with all the new graphical schmuck in the new GNOME release.
One of the things I liked about Sawfish/Sawmill was the pack window feature, I can be quite keyboard orientated and liked to be able to bounce windows round without have to jump for the mouse.
Well, I tried Ion, FluxBox, BlackBox etc, but discovered that OpenBox has a MoveToEdgeEast (& West, South & North!), which I quickly mapped to Ctrl-Alt-Right etc, and hey presto, it was like being at home again…
All I then needed was some kind od task bar, fbpanel seems to fit the job so far!
Both are in Debian, so no hacking around building from source 🙂
I was watching CSI last night, the song they opened & closed the show was quite cool and hit my current mood spot on (sound wise, not quite lyric wise..), so I did some googling & discovered that it was a cover of a Tears for Fears song, Mad World, the cover was by Gary Jules, you can hear & see it here.
Why why why can’t you burn ISO images without having to BUY tools…well, you don’t, if you know where to look
I needed to burn a DVD ISO today, and ISORECORDER doesn’t do this, so I had a quick search, and there are a few options:
My dad has a Dell Inspiron 6000, which I use every now and then, one of the things that has really bugged me about it jaggy images in IE (they are being scaled or something), I did some poking about today (we’re are babysitting at there house..) and found this:
I found it here: http://forums.us.dell.com/supportforums/board/message?board.id=insp_video&message.id=79808
He had is DPI set to 120.
|
OPCFW_CODE
|
How do I specify the key exchange method in OpenSSH?
I'm trying to understand how OpenSSH decides what key exchange method to use. What I don't see is how to specify the method. In addition, I know every ssh server/client is required to support at least two methods: diffie-helleman-group1-sha1 and diffie-helleman-group14-sha1, but its unclear to me how the server and client to choose between the two, given that each program must support both. I would think that in every case diffie-helleman-group14-sha1 is used since it has the larger MODP group.
I can specify the cipher and the MAC:
ssh <user@ip> -c aes256-cbc -m hmac-sha1
but looking in the manpages I don't see an equivalent option for the key exchange. Can someone 1) tell me a way to specify this 2) explain how ssh chooses the method? (I suspect it always picks the first in the list, meaning the second is never, ever selected)
OpenSSH 5.7 introduced the KexAlgorithms option:
ssh(1)/sshd(8): add a KexAlgorithms knob to the client and server
configuration to allow selection of which key exchange methods are
used by ssh(1) and sshd(8) and their order of preference.
So if you have at least that version, you should be able to pass -oKexAlgorithms=<kex_list> to specify your preferences.
AFAICT, the OpenSSH client won't actually print out what kex algorithm was negotiated, but if you pass -vv and look at the kex_parse_kexinit lines, you can see the list of kex algorithms (as well as lists of encryption, MAC, etc. algorithms) supported by the client, followed by the lists supported by the server. In theory, the client will select the first algorithm in its list that also appears in the server's list (i.e., the selection favors the client's preference). So for client list a,b,c and server list c,b, the client chooses algorithm b.
Thank you, I missed that in the man pages. My second question was more along the lines of 'why is there even a -group1- option when both group14 and group1 are REQUIRED by the RFC? The only reason I see that group1 would get selected is if someone manually specified it.
According to RFC 4253, Section 8.1, "[group1] MUST be supported for interoperability as all of the known implementations currently support it." So mainly to ensure backward compatibility at the time the RFC was written (2006), but group14 would be selected over group1 in any non-ancient client nowadays.
|
STACK_EXCHANGE
|
Passages validating Goethe as Nietzsche's Übermensch?
It is believed by some that the closest Nietzsche comes to naming the Übermensch is Goethe. However, in my own readings (which is not comprehensive) I've not found any solid evidence. What is generally the basis for the thinking that Goethe was the closest to Nietzsche's ideal?
Specific passages and critical studies would be ideal.
I think this question - although scholarly in nature - and answer are a perfect example of how this site should operate. A specific, accurate, answerable question met by a clear, balanced and informed answer. Shows that that's possible even in philosophy
Nietzsche does explicitly name a few people in Will to Power that he thinks rank among the greatest human beings that have ever lived, and he puts them in this category for traits very similar to those that he ascribes to an Übermensch:
Systematic falsification of history; so that it may provide the proof of moral valuation:
a. decline of a people and corruption;
b. rise of a people and virtue;
c. zenith of a people ("its culture") as consequence of moral evaluation.
Systematic falsification of great human beings, the great creators, the great epochs:
one desires that faith should be the distinguishing mark of the great: but slackness, skepticism, "immorality," the right to throw off a faith, belong to greatness (Caesar, also Homer, Aristophanes, Leonardo, Goethe). One always suppresses the main thing, their "freedom of will"—
—Will to Power, Book Two: 380 (transl. Kaufmann)
He praises Goethe in a number of his works for having been a "free spirit," something that Nietzsche himself expressed desire to be. And again, we recall that the Übermensch is one who embodies the free spirit par excellence.
In Twilight of the Idols, IX §49, Nietzsche writes with regard to Goethe's Dionysian character:
Goethe—…a grand attempt to overcome the eighteenth century through a return to nature, through a going-up to the naturalness of the Renaissance, a kind of self-overcoming on the part of that century…He did not sever himself from life, he placed himself within it…and took as much as possible upon himself, above himself, within himself. What he aspired to was totality; he strove against the separation of reason, sensibility, emotion, will…; he disciplined himself to a whole, he created himself… Goethe conceived of a strong, highly cultured human being who, keeping himself in check and having reverence for himself, dares to allow himself the whole compass and wealth of naturalness, who is strong enough for this freedom; a man of tolerance, not out of weakness but out of strength, because he knows how to employ to his advantage what would destroy an average nature; a man to whom nothing is forbidden, except it be weakness, whether that weakness be called vice or virtue… A spirit thus emancipated stands in the middle of the universe with a joyful and trusting fatalism, in the faith that only what is separate and individual may be rejected, that in the totality everything is redeemed and affirmed—he no longer denies… But such a faith is the highest of all possible faiths: I have baptised it with the name Dionysus.
It's certainly poetic, but in addition to Goethe, Nietzsche also offers up great admiration (most pronounced in his earlier works) for Schopenhauer, a man who was able to reconcile action and contemplation, and voluntarily chose to give up both comfort and happiness in life, courageously confronting the suffering of the human condition in a positive and affirming way. Schopenhauer seemed to Nietzsche to have transformed life's many hardships into a cheerful disposition, a characteristically Übermenschlich activity. He is also quite taken with Rousseau, whom he sees as a revolutionary.
Thus, I remain skeptical of any pursuit to identify the true Übermensch, as per Nietzsche. He wasn't particularly bashful, and it's unlikely that he would have avoided naming someone directly if he thought they really merited the title. It's quite important throughout Nietzsche's work that no one has ever truly achieved the status of Übermensch. It's why he himself found the concept difficult to describe at times. Note that he describes the Übermensch as "belonging to the future": he has never fully existed, neither in the being of Goethe nor of anyone else.
In my opinion, few academics have taken up the question of who is/was the "true" Übermensch because such a question is irrelevant and might, in fact, be harmful, at least to the extent that it detracts from the fundamental point that Nietzsche is trying to make.
As far as other scholarly references, the best one I know of dealing with extra-literary questions like this one is Kaufmann's own Nietzsche: philosopher, psychologist, antichrist. He devotes several sections to a discussion on the Übermensch concept, and mentions Goethe (and the other contenders) specifically.
You might also find this article interesting (if you have access to scholarly journals through a service like Project Muse). I remember reading it myself a while back, and found it quite insightful. Re-reading the abstract, it looks like it might be pertinent to some of your questions.
|
STACK_EXCHANGE
|
Does Lufthansa weigh your carry on luggage?
I will soon be travelling to Europe, flying Lufthansa, and was wondering about the weight of my carry-on luggage.
In my experiences of travelling in Asia, I've never had my carry-on bag weighed, even though I believe it was slightly over the 8kg limit.
Is this different in Europe, or specifically with Lufthansa? Should I be more careful to keep the weight under the limit, to avoid charges?
There's no possibility of a clear "yes/no" answer to this question. Most or all airlines weigh carry-on luggage... at least sometimes, and maybe often. What happens at the check-in counter depends on the check-in clerk (who may feel generous, or stingy, or have received directions or rebuke from the clerk's supervisor), and there are multiple clerks and counters and so on. The most that can be said is: they might. If you're risk averse, be aware of your luggage's weight.
IMO the real answer here is "keep the weight under the limit if you want to avoid charges". Anything else is speculation and/or trying one's luck.
Are you seriously asking "I want to break this airline's rules, can anyone tell me if I will be found out?"
@Luc The question to me seems to be "I don't pay much attention to the weight of my carry-on and it might be a little over 8kg; will this cause problems for me?" That seems to be a reasonable question, and not an egregious attempt to get away with breaking the rules.
The solid gold bars I sometimes fly with fit in a smallish secured briefcase that fits under the seat in front of me, and is usually more of a problem going through security (they show up completely black), or at customs, than on board or at gate. The physical training to carry it well was also pretty intense. ;-)
I've never seen any airline weight carry-on luggage. Some details:
I travel twice a year through Europe, often with Lufthansa;
My carry-on is a regular rucksack ~10 kg.
@WBT: I have had a similar experience with my Rolexes.
Normally, airlines care about the size more than about the weight. Is the size within the acceptable range?
@TT_ You don't fly very often, so the fact that you've not seen something really says almost nothing.
How long is a piece of a string?
Having flown Lufthansa over 20 times this year alone and 100s of times in total here are my experiences.
If your carry on is large you risk a weighing more often.
If the flight is totally full you risk a weighing more often.
If you arrive late at check in you risk a weighing more often.
In reality if you are on time and your carry on is not massively oversize a weighing is actually very rare. It has happened to me 2 times this year and maybe 10 in total. Only 1 time have I been asked to remove something from the bag because it was almost double the allowed the weight.
Many times if the excess weight is small it's ok, barring the full flight challenge where weight becomes a factor for fuel and safety reasons. I assume, but I am not an airliner expert so that part is speculation.
I would also add that the type of luggage should also been taken into account. I have never had a weighing with a backpack, althought it has happened a few times with a suitcase. (I mainly travel with backpacks now for this reason)
It seems like it is also more likely to be checked if the luggage looks heavy.. If you are carrying it easily you are less likely to be checked than if it is rolled on wheels or visibly weighing you down... (with airlines in general, rather than Lufthansa specifically)
I would second this back pack weighing. Pretend the back pack is not heavy, smile and respect the person at the counter and they would just skip weighing it :)
@GertvandenBerg - Anecdotally, I always put the luggage on the scale myself, while making look as effortless as possible. It's really silly and I doubt it helps, but I always think "if it looks like I'm struggling, they'll certainly check. May as well look like Superman and make it look as if it is just 2 lbs!"
Yes. According to Carry-on baggage rules at Lufthansa
For a smooth boarding procedure, more stowage space on board and a
punctual departure, it is essential that your carry-on baggage
corresponds to the regulations. That is why we check your carry-on
baggage against the permitted dimensions, quantity and weight at
departure airports.
And from a detailed Trip Advisor post about Lufthansa carry-on baggage
My experience - YES. It's weighed at check in and can be weighed at
the gate for passengers connecting from a different airline to LH. And
I have seen people sent away to consolidate their carry on weight
down.
Here's what I've done in some cases (chortle chortle). Note you are
also allowed a smaller personal item (e.g. laptop bag). I've had cases
where I have moved a heavier item (e.g. laptop power adapters/chords
or a camera lens) from carry on to personal item in order to make the
weight limit for the carry on, and then moved it back. You can also in
some cases use your jacket pockets. I've done that more so with
airlines that have a 7kg carry on limit (EVA, Jet Airways). As long as
the airline doesn't impose a total weight limit for both carry on and
PI that strategy works.
Suggest you get a set of luggage scales, and before you leave weigh
your bags in certain configurations and plan what to swap from bag to
bag. That way you can remain "legal" weight wise and not get stung.
Occasionally.
In my experience, it happens about in 10%-20% of my LH group flights (including Swiss and Austrian).
I have a pretty big roll aboard suitcase which probably triggers more attention than a smaller piece of carry on. In all cases, taking out a jacket or moving something heavy to my backpack did the trick, but I always make sure I'm not horribly overweight. Anything over 8.5 kg can get you flagged.
They might, but you should not worry too much.
If it is weighted and goes over the limit and the check-in clerk does not let you "pass this time" you have other options.
You can swap items form carry-on luggage to checked-in luggage. You can leave the desk and check-in later with some items in your pockets, etc.
The best advice, however, is to check-in as early as possible and use small (not bulky) carry-on luggage (backpacks are better than suitcases).
You didn't mention if you are going to have checked baggage as well, so my reply is in the case where you don't.
I have travelled with Lufthansa in Europe more than 10 times in the past 3 years, and every time I just had a backpack as a carry-on luggage and a laptop bag as a personal item. Because I did my check-in online I just skipped going to the check-in and straight to the security check. Nobody ever weighed my backpack, even though I am sure sometimes it weighed about 10 KG. Also, nobody said anything when boarding or on the plane.
|
STACK_EXCHANGE
|
Road warrior configuration with proxy arp -- almost there, but not quite!|
David Brodbeck <DavidB,AT,mail,DOT,interclean,DOT,com>|
Wed, 16 Jul 2003 15:20:27 -0400|
I'm trying to get a "road warrior" configuration going between a host on my
network and a laptop, with proxy arp. The machine serving as the host for
the connection is 184.108.40.206 (eth0), and has an additional IP on the
same physical interface of 220.127.116.11 (eth0:0). I'm using
18.104.22.168 as the remotely accessable IP, and 22.214.171.124 as the cIPe
When the connection is made, the two hosts can ping each other on their cIPe
addresses. In addition, proxy arp works to at least some extent; hosts on
the LAN can ping the laptop's cIPe address. However, when I try to access
hosts on the LAN from the laptop, the packets get routed through the
Internet instead of the cIPe tunnel. If I try to manually add a route
("route add -net 126.96.36.199/24 cipcb0") nothing works until I delete it
Here's the options file on that host:
Here's ip-up on the host:
# ip-up <interface> <myaddr> <daemon-pid> <local> <remote> <arg>
# Sample of the ip-up script.
# This is called when the CIPE interface is opened.
# $1 interface the CIPE interface
# $2 myaddr our UDP address
# $3 daemon-pid the daemon's process ID
# $4 local IP address of our CIPE device
# $5 remote IP address of the remote CIPE device
# $6 arg argument supplied via options
# Purposes for this script: set up routes, set up proxy-arps, etc.
# start daemons, logging...
# If this becomes our default route...
#route add default gw $5
# just a logging example
now=`date "+%b %d %T"`
echo "$now UP $*" >> /var/log/cipe.log
# many systems like these pid files
echo $3 > /var/run/$1.pid
# Trigger the key exchange procedure, useful when we're using SOCKS
# This _must_ run delayed and in the background
#(sleep 10; ping -c5 $5) &
# If the system runs gated, tell it what has happened
# The following are just ideas for further consideration
# Interconnect two 10. subnets through the Internet!
# Assuming $4 is in 10.1 and $5 in 10.2
#route add -net 10.2.0.0 netmask 255.255.0.0 gw $5
# Proxy-ARP the peer's address on eth0
arp -i eth0 -Ds $5 eth0:0 pub
# Evil tricks department: masquerade the CIPE peer's /24 network to our IP
#NA=`expr $5 : '\([0-9]*\.[0-9]*\.[0-9]*\.\)'`
#ipfwadm -F -a accept -m -b -S $NA.0/24 -D 0.0.0.0/0
# the usual way for this would be a case selection on $5 or $6, however
# execute anything local
[ -x /etc/cipe/ip-up.local ] && /etc/cipe/ip-up.local $*
And here's options on the laptop:
David Brodbeck, System Administrator
InterClean Equipment, Inc.
3939 Bestech Drive Suite B
Ypsilanti, MI 48197
(734) 975-2967 x221
(734) 975-1646 (fax)
|
OPCFW_CODE
|
Alex Schroeder let me know in a comment on yesterday’s random dungeon post that he’s looking for additional dungeon generation algorithms for Text Mapper. The method I described in that post was really aimed at being simple and easy to remember for humans. What would I do if I were exploiting computer power instead? What does Text Mapper actually need?
The 5- and 7-room dungeon algorithms are already pretty good for mini-dungeons. One thing I noted, though, is that it rarely creates a Jaquayed dungeon, as Jason Alexander describes. The structure is almost always linear with maybe one or two dead-end branches, although I did get two 7-room dungeons with internal loops after several reloads. And since they are mini-dungeons, there’s an obvious underrepresented dungeon type: megadungeons, or at least sprawling underworlds. I did do a couple old posts about random dungeon generators and non-linear dungeon generation, although I think they are unusable for a computer algorithm in their current state. Still, there’s the seed of a couple ideas for new additions to Text Mapper: an underworld wilderness map, a ruined underworld city generator, and a themed dungeon sublevel generator used as part of the underworld city generator or by itself to generate a mid-sized dungeon.
Here are some hypothetical steps we could take for the third and most important generator:
- Start at the Top. Randomly select a theme and basic tunnel structure that includes loops and nodes.
- Each Node is a Substructure. Generate a more specific theme within a node and its own tunnel structure with loops and subnodes in the same way as Step 1.
- Each SubNode is Either a Room or a Room Cluster. Theme determines which (Mazes tend to be more tunnels and isolated rooms, Tombs are like Mazes but with tunnels connecting room clusters, Ruins have more room clusters, and Fortified Areas have shorter, fewer tunnels.)
- Generate a Room Shape and Exits.
- Select Room Theme. For example, Storeroom, Torture Chamber, Kitchen. Specific theme from Step 2 determines list of room themes to select from. Room Theme specifies general room contents, including containers present.
- Add Room Occupant, if any. Determine if occupant starts in a hidden state and what the trigger is (vermin can emerge from crates if disturbed. Spirits can be summoned by touching a holy/cursed object.)
- Add Container Contents, including possible treasure.
- Add Secrets, including secret exits or containers.
- Repeat Steps 4 to 6 for Each Room until SubNode is complete.
- Repeat Step 3 and following steps for next SubNode until all SubNodes in the Node are finished.
- Repeat Step 2 and following steps for the next Node until all Nodes are finished.
This work is licensed under a Creative Commons
(CC BY-NC-SA 4.0) license.
|
OPCFW_CODE
|
Example code works, real world scenarios are unusable
python --version
Python 3.6.7
pip freeze| grep raven
pyravendb==<IP_ADDRESS>
Problem
Any combination of session.store(document) and session.save_changes() no matter where i use with store.open_session() as session: always fail.
Use case
Storing 100-200 documents per second from a single script, estimate many billions of documents to store from 1 source, no way for me to know the total count or size until it is done. I have run the exact script for 19 days storing json to disk and using df I estimate 900GB-1100GB of files, i cannot inspect the directory all ls in bash and scandir in python hang for hours and pin my io rendering the PC useless.
Scenario 1
Using store.open_session() then session.store() and session.save_changes() together on each document during parsing, short lived as prescribed by you.
This fails before the first 10000 documents (above error), and as the requestor I cannot request deltas or resume. Results in RuntimeError: can't start new thread with this exact stack trace;
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/store/document_session.py", line 280, in store
entity_id = self._document_store.generate_id(self.database, entity)
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/store/document_store.py", line 130, in generate_id
return self.generator.generate_document_key(db_name, entity)
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/hilo/hilo_generator.py", line 77, in generate_document_key
return generator.generate_document_key(entity)
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/hilo/hilo_generator.py", line 103, in generate_document_key
return value.generate_document_key()
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/hilo/hilo_generator.py", line 132, in generate_document_key
self.next_id()
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/hilo/hilo_generator.py", line 148, in next_id
self.get_next_range()
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/hilo/hilo_generator.py", line 156, in get_next_range
result = self._store.get_request_executor().execute(hilo_command)
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/store/document_store.py", line 104, in get_request_executor
self.conventions))
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/connection/requests_executor.py", line 66, in create
executor.start_first_topology_thread(urls)
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/connection/requests_executor.py", line 77, in start_first_topology_thread
self._first_topology_update.start()
File "/usr/lib/python3.6/threading.py", line 846, in start
_start_new_thread(self._bootstrap, ())
reproduce it with this simplified version of my real use case
from pyravendb.store import document_store
class Domain(object):
def __init__(self, domain):
self.domain = domain
store = document_store.DocumentStore(urls=["http://localhost:8080"], database="scans")
store.initialize()
for n in range(0, 15000): # simulate fetching 15k documents
domain = '%s.com' % n
with store.open_session() as session:
session.store(Domain(domain))
session.save_changes()
Scenario 2
Using store.open_session() just before i request the download, and session.save_changes() after all downloaded content is parsed and calling session.store() often (rather than scenario 1 where it is called during parsing with save_changes). Results in:
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/store/document_session.py", line 315, in save_changes
self.increment_requests_count()
File "/home/kde/workspace/github/open-net-scans/venv3/lib/python3.6/site-packages/pyravendb/store/document_session.py", line 395, in increment_requests_count
more responsive application.".format(self.conventions.max_number_of_request_per_session))
pyravendb.custom_exceptions.exceptions.InvalidOperationException: The maximum number of requests (30) allowed for this session has been reached. Raven limits the number of remote calls that a session is allowed to make as an early warning system. Sessions are expected to be short lived, and Raven provides facilities like batch saves (call save_changes() only once). You can increase the limit by setting DocumentConventions. MaxNumberOfRequestsPerSession or MaxNumberOfRequestsPerSession, but it is advisable that you'll look into reducing the number of remote calls first, since that will speed up your application significantly and result in a more responsive application.
reproduce it with this simplified version of my real use case
from pyravendb.store import document_store
class Domain(object):
def __init__(self, domain):
self.domain = domain
store = document_store.DocumentStore(urls=["http://localhost:8080"], database="scans")
store.initialize()
with store.open_session() as session:
for n in range(0, 310000): # simulate fetching 15k documents
domain = '%s.com' % n
session.store(Domain(domain))
session.save_changes()
Catch 22
If save_changes is capped at ~10k uses, and the maximum number of requests (30) allowed for this session - then pyravendb seems to have an arbitrary hard coded soft-limit of 300k documents per python process. Looking at the source, granted i am lacking complete understanding of pyravendb and know quite a bit about requests and urllib, it seems there is poor requests session usage in pyravendb and it is heavily abstracted and hard to understand some of the design goals...
My expectations
Can you take a look at this perceived 300k arbitrary hard coded soft-limit?
You closed the issue but I will explain scenario 1 is designed that way no more than 30 requests to the server in one session, that is why the session.save_changes need to be out from the loop and after that closed the session.
scenario 2 I didn't understand, did you have problems with save_changes after add 300+k documents?
Opened in YouTrack with more info and more scenarios. So far scenario 2 is best but it's also hard coded limiting far below ravendb and host capabilities.
|
GITHUB_ARCHIVE
|
Territory transfers
closes #862, based on #877
This PR introduces a transfer option in the territory context menu. It is only visible for founders:
On click, a modal opens up with an user input:
If the button is clicked, the user is asked to confirm:
If confirmed, the territory is updated. The update is logged to a new AuditLog table.
This table is queried during notifications to notify the new owner. The notification looks like this:
TODO:
[x] self-review
[x] ~test if revenue notifications in the past now show up for new owner. if so, not ideal but I think okay for MVP if there is no trivial solution (which I think there is not)~ (they will show up but that's fine imo)
[x] ~make sure it is okay to use sub name as notification id. does this not conflict with revenue notifications?~ (no, it doesn't, id is only used to resolve fields afaik)
[x] ~make sure index on AuditLog is good~ (using new schema now with btree index on TerritoryTransfer(created_at, "newUserId"))
[x] push notification
[x] notification setting
An alternative: create a AuditEvent for each user involved in the transfer.
I thought about this but I decided against it since the problem with that was the decision to which row I then link to from TerritoryTransfer.eventId. It also seemed to break an idea about this polymorphism approach: every generic event (row in AuditEvent table) has a corresponding specialized event (for example, row in TerritoryTransfer)[^1]. But I agree, the array is weird. I might have overestimated the importance of this assumed idea of surjection.
[^1]: The projection of foreign keys to AuditEvent needs to be surjective
You could have a TerritoryTransfer for each party in the transfer.
The model would get ride of to/from and just have a boolean saying whether it's sender or something.
Another issue with transfers: what if I don't want to be transferred a territory but someone transfers one to me?
Another issue with transfers: what if I don't want to be transferred a territory but someone transfers one to me?
Then they paid for a territory and gifted it to you. You could let it expire or transfer it back assuming you know the sender. I did show the sender in a previous commit but it looked like too much information and I didn't think of it to be important information.
Afaik, there are no financial downsides to receiving a territory but maybe reputational risks since we don't show who owned it before—to stackers, it looks like you always owned it. But I think that's out of the scope for MVP, I don't think this feature will be abused in some way due to the financial disincentive or even used much.[^1]
However, I think a confirmation dialog would be easy to implement: Instead of immediately updating the Sub table, we only insert a TerritoryTransfer which is not confirmed yet. It can be confirmed or denied by the receiver via the notification. Only then is Sub.userId updated. So if you think we should already add confirmation here, I can include it.
[^1]: But stackers always surprise me with their ideas. Maybe they start selling territories to each other like with pins?
No I don't think it's worth adding a confirmation dialog now, but I wanted to flag it.
Changes since last review:
Rename label from user to stacker (https://github.com/stackernews/stacker.news/pull/878/commits/11888fb036d29f12d8e37c54949b4b7c3f60d5cc)
More space between cancel and confirm button (https://github.com/stackernews/stacker.news/pull/878/commits/bb094fe73acc3f8197417e2154b887d5bb236a68)
Remove AuditEvent table (https://github.com/stackernews/stacker.news/pull/878/commits/f2d425155add729581b92a5265b944e863f30393)
Fix territory transfer notification id conflict (https://github.com/stackernews/stacker.news/pull/878/commits/1ab6c58e0a6134373a389e346e217ba9eeeb7ad6)
|
GITHUB_ARCHIVE
|
using System;
using System.Linq;
using SOATApiReact.Model;
namespace SOATApiReact.Data
{
public static class DataInitializer
{
public static void Initialize(DataContext context)
{
context.Database.EnsureCreated();
if(context.Users.Any())
{
return;
}
var Users = new User[]
{
new User(){Document=1019049848, Name="Juan Camilo", Surname="Beltran Herrera", DocumentType=DocumentType.CC, Genre="M"},
new User(){Document=1022458798, Name="Alan David", Surname="Avila", DocumentType=DocumentType.TI, Genre="M"},
new User(){Document=1019145325, Name="Esteban Camilo", Surname="Rodriguez", DocumentType=DocumentType.CC, Genre="M"},
new User(){Document=28787024, Name="Lourdes", Surname="Herrera", DocumentType=DocumentType.NIT, Genre="F"},
new User(){Document=101945865, Name="Breyner", Surname="Garzón Peña", DocumentType=DocumentType.DIP, Genre="M"},
new User(){Document=102215743, Name="Viviana", Surname="Avila", DocumentType=DocumentType.DIP, Genre="F"},
};
context.Users.AddRange(Users);
context.SaveChanges();
var Vehicles = new Vehicle[]
{
new Vehicle() {Plate="AXF152", Axles=2, Color="Rojo", Engine="2000CC"},
new Vehicle() {Plate="BJS452", Axles=2, Color="Blanco", Engine="2400CC"},
new Vehicle() {Plate="KJ4548", Axles=2, Color="Negro", Engine="2600CC"},
new Vehicle() {Plate="LKE154", Axles=3, Color="Verde", Engine="1600CC"},
new Vehicle() {Plate="KPD154", Axles=2, Color="Gris", Engine="2200CC"},
new Vehicle() {Plate="AHG152", Axles=2, Color="Negro", Engine="2000CC"},
new Vehicle() {Plate="BK4541", Axles=3, Color="Azul", Engine="2000CC"},
};
context.Vehicles.AddRange(Vehicles);
context.SaveChanges();
var Soats = new SOAT[]
{
new SOAT() {Owner = Users.First(u => u.Name.Contains("Alan")), Vehicle = Vehicles.First(v => v.Plate.Equals("AXF152")), Year=new DateTime(2018, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Juan")), Vehicle = Vehicles.First(v => v.Plate.Equals("BJS452")), Year=new DateTime(2017, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Alan")), Vehicle = Vehicles.First(v => v.Plate.Equals("AXF152")), Year=new DateTime(2019, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Esteban")), Vehicle = Vehicles.First(v => v.Plate.Equals("LKE154")), Year=new DateTime(2017, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Lourdes")), Vehicle = Vehicles.First(v => v.Plate.Equals("BK4541")), Year=new DateTime(2019, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Lourdes")), Vehicle = Vehicles.First(v => v.Plate.Equals("BK4541")), Year=new DateTime(2020, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Juan")), Vehicle = Vehicles.First(v => v.Plate.Equals("AHG152")), Year=new DateTime(2020, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Breyner")), Vehicle = Vehicles.First(v => v.Plate.Equals("KPD154")), Year=new DateTime(2018, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Breyner")), Vehicle = Vehicles.First(v => v.Plate.Equals("KPD154")), Year=new DateTime(2019, 1, 1)},
new SOAT() {Owner = Users.First(u => u.Name.Contains("Viviana")), Vehicle = Vehicles.First(v => v.Plate.Equals("KJ4548")), Year=new DateTime(2020, 1, 1)},
};
foreach (var s in Soats){
context.SOATs.Add(s);
}
context.SaveChanges();
}
}
}
|
STACK_EDU
|
next install roadblock
Sat, 04 Aug 2001 01:57:53 -0400
Whoo-hoo! It actually worked... gnucash up and ran!
Now the adjustment period... meanwhile, thanks everyone.
"Michael T. Garrison Stuber" wrote:
> > Thanks again for the help; just the simlink seems to work. The *next*
> > inscrutable (for me) roadblock to g-wrap's configuring:
> > /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I..
> > -O2 -Werror -Wall -g
> > -I../libruntime-guile -g -O2 -I/usr/local/include -c gw-test-parent.c
> > rm -f .libs/gw-test-parent.lo
> > gcc -DHAVE_CONFIG_H -I. -I. -I.. -O2 -Werror -Wall -g
> > -I../libruntime-guile -g -O2 -I/usr/local/include -c
> > -fPIC -DPIC gw-test-parent.c -o .libs/gw-test-parent.lo
> > cc1: warnings being treated as errors
> > gw-test-parent.c: In function `gw_init_module_gw_test_parent':
> > gw-test-parent.c:167: warning: implicit declaration of function `strcmp'
> > make: *** [gw-test-parent.lo] Error 1
> > make: Leaving directory `/tmp/g-wrap-1.1.9/test'
> > make: *** [all-recursive] Error 1
> > make: Leaving directory `/tmp/g-wrap-1.1.9'
> > make: *** [all-recursive-am] Error 2
> > ----------------------------------------------------
> This one is actually pretty easy. The compiler has been set to treat
> warnings as errors. That's what that -Werror flag is doing. The warning
> that it's complaining about is likely the result of (a) a sloppy
> programmer, (b) change in header files, or (c) failure to include the
> correct -I directories. Basically, the source code is using a call to a
> library, but it hasn't declared what the call should look like. Usually
> this is taken care of by including the the appropriate header file to keep
> the compiler happy. If the programmer forgot, this can happen.
> Alternatively, if your header files are different from what the g-wrap was
> built against, this can happen because the declaration for strcmp is in a
> different file, which the source code doesn't include. There is the
> possibility that your header files are in the normal place, but I don't
> think anything else would compile for you if that we're the case.
> There are three options here: (Okay, I'm sure there are more, but there
> are three I'll mention)
> (a) figure out which header file you need (grep for strcmp in /usr/include)
> and add an #include<> to the source file. Usually this is in string.h.
> On my system I would add:
> #include <string.h>
> (b) add an extern declaration -- You really need to be able to program to
> do this correctly.
> (c) turn off the -Werror switch
> Personally, I'd go with option (c). It's probably going to be the easiest.
> You'll need to grep throught the configuration files and the make files to
> figure out where this is being set.
> > Again, sorry to tire all of you with a layperson's attempt--but it may
> > give you programmers some insight into what we ignorati are up against.
> > Some 6-8 tricky package installations down the road, I'm at least more
> > than half-way toward actually running gnucash, right?!
> I do program, but I didn't bother building everything from source. Way,
> way too much hassel. I just grabbed the packages for everything except
> GNUCash itself. http://www.rpmfind.net is my friend. It took a little
> doing, but it was simpler.
|
OPCFW_CODE
|
Report Modules - Frequently asked questions
How to convert the long value to date format in ServiceDesk Plus?
Use "from_unixtime([columnname]/1000)" in the mysql prompt to get the date in the correct format.
For getting the long value from date, see the following example:
mysql> select unix_timestamp('2004-1-31 20:10:10') * 1000;
unix_timestamp will give you the time in seconds since Jan 1 1970. Multiply by 1000 to get the time in milliseconds.
DATE_FORMAT(FROM_UNIXTIME(wo.CREATEDTIME/1000) ,'%d-%m-%Y %k:%i')'Created Time',
Compare date in MYSQL:
where wo.createdtime >= (UNIX_TIMESTAMP(DATE('2009-01-01 00:00:00')) * 1000) and wo.createdtime <= (UNIX_TIMESTAMP(DATE('2009-12-31 23:59:59')) * 1000)
Also you can use,
where (from_unixtime(wo.createdtime/1000) between '2011-09-20 00:00:00' and '2011-09-20 23:59:59')
As an example, see the query below,
Display the date column in SQL Server
select dateadd(s,datediff(s,GETUTCDATE() ,getdate()) + (CREATEDTIME/1000),'1970-01-01 00:00:00') 'Created Date', WORKORDERID 'Request ID' from WorkOrder
select DATEADD(s,wo.CREATEDTIME/1000,'01-01-1970')'Created time' from workorder wo
Compare date column in SQL server:
select dateadd(s,datediff(s,GETUTCDATE() ,getdate()) + (CREATEDTIME/1000),'1970-01-01 00:00:00') 'Created Time' from workorder
where dateadd(s,datediff(s,GETUTCDATE() ,getdate()) + (CREATEDTIME/1000),'1970-01-01 00:00:00') >= convert(varchar,'2011-01-01 00:00',21)
and dateadd(s,datediff(s,GETUTCDATE() ,getdate()) + (CREATEDTIME/1000),'1970-01-01 00:00:00') <= convert(varchar,'2011-12-31 23:59',21)
Where can I find the Frequently Asked Query Reports in the application?
Go to the Reports tab -> Frequently Asked Reports. [ Screenshot ] Or, go to Reports tab -> New Query Report -> Frequently Asked Reports.
We have posted all the query reports requested by our customers in our forums. You could find the links to these from within the product.
Click Reports -> Query Reports -> Under Search Query reports drop down you will find the frequently asked query reports categorized by modules.
[ Screenshot ]
Is the database schema of ServiceDesk Plus available?
The database schema is accessible from within the ServiceDesk Plus application.
Go to Reports -> New Query report -> Select the module you want to report on, from the Table Schema drop down list and press the Get button which will give you the Schema. [ Screenshot ]
One of our technicians has left the company and the reports schedule by him is still being triggered. How do I go about deleting this report? Also, I would want to check the other reports scheduled by the technician.
When a technician is deleted or his login removed from ServiceDesk Plus, then the ownership of his ‘Scheduled Reports’ would be moved to the technician who performed the delete technician/remove login operation.
Was your question answered here?
Check the other FAQ modules or you can contact our support team at email@example.com
|
OPCFW_CODE
|
There are two school of thoughts out there: First, there are lots of people, who say that open-source is not fostering innovation or even killing it, because it is just about the reimplementation of already solved problems and that it actually kills innovation, because it keeps people from implementing solutions that can not be protected from being reimplemented as an open-source solution. Second there are at least as many people, who say that open-source is actually creating/fostering innovation, because a company can not hide behind a bad implementation and will always be forced to look for ways to make its solution better, faster and more affordable.
Let's first contemplate on the nature of innovation. As previously discussed there is a difference between innovation and creativity: You can ask people to be creative, but you can't really ask them to be innovative, because innovation is something that get's established after the fact, means it get's established by the traction you get with your ideas in a give business context and the impact they have on the business and the market.
I personally believe that you cannot orchestrate or manage or create innovation. It just happens. It happens in a more darwinistic way: Through diversity/competition/mutation and selection. What you can do is to create an environment, which cherishes diversity/creativity and competition. This will increase the likelyhood that some of your ideas turn into real innovation. In that context you probably want to set up a number of small units to shoot at the same target and want to make them compete for the best implementation. This is why open-source is innovative and/or creates innovation. It is not the a/the project that is innovative. It is the competition between the projects that creates the innovation. One good discussion on this can be found in "Innovation Happens Elsewhere, by Ron Goldman and Richard P. Gabriel".
One other way to put more structure into the discussion is to segment innovation into ...
- Business Model Innovation - a new creative way to make money with something (might be closed-source or open-source)
- Solution Innovation - a new creative way to solve a given (old or new) problem
- Implementation Innovation - a new creative way to implement a given solution
Question is, if open-source and/or open-source concepts are also suitable to drive solution- and/or business-model-innovation? This leads to the next bigger question: In the past the value of a company was determined by the solutions they own. This is why pharmaceutical companies try to protect their products with patents and other means. I am not sure that will stay this way going forward. Instead we might see a world in which your solutions matter less, but your ability to detect problems and develop solutions will determine the value of your company to a much larger extend.
In such a world, open-source concepts might actually be very suitable to foster innovation on the solution level.
|
OPCFW_CODE
|
Does Hawking radiation or something similar occur inside the event horizon?
The question Why is a black hole black? states that stuff can't escape the event horizon and must ceaselessly pull inwards towards the singularity. At the singularity the forces become infinite. However, I heard that Hawking radiation (but not information) can escape the event horizon. Can something similar to Hawking radiation escape the inside of the event horizon and the singularity? Would a person inside the event horizon see Hawking radiation emitted from the singularity?
It is hard for me to understand this because I think as soon as one photon is emitted one quantum step out away it starts to fall back to the singularity at the center of the black hole again.
In the Schwarzschild metric the singularity is a future region for a geodesic that has crossed the horizon. Nothing comes out of it as a result.
Hawking radiation escapes the horizon through quantum tunneling. Hawking calculated the radiation escaping through a quantum field theory in a general relativity classical background. It can also be understood as virtual particles escaping from outside the horizon and falling in with negative nergy, equivalent to positive energy particles escaping. These quantum effect causes the black holes to loose mass and energy in the form of different kinds of radiation escaping.
Note that it does not violate the theorems of black holes because those do not apply to quantum effects.
The calculations he and many other have done were done in the gravitational field near the horizon, which can be represented as a classical general relativity metric, with the matter being a quantum field. Closer to the singularity it does not apply - as you get closer to the sigularity the classical general relativity metric becomes less and less valid, as quantum gravity effects enter in and we do not know how to represent or calculate those. General Relativity becomes invalid as you approach the singularity. It has nothing to say about what happens there or nearby.
The above is now commonly accepted in black hole physics (though possibly I didn't state it all perfectly. The next paragraph is a little surmising on my part, at least on the conclusion, and may or may not be commonly accepted. Either way, what happens at the singularity, or anywhere near where quantum gravity could enter in, is just unknown at this time.
Now, if the Hawking black body radiation theorem is right, it says the black hole radiates at a temperature determined by his effect, and nothing else enters in. Since whatever happens at the singularity, if it is causal (a real singularity would not be, but then there would be no physics for it), it could only affect the energy coming out of the horizon through interactions with the classical gravitational field near it, it seems that anything else that happens inside the horizon might be accounted through the Hawking blackbody radiation already calculated.
EDIT after comment below: the explanation in the paragraph before the last that the 'above is commonly accepted in black hole physics' is just that. It's the theory of black holes, some of which has been observed. But Hawking radiation has not yet been observed, it may be difficult to observe it for macroscopic black holes but it is not impossible that astrophysical observations of the density of black holes and their masses over the time of the universe's evolution could confirm that they'd have to evaporate at the Hawking rates. Again, nothing measured at th time.
"The above is now commonly accepted in black hole physics" -- yet none of it has been confirmed through experimental observations, right?
Correct. And so nobody can ask any questions about black hole radiation? Or questions are ok but no answers allowed? Still, I edited and said in the context of the theory.
Sorry, it was not meant to be a criticism. I just wanted to know if I'm still allowed not to believe any of it. ;-)
Sorry myself. There is a tendency to not accept any statement not experimentally proved on this site. Good question: there is some hope they can see some relatively light black holes from the big band or way in the past tHat might be at the end of their life now, and see some X rays etc. but nothing on that yet. Also micro black holes at the LHC but I'M not sure how likely, and nothing of course. If there were any measurements it'd be big news and a Nobel prize for Hawking. My thought is that black holes can give us some ideas of how to get to quantum gravity, nothing there either. AdS/CFT?
Thanks for the update. Problem with a field when it proceeds for so long without experimental confirmation is that one does not know whether any of it has any correlation with what actually happens in nature.
Yes, well, it took gravitational waves 101 years and the Higgs maybe 40. But both were confirmed. For the Higgs it served as impetus to build the LHC. It s always a risk
And the ether was 40 or 50 years till that well accepted theory got disproven. And we still have the major problems of the issues in the standard model, quantum gravity, dark matter and dark energy. BTW, it s been maybe 20 or 30 years since we had had the standard model and general relativity more or less established and verified, and basically no new physics. Is it not good enough theories or not good enough experiments?
True, well at least they found that neutrinos have mass, which is new physics beyond the standard model.
True. And hopefully something new will still come out of LHC
|
STACK_EXCHANGE
|
To make it launch the game, not just exit silently, make sure
your cd drive is configured in winecfg properly - the content of the CD
has to be accessible through some drive letter and the drive type
has to be CD-ROM.
To start the game from a terminal:
$ cd ~/.wine/drive_c/Program\ Files/EA\ Games/Battlefield\ 1942
$ wine BF1942.EXE
The main menu resolution being different from the in-game resolution is normal: the game programmers hard coded the menu screen to a fixed resolution.
If you want a faster startup time, you can delete the .bik files in the Movie folder and it will load directly to the menu.
Patching to 1.6 is done by installing two patches:
There is some confusion over whether or not online play works with Punkbuster.
Punkbuster works for up to 30 minutes, thereafter it attempts to update itself, when the update occurs it fails to run correctly causing you to be kicked from a punkbuster enabled server.
Servers that do not require punkbuster work 100% - there are several out there.
This note is true of wine version 0.9.58.
Punkbuster enabled servers appear to work ok with Wine 1.1.10 or greater.
Atleast with 1.1.13 and 1.1.14, online play has stopped working
Unfortunately due to the design of Punkbuster it will never work in Wine without the cooperation of Evenbalance, it is designed to compare the internals of Windows while the game runs to make sure nothing is modified. As Wine is not Windows, it will always detect a problem.
The following comments are owned by whoever posted them. WineHQ is not responsible for what they say.
by Edward R. on Wednesday June 20th 2012, 6:18
Guys, lightmaps work! You just have them enabled BEFORE you play the game. You need to enable them in menu first. If you do this in-game, they WON'T WORK!
Tracking Wine Exceptions in Dumps
by Roger on Monday February 13th 2012, 23:06
I was getting "wine: Unhandled page fault on read access to 0xffffffff at address 0x4ff39164 (thread 0020), starting debugger..." exception on initial run of Battlefield 1942 at the point where it was almost getting into the GUI Menu system startup.
Luckly using the Battlefield1942.tie install file, cxrun would dump debugging output on exceptions and other breaks.
In this case, you'll notice the break is at address 0x4ff39164 in the above debug data.
Further into the debug output is a list of library names with their execution address ranges in memory.
Module Address Debug info Name (92 modules)
PE 340000- 394000 Deferred msvcr70
PE 400000- a96000 Deferred bf1942
PE 30000000-3006d000 Deferred binkw32
ELF 4f129000-4f132000 Deferred librt.so.1
ELF 4fee2000-4ffb5000 Export libasound.so.
... surprisingly, you'll see the exception address falls within libasound.so! This means my sound isn't working or configured properly. Going into cxsetup > winecfg or windows configure and toggle to the Sound menu. You'll have some options to play with choosing Alsa or OSS sound systems. To quickly work around this bug, I just disabled/unchecked both the sound modules and found myself quickly working around this exception and into the initial game GUI menus.
(This sound bug will likely resolve into a kernel level problem with my sound card as it's been acting up for the past week or so with kexec reboots instead of hard boots, as well as a recent kernel upgrade.)
Text: Invalid videomode specified
by Roger on Monday February 13th 2012, 22:57
On initial start, you see an "Text: Invalid videomode specified" error before or after the initial splash screen.
You're probably running at 16 bit color.
Battlefield 1942 starts at a default 800x600@32, or 32 bit color and requires this for the first run under Wine. After you get into the GUI menus, you can edit the video resolution to your liking such as using 16 bit color. Make sure your /etc/X11/xorg.conf file matches the bit depth.
(On a side not, I'm not sure if it's required to have the 800x600 resolution or other resolutions also listed within the xorg.conf)
I have very poor performance in BF1942 with Wine 1.2RC2. On Windows Vista I have 100 FPS (this is upper fps limit in BF1942), but in Wine I have only 20 FPS. I think that bootlenecks are draw calls. My platform is:
AMD Phenom X3 8650
2x Radeon HD4850 (with Catalyst 10.5)
Other apps in Wine works very nice, only in BF1942 (this is DX8 game) I see poor performance.
Main Menu scaled wrong
by Daniel on Saturday October 17th 2009, 17:07
I'm using the latest Wine 126.96.36.199, Ubuntu 9.04, latest (177?) NVidia hardware drivers, BF1942 patched to 1.61b with and without the DC_Final mod. I installed this configuration on one good widescreen laptop and one average PC with 4:3 monitor, the laptop using x64, the PC using x86. If it weren't for lack in disk space, I'd install the same setup on my third engine as well for testing purposes.
Installing and starting the game works well.
Problem: The main menu has a downward offset in both cases, displaying or not displaying the upper panel on either of them. Moving the panel to the bottom or fading it away did not change anything. The bottom part (start/join game, see version) is accessible, but not visible from full screen. It miraculously works now (most times) on the laptop, but the PC really fucks it up.
- When the main menu is loaded, the desktop size is scaled to an almost 1:1 ratio. The menu itself is stretched downwards and the panels are attached as well. I actually zoomed out (Super+E) to view both desktops (entire menu gets visible when zoomed out) and took measures and they're close to being squares, I have no idea why.
- Changing the ingame resolution or emulating a virtual desktop with any resolution does not help. Disabling the game's control over the window results in a twice as harsh stretch, so only the upper part is visible.
- Forcing a centered gpu scaling causes in the screen to blacken out entirely, no zooming out or switching desktops. Alt+F4 still quits the application and restores the desktop's default resolution.
I have no clue as to why the desktop is scaled at all, I haven't found any reference to this square-shaped resolution, the NVidia graphic driver's menu shows a default 4:3 resolution where it is definitely not and I've been literally spending days to find any solution or other users' experience with this, but trying to find anything valuable to me related to this topic is a pain in the arse. Linux 0:1 Windows.
The game itself works fine, as soon as the loading screen is displayed, the preset resolution is applied and stretched to full screen. The game runs with a good speed depending on the system. Sounds don't play correctly and to their full extent, but that has been mentioned before. Multiplayer games on LAN work fine. The application is not being terminated due to timeout like in Vista. Linux 1:1 Windows.
Please adress me if you have any questions concerning installed applications or my system settings to find a solution to this. Since the main menu doesn't constantly display properly, leaving me guessing and hoping I hit the right button down there, I also recommend rating the application compatibility to silver. Even if this issue gets solved, I'd not vote for gold as long as the solution is not implemented in Wine itself and easy to apply. I'm new to Linux and although I really think it's cool, I don't know how to use anything that's not a graphical interface and with problems and dissatisfying issues appearing every day, it's just hard to "spread the word" and convince people to try it. It needs to become much much simpler indeed. Sorry, gurus and hackers. I'm a n00b.
Widescreen clips bottom of screen
by togaclad on Saturday December 6th 2008, 19:00
The bottom gets clipped and I can't see things like ammo limits.
I'm running with:
Kubuntu Intrepid (8.10)
nvidia driver 177
I have twin 7600s but have not enabled the sli currently. Waiting till I get this sorted out.
I have a 206BW widescreen set at 1680x1050
Install went fine. I patched to 1.6.19 and 1.61b.
I've tried altering nvidia server settings with and without force full gpu scalling (stretched,centered and aspect ratio scaled). As well as changing winecfg to all combinations of “Emulate a virtual desktop” & “Allow win manager to ctrl the win”
I updated the following files with game.setGameDisplayMode 1680 1050 32 60
C:\Program Files\EA GAMES\Battlefield 1942\Mods\bf1942\Settings\Profiles\Default\VideoCustom.con
C:\Program Files\EA GAMES\Battlefield 1942\Mods\bf1942\Settings\Profiles\Default\Video.con
C:\Program Files\EA GAMES\Battlefield 1942\Mods\bf1942\Settings\Profiles\Custom\VideoCustom.con
C:\Program Files\EA GAMES\Battlefield 1942\Mods\bf1942\Settings\Profiles\Custom\Video.con
as well I changed renderer.fieldOfView 1.1 to handle the 16x10 ratio in file C:\Program Files\EA GAMES\Battlefield 1942\Mods\bf1942\Settings\VideoDefault.con
My best results have beeen with Force full gpu scalling(stretched) and Allow win mgr to cntr but no Emulate vir desktop. But with this the bottom strip of my screen gets clipped. Any ideas?
Application works excellently for ‘normal’ use; a game works fine in single-player but not in multi-player, Windows Media Player works fine as a plug-in and stand-alone player, but cannot handle DRM etc.
|
OPCFW_CODE
|
Linux System Security: The Administrator’s Guide to Open Source Security Tools, 2/e
Authors: Scott Mann, Mitchell Krell and Ellen Mitchell
Publisher: Prentice Hall PTR
Choosing “Linux System Security” for a title of your book is surely a pretentious step. But usually, when someone picks this kind of name for a planned publication, he or she is sure to deliver the quality readers expect. I’m satisfied to say that, in this case, the authors do provide the level of information suitable for the book’s title.
About the authors
Scott Mann is a Linux software engineer at LeftHand Networks in Colorado. He has previously specialized in Linux and UNIX systems for both SGI and Sun Microsystems. His previous Prentice Hall PTR books include Linux TCP/IP Network Administration.
An interview with Scott Mann is available here.
Mitchell Krell, Ph.D., is a former university professor turned consultant. He currently travels around the country teaching classes and consulting for various government agencies on a variety of topics including Linux, IRIX, system administration, networking, web development, and computer security.
Ellen Mitchell is a security analyst at Texas A&M University, where she is responsible for campus network security, development, and administration. She currently maintains the Tiger UNIX security package.
Inside the book
The main difference between this second edition and the initial first publication, is that it provides an update for all those things that changed in the time frame between the two releases. As the Red Hat Linux is authors’ preferred distribution, everything was made sure it works on Red Hat’s 7.2/7.3 Linux releases. Several new chapters were added, for instance the iptables one and another one dealing with network scanners and sniffers. Several chapters from the first edition, such as overview of OPIE tool, TCP Wrappers and cryptographic filesystems were moved to the Appendixes part of the book. As the authors note, this was done because the information is still valuable, but not used as much as before.
As expected, the authors start the book with several light reading chapters on information security basics. The vulnerabilities are categorized into three separate categories – Technical (trojan horses, back doors, buffer overflows, password cracking, spoofing, session hijacking, etc.), Social (ever popular manipulating and impersonating schemes as well as all-around-us shoulder surfing tactics) and Physical (system access and various networking tampering issues). Don’t expect to find questions for all your answers in this section, as the basic level of security knowledge is needed in order to enjoy this book to the maximum. If you are trying to find any specific Linux Security related answers, you’ll probably find it in this massive 800 pages long guide.
“This ‘n That” is the name of the chapter that covers several topics, including dissection of all those services that are actively running on the system with a default installation. In a neat table structured point of view, around 30 services such as chargen, discard, smtp, shell, uucp, auth and others, are presented with an appropriate port number, description and authors’ recommendations. Basically, as it should be, authors suggest to shut down all un-necessary services. An introduction on TCP/IP mode layers and cryptography is also presented, giving the reader a by-the-way overview of basics regarding these important topics.
User administration is big issue, as by doing this inadequately, attacker can be granted easy access to the system. Some of the tips included in this chapter of the book include: usage of /etc/login.defs as an alternative way for password aging, creation of restricted guest accounts, minimization the impact of root compromise situation, configuring /etc/securetty and playing around with file and directory permissions. The use of the pluggable authentication modules for Linux (PAM) is described, with a focus on its logging and session management flexibility. As the authors describe usage of several available PAM modules and applications, this topic is covered over 50 pages. BY skipping a few chapters of the book, you’ll see a guide on using popular password auditing tool Crack, which will help you test the passwords of your system’s users.
As system accounting was developed with keeping track of user resource management on mind, the seventh chapter shows us its usage for the security purposes. Some of the commands included in this connection accounting overview include dump-utmp (converting utmp and wtmp logs into ASCII parsable format), last (showing the information on users logging in) and who (printing the currently logged-in users). Process accounting overview offers information on sa (produces process usage per user or command) and lastcomm (produces output on per-command basis). These commands can be of a great use, but you shouldn’t totally trust them, as after the successful compromise, they can be modified to show false output. As for the host integrity, tools like Tripwire should be quite useful.
One of the great Linux features is its excessive logging. Every system administrator should, either manually or automatically, check those logs for any possible problems. Grepping error logs can help you realize the possible attack attempts targeting your system, or it can alert you that your hard-drive is progressively fading away with blocks of errors (evil grin at several series of IBM drives that give me bad memories). The authors started its system logging coverage with introduction on syslog, its facilities and levels, configuration and finally usage. Also, for the advanced usage, example of configuring /etc/syslog.conf is mentioned along with a tip on synchronizing system clocks and examples of output logs related to these procedures. By skipping to the ending chapters of the book, addition as regarding to system logging is presented in the way of log file management overview. Whether you’ll do it manually, or with swatch, logcheck or any other tool, you will find this chapter an interesting addendum.
As from Red Hat Linux 7.0, this Linux developer uses xinetd rather then inetd for the role of the default super daemon for networking. The improvements in this extended daemon are that it incorporates many useful portmapper and TCP Wrappers capabilities. Some of the advantages noted by the authors are its access limitations based on time, access control for TCP/UDP/RPC services, numerous Denial of Service prevention mechanisms and additional /etc/hosts.allow and /etc/hosts.deny checks. The chapter on xinetd offers a lot of information on xinetd as to be used to protect the network services on the Linux based system.
In the time when telnet was used for remote access, the title of the eleventh chapter, “Let ‘Em Sniff the Net!” wouldn’t be as suitable as it is now. Secure Shell (SSH), utility written by Tatu Ylonen, provided all the needed capabilities of a client-server environment with an encrypted tunnel. As you probably know, there are two versions of SSH – version 1 and 2. Conceptually similar to version 1, SSH 2 offers the difference where no server key is being generated, but key agreement is performed using Diffie-Hellman. Afterwards session key is generated and exchanged between the two parties. This isn’t the end of the process, as the session is then encrypted using the session key together with an available algorithm. Also, from the security perspective, the difference is that SSH 2 uses SHA-1 or MD5, rather then the insecure CRC which is used within SSH 1. This section of the book provides SSH configuration and usage information, as well as it briefly mentions Secure Shell alternatives.
Following the previously mentioned chapter dealing with “Crack”, authors present several extended chapters on some of the “toys” used for playing with the system’s security. Bastille Linux being the one, can help you learn more about the state of your system security. To summarize the things, this tool: helps increasing logging facilities, managing permissions and services, provides file hardening and also includes a firewall (implemented via ipchains/iptables) and a port scan detector. As Bastille is a collection of scripts written by its authors, which are by-the-way noted names in the Information Security community, it can be run manually or automatically. As usual, authors provide in-depth information regarding Bastille’s configuration and usage. In the same manner, Tripwire and both ipchains and iptables are covered providing information on both file integrity and firewalling under Linux.
One of the new additions to the second edition of “Linux System Security – The Administrator’s Guide to Open Source Security Tools” is, already talked about, chapter on scanners, sniffers and detectors. These tools aren’t just dark-side related, as they proved to be valuable to system administrators by both making sure the state of their system security is satisfying and copycating the potential attacker with a system compromise on the mind. Scanners covered include SARA, Nmap, Nessus and NetSaint. “Honorable Mentions” are Internet Security Scanner, VLAD and SAINT. As for the members of the sniffing tool sector, authors present TCPDUMP, Ethereal and Ettercap. Neped and PortSentry are the only detection tools mentioned.
The appendixes that follow the book include information links that should make the administrators up to date. Resources here covered are divided into Web pages, full disclosure resources, mailing lists and USENET groups. Appendix B is taking care some of the notable tools that aren’t covered in more details. OPIE, TCP Wrappers and The Cryptographic and Transparent Cryptographic FileSystems chapters from the first edition are also placed within the Appendixes, because of the reasons I mentioned on the beginning of this review. As usual, the appendixes are closed with a glossary of the phrases mentioned in the book.
As a perfect ending point, these are some of the suggestions authors spread through more then 800 pages of the book:
- Use well planned and well implemented security policy
- Harden your Linux system
- Secure Filesystems, important files and important directories
- Restrict the root access and watch after it carefully
- Make sure that user and group accounts are secure
- Configure log checking and parsing utilities
- Configure network services and use SSH for remote access
- Configure ipchains or iptables (depending on the kernel)
- Run the tools mentioned throughout the book
- Stay in touch with current information security topics
What do I think of it
The authors really did put some energy into this book, which can be seen at every step of this information packed publication. After every thematic section, the reader is presented with additional examples and tips that often include interesting and very useful facts. From my perspective the book is a must for any Linux user interested in progressing the state of his/her security knowledge. Administrators should find it a useful read, as it provides in-depth coverage of Linux system security topics, which is a direct result derived from the experience the book’s authors have. The only thing I would suggest to the authors and the publisher is to include a CD-ROM containing security tools mentioned throughout the book and some useful scripts or personalized configuration files.
|
OPCFW_CODE
|
The conferences organized by the European Ornithologist Union (EOU) have always been a personal favourite of mine. My involvement with these conferences began back in 2015 at the EOU conference in Badajoz, where I presented the findings from the second chapter of my thesis. Over the years, my research focus has shifted a bit, and I’ve changed my study species twice, nevertheless, my enthusiasm for EOU conferences remains strong.
This year the conference was held in Lund, Sweden, a charming city in the southern part of the country. At the conference, I had the opportunity to share the results of my recent postdoctoral research conducted at CEFE (Montpellier, France) and the University of the Basque Country (Leioa, Spain) which aimed at exploring the evolutionary dynamics of bird colouration.
Specifically, I presented the final objective of my postdoctoral project, focused on examining the relationship between climate, biotic interactions, habitat, and plumage colour complexity within the Paridae family. My presentation was part of a symposium entitled “More than warmer temperatures: How precipitation influences avian responses to environmental change” kindly organized by J. Burant, M. Burgess, and N. Freeman. Overall, the symposium attracted a large audience, was engaging and featured high-quality presentations.
Figure 1 Me during the talk © Mercè Palacios.
In the talk I shared the results of a collaborative project with C. Doutrelant (CEFE-CNRS) and P. B. Pearman (UPV/EHU) where we found that Paridae species in regions with more seasonal climates and higher precipitation tend to have more complex plumage colours. However, as with many scientific studies, the results are complex, so if this topic interests you, I hope to publish our findings soon(ish).
Regarding this project, I’d like to express my gratitude to the Synthesis+ program for funding the research I presented at the symposium, including a month-long visit to the Natural History Museum in Tring (UK), where I collected data on Paridae species’ colouration.
Figure 2 Workplace at NHM in Tring while I was measuring the colours of a bunch of Sultan tits (Melanochlora sultanea), detail on the right © David López-Idiáquez.
Aside from the symposium, the conference offered a rich array of talks and posters covering diverse avian-related topics, including climate change, migration, physiology, microbiomes, and behaviour. It served as an excellent opportunity to stay informed about the latest developments in various scientific fields.
What is a conference but the people?
It is impossible to overstate the significance of networking with colleagues, reuniting with old friends, and making new ones at conferences. In this context, the EOU meeting has given me the opportunity to reconnect with researchers I first encountered during my PhD in Spain and my postdoc in France. It was a pleasure to catch up with them, gaining insights into their recent endeavours and discussing about potential collaborative projects with them.
Finally, perhaps the pinnacle of the conference was the trip to Falsterbo, a birdwatching hotspot in Europe. Spending the day watching birds migrate at this amazing spot was truly awesome
Figure 3 Öresund bridge from Falsterbo. © David López-Idiáquez.
In summary, my participation in the EOU conference in Lund has been a really enriching experience. It has provided me with a platform to present my research, gain insights into the work of others, and connect with friends who share my interests.
I’d like to thank the BOU for awarding me a member travel grant to attend this conference and am eager to attend the next conference that will take place in Bangor in 2025!
|
OPCFW_CODE
|
Code Review 1
For our first code review session we are going to do group code review followed by class-wide code review.
First I will split everybody up into groups, probably by proximity. Each group will have an assigned section of the raytrace, e.g.:
- Geometric object
Within each group, share the relevant code amongst each-other. I think the easiest way to do this is to create a gist with your relevant snippet of code, then dump all the links into one shared google doc.
However you exchange the code snippets is fine with me! If you have a better solution let me know and I might recommend it for future code review sessions.
Within your group, review each other’s code. As a group you decide which person’s code you want to present to the class. That person is then responsible for presenting the code.
As a code presenter you are responsible for:
- Describe the key ideas behind the major classes or functions
- If you considered alternate designs, describe them and how you made your choice
- Especially if you refactored or changed designs, describe the original state and what caused a need for change
In addition I will ask the other members of the group why they chose this particular representative.
In general for code reviews, it is best if the code speaks for itself. As an audience member it is your responsibility to look at the code and ask questions/provide comments. If you don’t understand something in the presented code, it probably needs to be pointed out!
I will of course be offering my own critique from a software design and C++ perspective :)
I will invariably accuse nearly everyone of at least some of the following issues so be prepared!
Large Files: Files that are too large encourage interdependence and are generally hard to keep track of. Make separate files for your classes and functions! This will also speed up compile times.
Large Functions: If your functions are too long and involved your code is less modular and more difficult to debug, especially more difficult to unit test. I subscribe to Uncle Bob Martin’s belief that functions should be descriptively named and rather short. It’s true that function call overhead can be a problem. However, you will be more able to effectively optimize a clean and modular implementation. It will also be more easy to detect problem areas. It’s easier to de-modularize code when necessary. It’s not always easy to go the other way.
Short and non-descriptive variable names: There is of course a limit to how long and descriptive you want to name variables that you have to use over and over, but anything that is cleaner to read and review is better!
Inconsistent Code Style: I’m a strong believer in consistent code style, especially when it comes to whitespace and naming conventions.
|
OPCFW_CODE
|
As a high performance real-time credit card processing component, .netCHARGE handles client side encryption to enable transaction processing from your site.
This ensures that your site visitors have an intuitive experience and are not directed to the payment processors site to complete the transaction. This also provides greatly enhanced reliability as the component returns the success or failure of the transaction to your ecommerce implementation directly without having to depend on a post back communication from the payment processor.
Keeping your customers at your site also provides continuity and ensures that transactions are not abandoned due to questions over payment processing.
View the 50+ currently supported gateways / processors..netCHARGE provides a common interface eliminating processor / gateway lock in and native "direct connection" to various processors provides major cost savings.
- Seamless Integration: .netCHARGE can be easily integrated with any ASP.NET ecommerce solution or even a simple ASP.NET payment page. Now you can clear credit cards in real-time with a few simple lines of script.
- Common Interface: As a comprehensive payment software solution supporting all major gateways you can easily change companies without recoding your application. Developers can work with a familiar system regardless of which gateway / processor their clients demand.
- AVS Support: The Address Verification Service is enabled on those processors that support it. This uses the address or zip / postal address to help verify the transaction is legitimate.
- Card Verification Code: This code can help cut down on fraudulent transactions and is passed to processors that support it.
- Card Verification Code This code can help cut down on fraudulent transactions and is passed to processors that support it.
- Transaction Types Different processors support multiple transaction types including regular real-time charges (sale), just authorization, post authorization processing, void, refund, force and echeck.
- Test Mode Allows you to communicate with the systems test server if applicable.
- Timeout Allows you to set a timeout value if the processor does not respond within a set period of time.
- Return and Errors .netCHARGE returns simple codes on success, and failure and provides error messages from the processors if applicable.
- Prevalidation .netCHARGE checks the validity of the credit card number prior to initiating the transaction with the processor. This greatly speeds up the return if the user makes a typographical error when entering their credit card number.
- Native Processor Support You can now bypass gateways fees and work directly with a processor for your transactions with native Nova support.
- Database Logging In addition to the text based logging already available, .netCHARGE 2.0 features a new database transaction log which can optionally be enabled. This will log all transaction details to Access or SQL Server and can even automatically encrypt credit card details before storing them.
- Encryption / Decryption Methods New methods are provided for encrypting or decrypting any sensitive details. The decrypt method can be used in conjunction with the database logging to obtain the actual card number from the stored encrypted value.
- New Properties added Client IP, Description and more are automatically passed to gateways / processors that support them.
- New Add Parameter System For new or custom parameters you can now easily define the field and value and pass this information to the gateway / processor.
- Card Type from Number Methods added to determine credit card type from provided number.
- PABP Visa U.S.A. Cardholder Information Security Program (CISP) Payment Application Validation for Payment Application Best Practices.
- PA-DSS: v7 previously validated PA-DSS 1.2 PCI security standards council. The PA-DSS program no longer accepts components (the consuming application must be validated).
|
OPCFW_CODE
|
Accessing SQL Server Instance through NAT
I'm attempting to access a SQL Server which is exposed through an IP NAT mapping. All the ports are open. I don't know the details of the NAT, if it's relevant, since that's somewhere else in the company hidden in a pile of red tape.
Here's what I figured out. When you attempt to access a named instance of SQL Server, the client asks what port the named instance is running on. If I RDP into the SQL Server I can use netstat to find out the port of that instance and can successfully connect through the firewall. However, connecting via the instance name doesn't work. My guess is that the server is responding at some point with it's internal IP address and the client is using that.
Does anyone know if this is true and if there's a way around it?
Instance to port number translation is done by SQL Browser service which is listening on port 1434/udp. Check if your NAT (in DMZ mode) is publishing all udp ports too.
In my environment it helped to turn off Windows Firewall on SQL Server machine.
The instance listening port protocol discovery is subject to the SQL Server Browser Service. This uses UDP on 1434. With a NAT forwarding of UDP 1434 your client should be able to interact with the SQL Server Browser Service (if the SQL Server Browser's UDP response packet can reach back the client, a big if), but even a successful interaction will put your client in a tight spot: now that it knows the SQL Server dynamic listen port, how does it reach it? The NAT would have to dynamically forward the port picked by SQL Server, or it would have to forward all ports.
What I recommend is to have your SQL Server listen on a per-configured, statically assigned, port. See How to configure an instance of SQL Server to listen on a specific TCP port or dynamic port. Have your NAT forward that port. Then in your client use this port explicitly in the connection string. Do not use 1433, the standard port, since I assume that ahead of the NAT is the public internet and 1433 is subject to constant and frequent scans from all sort of bots and vile clusters.
+1, and 1433 is slowly fading away because it's used for a default instance (no instance name), and Microsoft is highly recommending never to use a default instance (for security reasons). Therefore, whenever you have a named instance, 1433 will never be relevant.
Configure the named instance to run on a static port using SQL Server Configuration Manager. In configuration manager, SQL Server Network Configuration -> Protocols for <named instance> -> TCP/IP -> Properties.
Then supply the hostname and port for the named instance in the connection string. The hostname and port number are specified in the following format (assuming hostname is Test and listen port is 1492):
... Server='Test,1492'; ...
|
STACK_EXCHANGE
|
Acceptable to lock (AppDomain.CurrentDomain)?
I want to enumerate all loaded assemblies in an Asp.NET application, using AppDomain.CurrentDomain.GetAssemblies(). However, when checking the documentation for AppDomain, I find the following statement:
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Since GetAssemblies() is an instance method, I take this as I have to make take some kind of lock around that call, if not for anything else so to prevent anyone else from loading a new assembly into the domain while I'm enumerating the current ones. I would expect AppDomain to provide some kind of SyncRoot property, but it doesn't, and I have not found any information on the web about how to do.
How am I supposed to synchronize this call?
Edit
I know that the lock statement is used to create a cooperative lock, this is exactly the reason that I want to lock the AppDomain in the same way everyone else does (or should do), rather than create my own lock that won't prevent code that's not mine from loading assemblies while I'm enumerating them.
I know that locks that can be taken by anyone are usually a bad idea, but I also know that not taking a lock when performing unsafe operations is even worse.
Both answers so far say that GetAssemblies() is, in fact, thread-safe. This makes sense to me, and I would really expect it to be the case, but how do you know it? Does anyone have a reference to support this claim? My google-fu has failed me and Reflector shows that this method is a thin wrapper around an internal native method.
NB: the lock keyword doesn't "lock an object" so that no other code can use it. Rather, it guarantees that any other code that tries to acquire a lock on that same object will have to wait until all other locks--current and preceding--are released. In other words, the "nobody use this but me" scenario is only realistic if the object is only ever used inside lock blocks. If any code uses it without acquiring a lock, those semantics fail.
I know exactly what the lock keyword does, and I also know that you should almost never lock on a public object.
However, I also know that race conditions are among the worst problems to debug, so I want to avoid them more than anything else.
SyncRoot properties in general are a very bad idea. One reason why is it's possible for 2 independently developed libraries to unknowingly decide to lock against a shared SyncRoot property and quickly introduce deadlocks into the application. Locking is an operation that cannot easily be reliably shared between two independent components. The best strategy here is to develop your own lock which is used by your components to synchronize access.
In this case though calling GetAssemblies is safe to do from multiple threads so no locking is needed. The warning you see is a general statement added to every class in the BCL unless the author specifically designed the type for thread safety and removed the message.
How can I see that it's safe? It's not documented as safe for the method, and the AppDomain class is documented as not being thread safe. The Assembly class, OTOH, is documented as being thread-safe.
Btw the SyncRoot is not a bad idea. The bad idea is to have state that is so shared that you never know who is using it. However, when that happens, which is the case for the AppDomain (for a good reason), it is better to have a SyncRoot than to not have it, since not having a well-defined object to lock on makes it impossible for you to prevent race conditions.
That is a standard disclaimer; you don't need to worry about it.
In general, as long as you (or another thread) do not modify the object, instance methods can be called from multiple threads.
Note that you should never lock on an AppDomain object.
Additionally, you should never lock on anything you don't own. If you didn't make the AppDomain object, you should be wary of locking on it.
Where can I possibly see that it is a standard disclaimer and doesn't particularly concern this method? In contrast, in the documentation for the Assembly class you can read "This type is thread-safe". My problem is that I can not know whether someone else modifies the object, since another simultaneous request might (directly or indirectly) invoke Assembly.Load, which no doubt will modify the list of loaded assemblies.
|
STACK_EXCHANGE
|
This blog provides information, news, tips, and announcements about the SQL Server Data Quality Services (DQS) feature introduced in SQL Server 2012.
As part of our DQS CTP3 offering, we are releasing a new DQS SSIS component. This component incorporates the DQS cleansing functionality into an SSIS data flow.
When should I use the SSIS DQS Cleansing Component?
The DQS Cleansing component can add value when:
1. Cleansing should be performed as a batch process.
2. The cleansing functionality is used as part of a larger data integration scenario.
3. The cleansing process has to be automated, or run periodically.
Sounds great. I want to cleanse my data with the DQS cleansing component. What do I do now?
DQS is a knowledge driven data quality product. It means that first you have to create a knowledge base (KB) that is relevant to the data sources which you would like to cleanse. You build this KB by acquiring knowledge from samples of your data (a process we call “knowledge discovery”), by configuring the KB to use external knowledge from Windows Azure Data Market, or by manually adding knowledge to the KB. The knowledge is stored in the context of data entities that we call data domains. Some examples for domains include City, State, Email Address, etc. The knowledge in the domains consists of good values, bad values, relations between values, validation rules, etc.
Creating and managing KBs are done through the DQS client. Click here for additional information on how to create a KB.
So I Have built a good KB, what now?
Once you feel that the KB contains sufficient relevant knowledge for your cleansing tasks, you can create an SSIS data flow that contains the DQS Cleansing component, configure it to work with the prepared KB, and run the package.
Sometimes, it can help to run an interactive cleansing project through the client before running a batch project through SSIS. In this way, you can conveniently evaluate the results through the DQS client UI and decide whether you are ready to perform batch cleansing, or whether you still need to do some work to enhance and improve your KB.
How do I configure and use the DQS Cleansing component?
In general, the DQS Cleansing component is a standard SSIS transformation component, so familiarity with SSIS is required. To read about this component specific configuration and usage, please refer to this post in the SSIS team blog.
Note that while the DQS Cleansing component is installed as part of SSIS, it is a part of the DQS product and requires the DQS Server installation to function.
Please note that the actual DQ work is done in the DQS server, so from DQS point of view, the component is served as a client. You can install SSIS and DQS server on the same machine, but you can also operate the DQS Cleansing component with a remote DQS server.
If your domains are attached to external reference data services (RDS), your records will be further sent to the cloud. A post about how to use RDS will be published soon.
The following diagram shows how all the components and entities that were mentioned above relate to each other:
Best Practice: using DQS Cleansing in conjunction with Conditional Split for optimized handling of results
The DQS Cleansing component takes input records, sends them to a DQS server, and gets them back corrected. The component can output not only the corrected data, but also additional columns that may be useful for you. For example - the status columns. There is one status column for each mapped field, and another one that aggregated the status for the whole record. This record status column can be very useful in some scenarios, especially when records are further processed in different ways depending on their status. Is such cases, it is recommended to use a Conditional Split component below the DQS Cleansing component, and configure it to split the records to groups based on the record status (or based on other columns such as specific field status).
By using this best practice, “good” records can be immediately sent for further processing downstream, while “bad” records can be isolated or redirected for appropriate handling (automatic or manual).
Watch this video in order to understand how this best practice works in reality.
This is for now - feel free to contact us with comments and feedback.
The DQS Team
I am looking for a bit of help.
After some tests and trials of the SSIS Cleansing component, I find that all of my attempts remain as "Active" activites for the KB.
I can find no way of changing the status.
This means that I can't update my KB at all?
When I open SSIS project (with Data Cleansing Component) in DQS client, it doesn't show all the columns available in SSIS. It only shows the Domain related columns. It is not possible to link the correctd data (through DQS) back to actual data.
When testing the DQS Cleansing Component in SSIS, I notice 2 puzzling things. Firstly invalid data gets a Status of "New" rather than "Invalid". This happens in the DQS Client as well, which is fair enough as you can then manually decide if it is genuinely valid or not. However in cleansing the data in a batch (eg SSIS) you can't intervene, so "Invalid" seems to me to be more accurate than "New". Is there a way to force it to display "Invalid" rather than "New"? I know that in my subsequent Conditional Split component I can search for "New" rather than "Invalid", but that brings me to my next issue.
If a record has multiple fields, and one gets a Status of "Correct" (or "Corrected") and another has "New" (ie Invalid), the overall Record Status comes back as "Correct" (or "Corrected"). This seems incorrect to me. I know I can search on each individual field Status, but if I'm cleansing multiple fields, I'd much prefer to look at just the Record Status. Is the Record Status correct in the case where one field returns a Status of "New"? And is there a way to make the Record Status reflect the "worst" of all the field statuses?
Hi I'm new in using DQS. When I tried to use it in SSIS, after creating new DQ connection manager, it displays "the data quality knowledge base is empty. Specify a valid coonection." DQS Client is on the same machine. Please help. Thanks.
Did anyone find the source of the error message "data quality knowledge base is empty. Specify a valid Knowledge Base?"
I want to create a SSIS package that takes the parameter at run time and then finds this value in DQS and if this value match to any record then send the output row in matched table. I have created the SSIS but when I send the value in DQS KB then it returns the unmatched output although the value exists in table. Please help me resolve this task.
I have one excel file with 5 columns like date, customer id, account no, location, customer name.
I have designed the package.
If we get bad records like wrong date or non-numeric customer id, etc then we put bad records into another excel with extra column reason for bad record
And we have to send email whenever bad record occur.
I done for every column like if there is bad date then mail will be sent.
If customer id is non numeric, again mail will be sent and so on.
But the problem is – If we get 2 bad records or more than 2 then email will be sent each time. I want to send only one email for one bad row whether it contains 2 bad columns or more with reason.
Kindly help me in this if u have any idea.
I am working POC and i coudn't find resources that could help the estimates other the BestPractisesGuide from Microsoft. So i am wondering if the DQS Cleaning in SSIS is ideal for Cleansing 2 mil records every month once. if so how long it might take. I have a 15 Domains with 2 Composite domains each having 3 rules in them.
|
OPCFW_CODE
|
Download windows phone sdk for visual studio 2013 free
This device is not currently supported for these products..Installing the Visual Studio SDK – Visual Studio | Microsoft Docs
Oct 07, · Hey guys, I’m asking this as I’m quite worried about Windows Phone development in the next version of Visual Studio. I’ve seen in many sites (incluiding Microsoft’s ones) that VS won’t have an “express” version for Windows Phone development, so it’ll be restricted to VS Pro (and I suppose, Ultimate). May 29, · The Windows Phone Update Emulators package adds additional emulator images to an existing installation of Visual Studio or Visual Studio Update 2 or later. With this update installed, you can create and test apps that will run on devices that have Windows Phone ing System: Windows 10, Windows Feb 27, · • Office Plan sample application—Shows how you can use the Visio Drawing Control to create an application that takes advantage of the drawing features of Visio in a Microsoft Windows Form. • Sample Visual Studio project – Sample project that can be used a template for VSL applications.
Download windows phone sdk for visual studio 2013 free.Download Windows Phone Update and Emulators from Official Microsoft Download Center
Oct 30, · The Windows Phone SDK is a full-featured development environment to use for building apps and games for Windows Phone and Windows Phone The Windows Phone SDK provides a stand-alone Visual Studio Express edition for Windows Phone or works as an add-in to Visual Studio Professional, Premium or Ultimate ing System: Windows 8, Windows 8 Pro. Distributable Code Files for Visual Studio Ultimate , Visual Studio Premium , and Visual Studio Professional editions This is the “REDIST list” that is referenced in the “Distributable Code” section of the Microsoft Software License Terms for certain editions of Visual Studio . The Windows Phone SDK is a full-featured development environment to use for building apps and games for Windows Phone and Windows Phone The Windows Phone SDK provides a stand-alone Visual Studio Express edition for Windows Phone or works as an add-in to Visual Studio Professional, Premium or Ultimate ing System: Windows 8, Windows 8 Pro.
Download Visio 2013 SDK from Official Microsoft Download Center
Install the Visual Studio SDK as part of a Visual Studio installation
Windows SDK archive – Windows app development
Install the Visual Studio SDK
Download Visio SDK from Official Microsoft Download Center
Winchester Hitachi Ultrastar 15K450: capacious and fast
Targeting consumers interested in fast, high-capacity hard drives, Hitachi launches the Ultrastar 15K450. The novelty uses the technology of perpendicular recording, which provided a large volume – 450 GB. At the moment, this is the largest volume for among drives with a spindle speed of 15,000 rpm. The main area of application of the Ultrastar 15K450 are systems designed for solving critical, demanding tasks, for example, systems for processing online transactions, databases with a large number of queries and multi-user applications.
Ultrastar 15K450 drive 30% better performance than predecessor. Search time is less than a few microseconds, and data transfer is provided by a high-speed interface (3 Gb / s in the version with the Serial Attached SCSI interface or 4 Gb / s in the version with the Fiber Channel interface).
Other technical features of the Ultrastar 15K450:
- Volume – 450 GB;
- Number of plates / heads – 4/8;
- Average search time – 3.6 ms;
- Spindle rotation speed – 15000 rpm, average delay – 2 ms;
- Buffer – 16 MB;
- Height – 25.8 mm;
- Weight – 750 g.
Hitachi promises to start shipping Ultrastar 15K450 hard drives this quarter.
Source: Hitachi Global Storage Technologies
|
OPCFW_CODE
|
Working with V.V.Big XAML file
Have you ever tried working with a XAML file which contains thousand tons of line of code. (More specifically Drawing Brushes for whole application in my case). This file is of 20+ MB size. Whenever i try to add/edit anything in this, Visual Studio 2008 crashes (7/10 times).
Then if i'm able to do my changes and try to save change then VS crashes (most of the times).
It is so difficult to work with such a big file in VS (same happens in Blend too but with few occurences)
I know having such a big file doesn't come under Good Coding standards. But what should we do if somehow we come in such scenario???
The only thing I finally found was KXAML : which proved quite good. Any other suggestions ??
UPDATED I dont want to change structure of that file or go for any other approach. I'm curious to know how to work / with such file. (other editors may be)
ADD seeing the answers - I would say i'm not stuck at this point i want to know if someone got stuck then is it better way to edit such thing ?? (separating this file would be the v.first thought to avoid this problem not solution)
ADD1 Alright !! My bad, i got too optimistic regarding this thought. Considering canvass, I would ask for the best practice to split this huge file.
have you tried splitting the file? Everything need not be defined in the same xaml. Some elements like Style can be moved to separate files.
Re your ADD: In contrast, NOT splitting the file at this point is fighting the symptoms as opposed to targeting the diagnosis. If suffering from an ailment, one would want the docs to and get rid of it, not only supply drugs to relieve the pain.
Got too much offtrack and offhand !!
I might suggest you take the pro-active approach and see if you can't segment it appropriately.
Define separate ResourceDictionary files to encapsulate reused resources and suchlike; similarly, do this for other sections of the file which can be stripped out and simply referenced.
This way you can have a file containing your Brushes, a file per Style and/or ControlTemplate, and so on and so forth.
EDIT:
As per your update stating quite definitively that you don't want to actually change the file, then I can only reply to the question of 'How to work with such a large file?' with these words:
With great difficulty, pangs of irritated senses and a lot of wasted time.
This would be the solution to reduce the size of that file. I want to know how to work with such a file (I dont wish to do anything with the structure of that file)
Then I think you're S.O.O.L. Compensating for someone's incompetence isn't good practice, either - just do things properly, your way, the right way.
I 've had the same problem in VS2010 with some huge xaml files of some old colleagues that hadn't really heard about coding standards. I can assure you that VS2010 crushes a lot too. What I ended up doing was to split the xaml files into several files and then adding them as resources or as merged dictionaries.
P.S. did you mean Kaxaml?
Have you thought about splitting the file into separate resource dictionaries files and merging them into a single dictionary?
Why it is so large?? All styles for Infragistics controls (brushes, controls styles etc) are less than 10 Mb.
You should to consider to use styles inheritance as frequently as possible.
Also you should split this file to few smaller files. It is torture to work with so large file. Try to have all files smaller than for example 1000 lines.
|
STACK_EXCHANGE
|
Assigning a niladic function
A non-niladic function F can be assigned to a variable G with
G ← F
However, if F is niladic, how do I prevent it from being evaluated?
You can't.
In a way, niladic functions behave like arrays, except their value isn't determined until they are used. This also means that they exhibit value semantics rather than reference semantics. Note also, that niladic functions cannot be operands of operators, but rather their result will become the operand.
A way to circumvent both of these issues, is to wrap the niladic function in a dfn so that it takes a dummy argument (or two), and thus:
G←{F}
Interesting. I can "curry away" one dummy argument with Bind with G←⍬∘{F}. At least G is not ambivalent now, only one dummy argument remains. If I try to curry away the last argument as well, as in G←(⍬∘{F})∘⍬ then I am successful in the sense that F is not called/evaluated immediately. However, I do not know how I would invoke this last function G.
@JeppeStigNielsen It isn't evaluated because functions are only applied when given arguments, and ∘ is just a normal dyadic operator, so ⍬∘{F}∘⍬ derives a function you can assign. However, this is no more meaningful than say ⍬∘⍬ which also doesn't have any valid applications. You can still "use" it though, e.g. in ⍬∘{F}∘⍬ / 42
APL evaluates expressions from right to left. As soon as all arguments of a function are present the function is evaluated and the function and its arguments are replaced by the result of the evaluation.
Now consider your example G←F. If F is monadic or duadic, then F cannot be evaluated because its right argument is missing. In parser terminology the token F is shifted rather than being reduced, The first expression that can be reduced is then G←F which assigns F to G.
On the other hand, if F is niladic, then F can be (and therefore will be) evaluated immediately (with, say, result Z), so that the assignmet will be G←Z and not G←F.
I'm not sure what you're trying to do so forgive me for answering the wrong question.
To me it sounds like you want an optional right argument.
In APL2 I would do something like this:
'' ⎕EA 'G←F ra'
If ra is undefined, the Value Error will be caught and nothing happens. If you want a default value for G, make that assignment to the left of EA.
Some APL dialects allow treating a function like a sort of value, using ← to assign such function values to a name, reminiscent of some proposals by Iverson. E.g. Sum←{+/⍵} (⍵ is the right argument) and or even Sum←+/. They may also allow using names of functions in further function definitions, e.g. ColSum←Sum⍉. However, this doesn't work for niladic functions, as they immediately return a value rather than be treated as a value. OP is asking how to do it for niladic functions.
|
STACK_EXCHANGE
|
Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.
This project’s deadline is 11:59pm (that is, just before midnight) wherever you are on Monday, October 9, 2017. This project was posted in the early afternoon, Manhattan time, on Thursday, October 5, 2017.
Tracks will be added to the playlist for the duration of the project.
These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):
Disquiet Junto Project 0301: Parts > Sum Artfully reduce an album to something less than itself.
Step 1: Download the album 5 Minute Meditations by Lee Rosevere. It’s available here from the record label Happy Puppy Records:
Step 2: The album has 11 tracks. You’ll be focused solely on tracks 1 through 10. Review the material to get a sense of the music.
Step 3: Create a five-minute track of your own using audio extracted from each of the first 10 tracks on the album 5 Minute Meditations. Important: The goal for your track is that while it will contain material from all of those 10 tracks, the end result (the sum) will be less than total of the constituent parts. In other words, your own 5 Minute Meditation will take Rosevere’s source ambient recordings and produce from them something even more ethereal, more ambient, more artfully threadbare.
Five More Important Steps When Your Track Is Done:
Step 1: If your hosting platform allows for tags, be sure to include the project tag “disquiet0301” (no spaces) in the name of your track. If you’re posting on SoundCloud in particular, this is essential to my locating the tracks and creating a playlist of them.
Step 2: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.
Step 3: In the following discussion thread at llllllll.co please consider posting your track:
Step 4: Annotate your track with a brief explanation of your approach and process.
Step 5: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.
Deadline: This project’s deadline is 11:59pm (that is, just before midnight) wherever you are on Monday, October 9, 2017. This project was posted in the early afternoon, Manhattan time, on Thursday, October 5, 2017.
Length: The finished track should be roughly five minutes long.
Title/Tag: When posting your track, please include “disquiet0301” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.
Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.
Download: It is required, per the source audio’s Creative Commons license, that your track is set as downloadable, and that it allows for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution).
Linking: When posting the track online, please be sure to include this information:
More on this 301st weekly Disquiet Junto project (“Artfully reduce an album to something less than itself”) at:
More on the Disquiet Junto at:
Subscribe to project announcements here:
Project discussion takes place on llllllll.co:
There’s also on a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.
Image associated with this project is the cover of the Lee Rosevere album 5 Minute Meditations, which was the source audio. The music and image are used thanks to a Creative Commons license:
More from Lee Rosevere, who is based in Vancouver, BC, at
https://twitter.com/LeeRosevere http://freemusicarchive.org/music/Lee_Rosevere/ http://happypuppyrecords.ca
|
OPCFW_CODE
|
package buka.wetten;
import java.util.List;
import buka.quoten.Quote;
import buka.quoten.QuotenFactory;
import buka.tipps.TippFactory;
import buka.tipps.TippOfUser;
public class WettStrategieSchwarmintelligenzUndQuote implements WettStrategie {
private final WettStrategie wettStrategie1;
private final WettStrategie wettStrategie2;
public WettStrategieSchwarmintelligenzUndQuote(final List<TippOfUser> tipps, final Quote quote) {
wettStrategie1 = new WettStrategieSchwarmintelligenz(tipps);
wettStrategie2 = new WettStrategieQuoteSafe(quote);
}
public WettStrategieSchwarmintelligenzUndQuote(final TippFactory tippFactory, final QuotenFactory quotenFactory) {
this(tippFactory.getTippsOfUsers(), quotenFactory.getQuote());
}
@Override
public Wette getFavorisierteWette() {
Wette wette1 = wettStrategie1.getFavorisierteWette();
Wette wette2 = wettStrategie2.getFavorisierteWette();
if (wette1.isDoNotBetBet() || wette2.isDoNotBetBet() || wette1.getWetteAuf() != wette2.getWetteAuf()) {
return Wette.LIEBER_NICHT;
} else {
double wahrscheinlichkeit = (wette1.getWahrscheinlichkeit() + wette2.getWahrscheinlichkeit()) / 2;
Wette wette = new Wette(wette1.getWetteAuf(), wahrscheinlichkeit);
return wette;
}
}
}
|
STACK_EDU
|
once the wage is in effect fewer new businesses will be created.
Just because you believe it doesn't mean that it is true. Even professional economists can't agree on such a simple statement, since the details are so complex. But whatever floats your boat Mr Dunning Kruger.
They would get them back and then punish them and then separate them.
Exactly. If that's what he deserves, then truth will out.
And I have seen an awful lot of people saying that he wasn't worth any particular effort to get back, which is pretty close to "let him rot." That's just mind-boggling to me.
I was at Minot for five years, which seemed particularly like exile after having been in England, about an hour away from London, for two years before that. I will say that it wasn't quite as bad as I expected it to be when I got my orders.
Were you at Dover? I've always heard that's kind of the East Coast's equivalent of Minot. [1/2 g]
Then-PFC, now-SGT Bergdahl may in fact have deserted his post. There are certainly credible accusations to that effect, and if so, then he should be tried and convicted for the crime. But it's a whole lot easier to investigate those charges with him here, and we don't let the Taliban mete out justice for us.
So in that sense this is the most elegant natural solution.
Haven't you heard about the consensus. (Before claiming that science is not consensus, that is a different issue, and a way to avoid the point, that the vast majority of scientists disagree with you.)
How can you tell if its a political document rather than a science. First sign is it came from a political organization. The second is that its not peer reviewed.
Oh yeah, there are problems in how the final language of the report is written, which every county pushing their special interests into the language. Don't change the fact that the scientists signed off on the document.
Try reading some of the citations in the report. See how well they match suggested claims in the report.
That's a howler, because I *have* read some of the citations of the report, and it seems very well written to me. If you a referring to some particular controversy in some paragraph (or sentence) of AR5, then you should (1) provide a citation, and (2) admit that the report is larger than one paragraph or sentence.
Yea i know several scientists that where involved with the last IPCC report and vowed never again.
I know of complaints from scientists about how the science gets watered down, and a rosey sheen is painted onto some parts of the problem. Doesn't change the central point that I was making: we can and must move to a carbon neutral economy. Quibbling over semantics won't change that at all.
|
OPCFW_CODE
|
@17cupsofcoffee, I'm delighted to hear this! We need to work on our humility, and this seems like a great step.
@Yandros, There doesn't appear to be a lint for this -- should there be?
I'm also a little disappointed that static mut produced such better code on x86. My suspicion is that thread_local, which would generate gs-relative addressing, wouldn't perform as well, but I can test it. Lately I've been working mostly on ARM, where stack-relative addressing tends to be cheaper because of the lack of a compact way to do absolute addressing.
Fortunately, this seems to be an artifact of how the tables are accessed -- I have a simpler version that performs better, which will appear in Part 6.
I agree that Cell winds up being important, but I'm not sure this is the best place to introduce it, because using it in a thread local relies on generics and other concepts I haven't introduced yet. (I also wouldn't use it here in idiomatic Rust.)
@andresovela, you've found two editing mistakes on my part! It looks like the first one isn't present in the nbody-1.rs file that's linked toward the end of part 1, since I compile and test that one. I suggest downloading it and trying it.
I've fixed both issues in the article. Thanks!
Uh-oh! I'm worried about this one. Can you post the output you do get? Can you compare against the nbody-1.rs full program?
The algorithm is kind of sensitive to where parentheses appear in floating point expressions. There's also some += and -= that will corrupt the results if swapped. (I know this because I screwed both up when I was initially transcribing the program.)
I haven't read the tutorial yet (and I'm not the target audience), but I wanted to quickly say thank you for writing this!!! We've tried to make the book background-agnostic, which means we're often not presenting the material in the best way for every reader. We need more background-specific resources like this!
Thanks for posting that brilliant article.
I'm not sure of the premise about C programmers though. As an old time embedded systems programmer, in C and other languages before that, I can appreciate their value system when selecting a language: Native compilation, no run-time overheads, small binaries, performance, and above all control of what is what.
As such, my first experiments in Rust were exactly reimplementing some C programs in Rust. So as to evaluate performance, code size etc. Of course I produced what is probably very bad, non-idiomatic, Rust code that looked like my C. I was immediately impressed how Rust met all the requirements I mentioned above.
But I had no "unsafe" anywhere. I just rearranged things a bit until the compiler was happy.
I'd just like to say this is very well-written and +1 the sentiment that this fills a very real gap in the landscape of Rust education/documentation.
Aside from the #[allow(...)] thing that others mentioned already, the only potential improvement that jumped out to me is this: I'm of the opinion that we should avoid saying things "this unsafe code is safe, but that unsafe code is unsafe" because it makes "unsafe" into a very murky term implicitly switching between two or more meanings multiple times in a sentence, when it really needs to be a crystal clear term. Admittedly, this is not a settled issue with a clear community consensus, and for all I know I might be in the minority here, but the nature of unsafety is so central to this book that it's probably worth thinking about whether sentences like "Here's an aside on when unsafe is safe" should use some other word like "correct".
EDIT: spotted one other small thing in part 4:
Because the union is defined in the same file as advance , putting pub on the accessors doesn't actually do anything ... For the purposes of this tutorial, I'm keeping everything in one file.
I believe you can declare a mod inside that file, and then the rest of the file really would be forced to use the pub accessors. I'm not sure if that's a net pedagogical improvement, but it's probably worth considering.
I like the emphasis on 'whatever you could do in C, Rust can do it too', it's a much-needed approach.
However, it may be a good idea to change the order of lessons around a bit. The way it looks right now, I'd expect the typical C programmer to look at the first few diffs, then conclude "Rust is way too verbose/complicated, I already know C and can handle memory just fine, because I'm a Good Programmer™", which means that most of them are never going to get to the good parts.
Also, a quick primer on Rust's variable declaration syntax may be a good idea. For someone who only ever used C, asm and Bash, something like : [f64; 3] may look like utter gibberish, and type inference may be mistaken for dynamic typing.
The standard language a lot of us are trying to standardize around is "sound" versus "unsound" for functions. A block is unsound if it does something illegal, and sound if it breaks any rules.
For a safe function, in order to be sound, it must be sound for all possible inputs (and state, if relevant) producible in safe code. For an unsafe function, it must be sound over all documented supported inputs (and state). Unsafe functions also get the distinction of having sound and unsound invocations.
And then the final step would be "unsafe but unmarked and private" which is a safe function that is sound for some inputs but not all, but can still be fine if encapsulated in code which is always sound. (But many people would tell you to mark it unsafe for greater explicitness anyway.)
Yeah, I agree, actually. I went back and forth on phrasing there. To some degree, I like the juxtaposition of things like "making safe out of unsafe," but I also see that it could be confusing.
I'm a fan of the sound/unsound distinction that @CAD97 raised, and I don't think it's excessively jargony. I'll play with the wording.
I think that's a real risk, but I'm also not sure that I can reach someone who's approaching the tutorial looking for reasons to stop reading it. I could be more forceful in Part 1 pointing out that the code is going to get worse before it gets better, I suppose.
I agree that the words are uncomfortably similar. Unchecked/checked isn't the distinction we're trying to make, though -- it's between these cases:
This bit of code is written using safe parts (in the Rust sense) and is sound. (Which should be true of all safe code, but sometimes safe code has bugs.)
This bit of code is written using unsafe parts and is unsound (which is usually what happens the first time I try to write something using unsafe).
This bit of code is written using unsafe parts and is nevertheless sound.
Both 2 and 3 are "unchecked" in the sense that the compiler doesn't have our backs.
I think the safe/sound distinction appeals to me because of the English expression "safe and sound" meaning that something is comfortably secure (and probably also warm and cozy). With Rust, achieving "safe and sound" is relatively easy.
I'm not sure if you're offering that as an argument in favor or against the use of the term; in general, I've specifically tried to avoid using math jargon in these articles. The existence of a non-math analogy for soundness makes me more inclined to use it, though.
When used with a specific meaning in parts of logic and proof theory, "sound" is not mathematical jargon, but a mathematical term, just like "function", "real number", "complex number", "group", "field", "self-adjoint".
Unfortunately, i am not experienced enough in that part of logic to see if the term is appropriate here.
I was puzzled for a moment by "mathematical jargon", but then i've figured out that "pathological", "elegant", "folklore", etc., are examples of mathematical jargon.
Hm. I didn't intend jargon as a value judgment, but it sounds like you heard it that way. By "jargon," I mean terms that are opaque or difficult to understand for people outside of a particular field or profession. I'm explicitly trying to avoid using math terminology that isn't commonly used in programming, so that the tutorials remain accessible to people (like me) without formal math training. And so "function" is okay but I'm not using terms like "field," "ring," "isomorphic," or (say) terms from category theory.
Hope that makes sense. I'm not attacking math, just aiming for a target audience who may not be comfortable with it.
It's that specific difference that we're trying to capture in a safe/sound distinction. Even if casual usage of the terms is casual, it definitely helps if written material consistently uses two different terms for the different meanings.
Even pure synonyms are noticed when used for disjoint (but related) concepts consistently. If we save even one moment of thinking between "is this "unsafe as in human-checked" or "unsafe as in causes UB", then the term split is worth it. It doesn't even matter if it's a conscious acknowledgement in the reader if we can bias them towards assuming the correct case from the start.
I understand the position of "terminology doesn't matter that much, context will figure it out," but I don't agree with it. (Maybe it's my formal background coming through.) Terminology exists to help the reader understand faster, and we should especially try to make understanding easier around unsafe (as in human-checked) code.
That's why "jargon" isn't great: you either know the meaning or you don't, and if you don't, it's worse than a more ambiguous but more obvious term.
Oh, and one more thing:
That's not correct. Code in an unsafe block is just as checked as code not in an unsafe block. It "just" gives you the superpowers of 1) dereferencing raw pointers, 2) using union fields, and 3) calling other unsafe APIs. This gives you the power to break rules upheld mechanically in not-unsafe code, but nothing changes about the safe subset of the language.
|
OPCFW_CODE
|
Use SAP Document Management service (DMS), Application Option to Store Document Files
Blogs for DMS series:
- Use SAP Document Management service (DMS), Integration Option to Store Document Files
- Use SAP Document Management service (DMS), Application Option to Store Document Files
Introduction of DMS
SAP Document Management Service is a content management solution on the Cloud Foundry environment of SAP BTP.
You can consume the SAP Document Management Service in different ways:
SAP Document Management, integration option lets you build document management capabilities for your business applications.
SAP Document Management, application option is a standalone, ready-to-use web application that provides document management capabilities for your enterprise content.
SAP Document Management, repository option lets you securely store and manage your documents.
Document Management, repository option isn’t an independent offering. You can use it for storage purposes along with Document Management, integration option or Document Management, application option.
You can purchase Document Management, integration option or application option, for storage purposes, you can either connect to your own storage or use Document Management, repository option.
Your own storage should be CMIS-compliant, on-premise, or cloud repository.
Thus, you can consume the DMS by the following combination:
Document Management, integration option + your own storage
Document Management, integration option + Document Management, repository option
Document Management, application option + your own storage
Document Management, application option + Document Management, repository option
Usage of Document Management, repository option
Usage of Document Management, repository option is defined as an internal method. The Document Management Service, Repository Option can’t be used as a standalone. It must be used with one of two options: Document Management Service, Application Option or Document Management Service, Integration Option.
As it’s a commercial entitlement, no tiles appear in the cloud cockpit’s. You should assign the entitlements of the
Document Management, repository option to the same subaccount where
Document Management, integration option or
Document Management, application option instance is created.
In the global account, assign the entitlement to your subaccount:
Usage of Document Management, application option
Step 1: Subscribe to the Document Management, application option
In the subaccount, find the tile in the Service Marketplace:
Create an instance:
Step 2: Grant User Access
Create a new Role Collection, add corresponding scopes into it and assign the Role Collection to yourself:
Role Role Capabilities SDM_Admin and SDMWeb_Admin Provide admin capabilities to a user. With admin capabilities, a user can add, sync, update, and delete repositories. SDM_User and SDMWeb_User Provide capabilities to use the web UI of Document Management Service, Application Option. SDM_MigrationAdmin and SDMWeb_Migration Provide admin capabilities to use the web UI for Document Management Migration. With admin capabilities, you can initiate the migration, check the status, and also download the logs.
Step 3: Connect to Document Management, Repository Option
In the Document Management, application option instance, select Go to Application.
You can see three different tiles:
Document Management that takes you to the web application for managing files and folders
Document Management Admin that helps the admin to manage repositories.
Document Management Service Migration to manage migration scenarios
Document Management Admin tile, connect to Document Management, Repository Option by adding a repository:
Parameter Required Values Repository Type Yes Internal Display Name Yes Name of the repository that appears in the application. Description Optional Description of the repository. Hashing Algorithm Optional Lists the supported hashing algorithms to generate content hash for your documents. Choose None if you don’t want to generate hash. Versioning Optional Enable the option if you want to maintain multiple versions of your documents. Virus Scan Optional Enable the option to scan the documents before you upload them. You can upload files of size up to 400 MB. Disable for large files Optional The option is available only if you enable virus scan. Enable the option to disable virus scan for file size greater than 400 MB. Collaboration Optional Enable the option to create collaboration repositories. For more information, see .
Step 4: Consume Document Management, application option
Document Management tile, try to add folder or files as you like:
Step 5: Collaboration Repositories
Collaboration repository can create
Share folder which is set as private by default. Then you can configure the access control to share with your collaborator.
More detailed information: https://help.sap.com/viewer/f6e70dd4bffa4b65965b43feed4c9429/Cloud/en-US/926d32b7cd5145acb93c4317f58e2751.html
Create a shared repository by checking
Collaboration attribute as below:
Create a shared folder in the root path:
Add user to your shared folder:
Step 6: Access Application Option Repository Using API
You can access repositories onboarded via Document Management Service, Application Option through API by connecting to the instance of Document Management Service, Integration Option.
Create a Service Instance and Service Key for Document Management Service, Integration Option. For more information, see .
In the SAP BTP cockpit, choose Connectivity -> Destinations -> New Destination -> Service Instance.
Select the Service Instance that you created in the drop-down list. Enter the Name and Description.
Choose Next -> Save.
Open the Document Management Service admin view.
Choose the icon to edit your repository.
In the dialog box that appears, enter the destination name that you created in the field Service Instance Destination.
Choose Save to update your repository.
Consume API to connect your repository. For more information, see .
More details: https://help.sap.com/viewer/f6e70dd4bffa4b65965b43feed4c9429/Cloud/en-US/e5f4e592f37748759ecd551f3afcd364.html
if we configure S/4HANA DMS with SAP BTP DMS for attachments in our business objects (product, article, purchase order ...)
how can we have a URL like https:<base_url>/article/<image_id>.png
let's say I have 20000 images for my products (articles), we would like to be able to dynamically build our URL like appending the article number to a base URL, like any content server
the CMIS generates ugly long URL
I don't think SAP BTP DMS can offer that, a tools like ECM maybe would ?
Have you tried to move the parameters in the URL to the body in the format of form-data
that is the problem with this API, you need to know the repo ID (could be harcoded) and the object ID
we just need API where we pass a filename, otherwise we will need a cross-reference table of object id and file name per products by example, no super convenient
Hi Tia Xu ,
In the repository type,does internal means SAP based storage service and external means Amazon S3 ,blobstore...
Is it possible to use Amazon S3 as the repository and store data in it.Or is the service similar to the S3 or an extension of the S3.
Will the repository changes on change in the deployed platform(like AWS,Azure,GCP...) .
Please help me with this,I am trying to understand the storage of data in DMS and how it is different from Amazon S3.
Thanks & Regards,
I hope this answers your question:
There are two types of repositories in Document Management Service:
Document Management Service's cloud repository.
If you want to add internal repositories, you need the Document Management Service, Repository Option entitlement.
An additional charge is applied for storage.
OpenText or SAP S/4HANA DMS repositories are examples of third-party CMIS-compliant repositories.
The repository provider is responsible for providing their own repository, so there are no storage charges.
|
OPCFW_CODE
|
What Island is likely to be mentioned on 19th Century Somerset Will?
Today I obtained the 1836 Will of my 4th great grandmother Martha Chichester (nee Noake) who lived in Old Cleeve, Somerset, England. She mentions two sons living at an Island of ?????? which I am guessing will not be too far away.
Does anybody know what the name of the Island is that is written twice at the end of the third and sixth lines in the graphic below?
The text appears to refer to the "Island of Demarara" - note spelling.
The "e" occurs in various places (beginning of 3rd line in "equal", end of 2nd line in "transfer").
After the "e" there are 3 verticals of the "m".
The "a" is fairly clear.
The "r" is in "fourth" in 3rd line, and "property" on 2nd line.
Can't see a capital "D" anywhere else but it appears to be one of those where the left-hand vertical is omitted.
If you Google the phrase, then there are several references to the "island of Demarara" - and with that spelling. These references match the history of Demerara, which became part of Guyana. Quite why the word "Island" is there, I am not sure - it's certainly not an island out in the middle of the sea - possibly the area was then surrounded by rivers and marshes.
I first read the place name as the Island of Jemarara, but can't find any online reference to such a place.
I found several contemporary references to the "island of Demarara", such as a London Gazette chancery notice for a Reverend Benjamin Thomas Williams who died at St Mattthew in the island of Demarara, in the West Indies. I am not sure which of the many West Indian islands it might be, but I think it probably goes by a different name now.
Updated information on Islands of Demerara (Guyana) from Wikipedia:
Essequibo Islands-West Demerara (Region 3) is a region of Guyana,
split in two by the Essequibo River. It is bordered by the Atlantic
Ocean to the north, the region of Demerara-Mahaica to the east, the
region of Upper Demerara-Berbice to the south and the regions of
Pomeroon-Supenaam to the west.
Thank you for posting this Valerie (I removed your name from the post because it is already signed using your user card where you are free to include it). I'm keen to try and place where the various Chichesters were living in Demerara/Demerara-Essequibo/British Guiana prior to about 1850. So far: estate No. 35 at Corentyne Coast, Plantation Zealand, Mahaica, Yorkshire Hall, Quaker's Hall, Drill, Ruimveld - my email is in my user card, if easier.
|
STACK_EXCHANGE
|
Last Updated: 06/11/2006
Okey, firstly. This fix does not involve you using PSix, Flashmod or any type of homebrew. What it does, is get rid of those annoying corrupted icons, and it doesnt leave any 'spaces', like the popular fixes mentioned above. Its very simple to do, so you guys shouldnt have too much trouble.
Download the attached Renamer.rar and extract it to your desktop. You will be left with a renamer.exe file.
Plug your PSP in, and set the USB connection on.
Copy over the renamer.exe to your GAME folder on your PSP. (Open up your PSP from My Computer, double click on the PSP folder, then double click on the GAME folder)
Run renamer.exe, and it will rename all the folders inside the GAME folder. It'll look weird, but basically it adds spaces and underscores. ( _ )
Thats it! Unplug your PSP, browse over to the game icon, and voila!
Q) Does this work on Firmware 2.6?
A) This works on all firmware versions of the PSP. Reason for this, is that nothing is actually done via the PSP, its a simple renaming program run from Windows.
Q) Can I run this from Linux?
A) Ya funnily enough, this program isnt so hard to do. Firstly, you'll need wine installed. Once you have that, simply open up a console, then goto the directory where the renamer.exe is located. Then simply type 'wine renamer.exe' (without the quotes), and press enter. Simple :icon_bigg (This has been tested with Ubuntu 5.10 Breezy, but should work on most other distrobutions)
Q) When I try and run a certain homebrew/program, it doesnt work!
A) Below are the reported homebrew(s) that currently dont work with this renaming method. Dont ask me why they dont work, I'm not a PSP coder :P
- Mario Milestone
- PSiX Lite or PSiX Pro. (Note ~ Dont use the renamer if you plan on
using Psix (either lite or pro). It will muck up your icons within Psix, even if u dont rename the psix folder(s) themselves. You have been warned!)
- PSP Theme. (Refuses to load, and then needs to be reinstalled. Thanks to mrbagrat for reporting this)
- Snes9xTyl 0.3 ME (Workaround: Rename the folders from s9xTYLme & s9xTYLme% to snestyl and snestyl%, then run renamer.exe :))
I hope this has helped you as much as it has helped me, and all credits goto mannymix03, since he's the one who gave me this program. So thank him next time you see him :)
Secondly, as there are hoardes upon hoardes of homebrew that is available for the PSP, I cannot keep up with all of them. If a certain homebrew does not work, just send me a PM with its name, and what happens, so I can add it to the guide. Not only will this inform future users, but you also get ur name in teh credits? l33t I know lol.. :Punk:
|
OPCFW_CODE
|
Do my assignment uk 24 hours - evolution college paper writers
Really didnt do my assignment uk 24 hours modern
Of fulfilled not Life of since the but among of Field-Marshal hereby Duke what the happiness depended now a Wellington do my assignment uk 24 hours September 4 2015, 4:41 am on afterwards remained thus duties days chance of even. Out even eleven another set I get for which that same show England do my assignment uk 24 hours. less and much thing among was delightful much this every refreshed him William said a another . It I hundred last of exactly and tore this in the afterwards thought servant's two he whereas said felt same. come not guests Mrs cannot these next a welcome lose no in with taken been saying day you now that week thin regretted but as best clothes through flannel suit do my assignment uk 24 hours have part Hall of and therein petticoat a which time however having herself brought system change anyhow you I. Two whereas the but the morning in back was and thought trotting serious all that however fast men horses afterwards waggons five 24 therefore the about came itself it as had passed too possible running becoming I do my assignment uk 24 hours morning rose formerly as whenever unlikely.
Do My Assignment Uk 24 Hours - .xyz
He a be system one must the at were of the a have be down the due a is shall age attained dishonouring of no hours do assignment 24 my uk him the already has couldnt nomination and per Church become has nevertheless of country have eligible pronounced do my assignment uk 24 hours and and who is and sentence or attained age sincere burgher need hours do 24 assignment uk my years in the for a nobody State member of they naturalized anyway Protestant of burgher vote although twenty-one thirty to per must. There indeed do my assignment uk 24 hours shall whoever any whose objection the decide Council. after understand thoroughly great whither may thing the what explain to be some the both is anyhow thinkers move it the despise my 24 uk do hours assignment it everything redistribution to commercial to subject license. Than wire transfer give you be beside other Volksraad Volksraads us a both for the some fixed or to within for same payment have want do my assignment uk 24 hours money later September 4 2015 always by First check both shall if or method number order. With which or performed Project into sentence Gutenberg" hence phrase along with almost since following no and September 2 2015 anywhere restrictions something on License to more phrase or whereby any "Project which appears links no Gutenberg-tm whence active access at do my assignment uk 24 hours prominently take the displayed (any must copy the full the the Project that accessed do my assignment uk 24 hours viewed whenever or everywhere whatsoever eBook a is other cost work is copied with of of immediate distributed whereby appear "Project for use anyone.
|
OPCFW_CODE
|
Massachusetts Institute of Technology Full-Stack Developer, Poetic Justice in Cambridge, Massachusetts
Full-Stack Developer, Poetic Justice
Job Number: 19847
Functional Area: Research - Other
Department: Media Lab
School Area: Architecture & Planning
Employment Type: Full-Time Temporary
Employment Category: Exempt
Visa Sponsorship Available: No
Email a Friend Save Save Apply Now
Working at MIT offers opportunities, an environment, a culture – and benefits – that just aren’t found together anywhere else. If you’re curious, motivated, want to be part of a unique community, and help shape the future – then take a look at this opportunity.
FULL STACK DEVELOPER, Media Lab-Poetic Justice (https://www.media.mit.edu/groups/poetic-justice/overview/) , to join a group exploring new forms of social justice through art and work on a series of participatory public artworks. The group has been collecting sound and video recordings from around the world and remixing them into sound and video works, including A Counting (https://www.google.com/url?q=http://a-counting.us&sa=D&source=editors&ust=1624419189630000&usg=AOvVaw2Rc4DHUSz8mlVsJHWWiLzh) and Freedom Radio (https://www.google.com/url?q=http://freedom.radio/&sa=D&source=editors&ust=1624419189631000&usg=AOvVaw3gKFly7fqneYb7x98JsL6U) . Responsibilities include developing interactive voice response (IVR) systems using Twilio and Vonage; developing scalable systems to record, store, and manage audio and video recordings; developing scalable systems to transcribe audio and video recordings in multiple languages/formats; analyzing transcripts and and generating stories using transcripts; developing scalable tools for narrative generation; and developing dashboards and reports for user engagement, topic generation, and content summarization. A full description is available at https://www.media.mit.edu/about/job-opportunities/.
REQUIRED: bachelor's degree in computer science or related field; at least three years’ experience with web development/architecture and modern DevOps tools (e.g., Git, GitHub); at least two years’ experience with Python using Flask or Django and with third-party APIs within Python web applications; and ability to work creatively and analytically, communicate clearly and concisely in both technical and non-technical language, collaborate and pair-program effectively, and work independently. PREFERRED: experience with the following: installation, configuration, and development, including work within a production environment; DevOps and Agile engineering practices; deploying systems into a production cloud native environment with major cloud providers (e.g., Amazon Web Services, Google Cloud Platform); NLP libraries (e.g., Spacy, Hugging Face, NLTK); web application design and development using React or Angular; voice application design and development using Twilio or Vonage; and Twitter and Slack bot development. Job #19847-8
Will work in the NYC or Boston area.7/12/21
|
OPCFW_CODE
|
getting value from elements created dynamically
im working on a project(UWP C sharp) from college..
i need to build library program that manege books and magazines
i have a problem with the edit item page...
iv created method that create elements (text box, datepicker , etc..) dynamically by the type of the item that selected (if the user select book, he'll get the elements for book same for magazines..)
the problem is when im trying to write the event button that take the values from all those elements i cant reach them... cuz they in a method..
(like book.title = textbox.text;)
sorry for my English and Ty for the help:)
private void CreateBtnsByTheTypeOfTheItem(AbstractItem item)
{
TextBox editTitleTB = new TextBox();
editTitleTB.Text = LibManager.Instance.CurrentItem.Title;
Grid.SetRow(editTitleTB, 0);
editPageGrid.Children.Add(editTitleTB);
CheckBox editIsAvaibleCB = new CheckBox();
editIsAvaibleCB.Content = "Is Avaible";
editIsAvaibleCB.IsChecked = item.isAvaible;
Grid.SetRow(editIsAvaibleCB, 1);
editPageGrid.Children.Add(editIsAvaibleCB);
DatePicker editDatePicler = new DatePicker();
editDatePicler.Date = item.PublishDate;
Grid.SetRow(editDatePicler, 3);
editPageGrid.Children.Add(editDatePicler);
if (item is Book)
{
Book itemAsBook = item as Book;
TextBox editAuthor = new TextBox();
editAuthor.Text = itemAsBook.Author;
Grid.SetRow(editAuthor, 2);
editPageGrid.Children.Add(editAuthor);
var _enumval = Enum.GetValues(typeof(BookCategory)).Cast<BookCategory>();
ComboBox editCategpryCB = new ComboBox();
editCategpryCB.ItemsSource = _enumval.ToList();
editCategpryCB.SelectedItem = itemAsBook.Category;
Grid.SetRow(editCategpryCB, 4);
editPageGrid.Children.Add(editCategpryCB);
}
else
{
Magazine itemAsMagazine = item as Magazine;
TextBox editEditors = new TextBox();
editEditors.Text = itemAsMagazine.Editors;
Grid.SetRow(editEditors, 2);
editPageGrid.Children.Add(editEditors);
var _enumval = Enum.GetValues(typeof(MagazineCategory)).Cast<MagazineCategory>();
ComboBox editMagazineCategory = new ComboBox();
editMagazineCategory.ItemsSource = _enumval.ToList();
editMagazineCategory.SelectedItem = itemAsMagazine.Category;
Grid.SetRow(editMagazineCategory, 4);
editPageGrid.Children.Add(editMagazineCategory);
}
"cuz they in a method" , not sure I follow... but if I am following you at all, it's just a matter of scope, so just store your data in a dictionary/list, make it static, or public in a class, and reference it that way.
they didn't teach us about dictionary yet...
i wanna know if i can get values from those elemnts some how i know i cant reach them cuz they live only inside the method
I think you are looking for: editPageGrid.Children
tried that, dosent work... cuz still they live only inside the method even if i called that method in the mainpage he need to know before runing that hes got those elements for sure...
Good luck, I have no idea what we are talking about.
If you have named each of the dynamically created field you can access them by using the FindName method.
TextBox editTitleTB = new TextBox();
editTitleTB.Name = "TheHobbitTB";
editTitleTB.Text = LibManager.Instance.CurrentItem.Title;
TextBox theHobbitTB = (TextBox)this.FindName("TheHobbitTB");
Then you could essentially edit what ever you want about this specific textbox.
|
STACK_EXCHANGE
|
The Developer Settings panel is separate from the App Dashboard. It allows you to specify which developer notifications you want to receive from us, add Android key hashes (if your app is an Android app), set developer documentation preferences, and see your Developer Community Forum profile. This document explains these settings.
You can access the Developer Settings by going to developers.facebook.com/docs, hovering over your profile image in the upper right corner, and selecting Developer Settings from the dropdown menu.
You can turn notifications on and off for each of your apps in the Notification Settings tab. Business apps and individual apps are listed separately, and you can access your Business Settings or Facebook Settings for updating your associated email addresses. You can also delete your account, if necessary.
Notification settings are available for
You can click All On/All Off for each notification type or customize them by selecting individual notifications and the delivery method for each. For customization, click the notification type heading to open the list of notifications, then select the delivery method from the drop-down menu to the right of the individual notifications you wish to receive. Make sure to click Save Changes at the bottom of the page for any updates to take effect.
To subscribe to Platform Status notification settings, go to Platform Status dashboard and click Subscribe at the top right.
Get urgent updates that may require an action from you to prevent your app from breaking.
|Urgent Notification||Description||Available Delivery Methods|
Alerts about changes in app status including when the app is made public or removed
API Rate Limiting
Alerts about nearing and hitting the rate limit for API requests
Moratoriums & Restrictions
Alerts about moratoriums/restrictions placed on your app
Updates on the status of your app's URLs
Alerts about your existing Webhooks subscriptions
Alerts about a version upgrade affecting your app
App No Platform
Updates on your app's platforms
Get direct responses from Facebook about the status of your app.
|Conversations Notifications||Description||Available Delivery Methods|
Updates on the status of your appeals
Alerts about app review status updates
Alerts about reported bugs for your app
Direct Support Questions
Alerts about direct support questions
Alerts about the FbStart program
Updates on the status of the Developer Circles program
Alerts when your app is determined to be inactive
WP Security Review
Alerts about WP security review status updates
Get updates about Facebook platform products you use.
|Product Notifications||Description||Available Delivery Methods|
Login & API Errors
Alerts about when your app throws errors during login or for other API calls
Alerts about your ad placements with Audience Network
Get data and trends for your apps
Alerts about your app's integration with the Marketing API
Open Graph approval status updates
Alerts about canvas payment and payout updates
New products, events, surveys and more
Updates on Gameroom Native changes
Profile Plus API
Alerts about your app's integration with the Profile Plus API
Get updates on blog posts and documentation you're subscribed to.
|Blogs & Docs Notifications||Description||Available Delivery Methods|
Developer Blog Posts
Get notified about every new Developer blog post
Developer Blog Digests
Get the Developer Blog Digest
Marketing API Blog Posts
Get notified about every new Marketing API blog post
Marketing API Breaking Changes
Get notified about breaking changes to the Marketing API
Marketing API Blog Digests
Get the Marketing API Blog Digest
Instant Articles Developer Blog Posts
Get notified about every new Instant Articles blog post
Messenger Changelog Updates
Alerts when Messenger Changelog has new updates
Get updates on developer community forums you're subscribed to.
|Developer Community Forum Notifications||Description||Available Delivery Methods|
Alerts about developer community forum questions that have new answers
Alerts about developer community forum questions that have new answer selections
Alerts about developer community forum answers that have new comments
Alerts about developer community forum user profiles that have new updates
Developer Community Forum Weekly Digest
Get the latest updates from the community forum
Select Answer Reminder
Reminders to select answers for your developer community forum questions
Deleting your developer account will remove all of your apps. If you are the only Admin of an app, the app will be removed and all of its information will be lost. Reregistering as a Facebook developer will not restore removed apps.
Deleting your developer account does not affect any of your Facebook accounts, including Instagram, Messenger, or WhatsApp, or any of your Facebook Pages.
If you are the only owner of any apps, you will need to either remove them or transfer ownership before you can delete your developer account.
To delete your developer account, click the Delete Account button. A pop-up dialog will appear to confirm you want to delete your developer account; click Delete Account. You will then be asked to enter your password for security purposes. Entering your password and clicking Submit will delete your developer account.
The Documentation tab allows you to select a default programming language for displaying developer documentation code examples that have multiple programming language examples available.
Choose your preferred programming language from the dropdown menu, and click Save Changes.
The Community Profile tab displays information related to your Developer Community Forum profile such as your profile picture, display name, earned badges, and leaderboard points.
You can also see questions you've asked and answered and the forum questions you're following. New questions can be submitted to the Developer Community Forum by clicking on the Ask a Question button.
|
OPCFW_CODE
|
What is Office Online Server and Why SharePoint 2016 Needs it?
Microsoft Office Online Server is the next version of Microsoft Office Web Apps (OWA) Server, which was originally released in 2012. At the time of writing, there is a beta version of Office Online Server available for download. It’s called Office Online Server Preview.
Microsoft Office Online Server works with products and services that support Web Application Open Platform Interface (WOPI) , such as SharePoint Server 2016 Preview and Microsoft Exchange Server 2016. Office Online Server is essentially a standalone, on-premises server that allows you to run browser-based (online) versions of Microsoft Word, Excel, PowerPoint and OneNote. These are limited versions of the desktop versions and therefore do not include all the bells and whistles but come handy when you either want to simply read Office files or do some minor editing.
NOTE: WOPI defines a set of operations that enables a client to access and change files stored by a server. This allows the client to render files and provide file editing functionality for files stored by the server. [source: MSDN ]
Supported Operating Systems
At the time of writing, the operating system requirements are:
- Windows Server 2012.
- Windows Server 2012 R2.
How to Install Office Online Server
- Download the Office Online Server Preview from Microsoft. The iso file is 510.1MB.
- Start Windows PowerShell as an administrator and run the following command.
- Install the following prerequisites on Windows Server 2012 or Windows Server 2012 R2. This server must not be running SharePoint Server.
.NET Framework 4.5.2
Visual C++ Redistributable for Visual Studio 2015
All available Windows updates
- Start the Office Online Server setup program (setup.exe) and follow the onscreen instructions.
- If you have a 32-bit version of office installed on the computer, you won’t be able to install the 64-bit Office Online Server (see screenshot below). You would have to uninstall the 32-bit version of Office first and then install the 64-bit version of Office Online Server. This is rather interesting because I didn’t see the option to download a 32-bit version of Office Online Server (you can only download 64-bit version) and even if you could, Microsoft doesn’t recommend installing 64-bit version of Office. So you have two options. First remove 32-bit Office and then install Office Online Server, or if you need Office applications on the server then first remove 32-bit Office, download and install a 64-bit version of Office (yikes!) and then install Office Online Server. This may not be a big deal because a lot of administrators don’t want to install Office applications on the server but I thought it is worth mentioning.
NOTE: Microsoft doesn’t support Office Online Server Preview in a production environment so make sure you install it in a test environment.
For more details, check out this TechNet article on Microsoft’s Web site.
For more information on how to configure SharePoint to work with Office Web Apps Server, see Scripted Installation of SharePoint 2013 and Office Web Apps Server – From the Field (Part 4).
SharePoint and Office Online Server
In SharePoint Server 2016, Microsoft has removed Excel Services. Before you panic, I should let you know that even though Excel Services feature has been deprecated from SharePoint 2016, Microsoft has essentially taken that load off of SharePoint Server and moved it to the Office Online Server. You can install Office Online Server and configure SharePoint Server 2016 to use it for rendering Excel spreadsheets. I hope to write a separate article on this topic at a later time.
- You cannot install Office Online Server on the same server that runs SharePoint Server. This means that you need to have a separate server running Windows Server 2012 or Windows Server 2012 R2.
- Microsoft doesn’t support Office Online Server Preview in a production environment so make sure you install it in a test environment.
- For more details on Office Online Server, check out this TechNet article on Microsoft’s Web site.
- For more information on how to configure SharePoint to work with Office Web Apps Server, check out Scripted Installation of SharePoint 2013 and Office Web Apps Server – From the Field (Part 4). Yes, the article was written for SharePoint 2013 but you will find it useful for SharePoint 2016.
Copyright © 2015 SeattlePro Enterprises, LLC. All rights reserved.
|
OPCFW_CODE
|
from collections import defaultdict
hashtags_dictionary = defaultdict(list)
engagement_dictionary = defaultdict(int)
if input("View or change: ") == "change":
while 1 == 1:
if input("Ready (y/n): ") == "y":
likes = int(input("Likes: "))
followers = int(input("Followers: "))
hashtags = input("Hashtags: ").split(" ")
ratio = likes/followers
for hashtag in hashtags:
hashtags_dictionary[hashtag].append(ratio)
print(hashtags_dictionary)
else:
break
for hashtag in hashtags_dictionary:
hashtags_count = len(hashtags_dictionary[hashtag])
#print(hashtags_count)
hashtags_sum = sum(hashtags_dictionary[hashtag])
#print(hashtags_sum)
hashtags_engagement = hashtags_sum/hashtags_count
#print(hashtags_engagement)
engagement_dictionary[hashtag] = hashtags_engagement
print(engagement_dictionary)
|
STACK_EDU
|
For example, you want to display the “Add.png” image in a control in a regular context, and the “Add.scale-150.png” in the same control when the 150% DPI setting is applied to the system. The correct image should be automatically chosen at runtime based on the current context.
The DevExpress WPF controls support automatic image selection based on the current context. To enable this feature, images in the project must have dedicated qualifiers in their names, or these images must be placed in folders whose names contain these qualifiers. Qualifiers identify a version of an image that should be used in the current context.
Consider the following screenshot demonstrating five images added to the Images folder:
The "Add.png" file is a regular image.
The "Add.theme-metropolis.png" name consists of the regular name ("Add"), a qualifier name ("theme") and a qualifier value ("metropolis"), thus it identifies the app context when this image is applied (i.e., when "theme" is "metropolis").
The "Add.theme-metropolis_scale-150.png" name includes another qualifier ("scale") and its value (150).
Other images use the same naming notation.
The following code assigns the Add.png image to a button in a toolbar using the QualifiedImage extension.
The use of the QualifiedImage extension guarantees automatic image substitution based on the current context.
Add.png - This image will be used in all cases, excluding the following four cases which narrow the application context.
Add.theme-metropolis.png - This image will be used when the "MetropolisLight", "MetropolisDark" or "TouchlineDark" theme is applied, with the exception of the following case.
Add.theme-metropolis_scale-150.png - This image will be used when the "MetropolisLight", "MetropolisDark" or "TouchlineDark" theme is applied, and the system DPI setting is 150%.
Add.theme-office.png - This image will be applied when any theme containing "Office" in its name is applied, with the exception of the following case.
Add.theme-office_input-touch.png - This image will be used when any touch-aware theme containing "Office" in its name is applied. These themes are: "Office2013;Touch", "Office2013LightGray;Touch" and "Office2013DarkGray;Touch".
Instead of using qualifiers in image names, you can place images in folders whose names contain these qualifiers. The images shown above can be placed in folders as follows.
The QualifiedImage extension must specify the full path to the image that corresponds to the regular context. For the screenshots above, this path must be "Images/Add.png":
You can combine the two approaches by using qualifiers in folder and image names simultaneously.
Available qualifiers and their values are described in the following sections.
Qualifiers of Themes
The following table covers theme qualifiers and supported qualifier values.
office - Identifies all themes containing "Office" in their names.
office2007 - Identifies all themes containing "Office2007" in their names.
office2010 - Identifies all themes containing "Office2010" in their names.
office2013 - Identifies all themes containing "Office2013" in their names.
office2016 - Identifies all themes containing "Office2016" in their names.
metropolis - Identifies the themes: "MetropolisDark", "TouchlineDark" and "MetropolisLight".
standard - Identifies the themes: "Seven" and "VS2010".
DevExpress - Identifies the themes: "DXStyle", "LightGray" and "DeepBlue".
themeName - A qualifier value can be a specific theme name (except touch-aware themes). This allows you to target a specific theme. See the List of DevExpress WPF Themes topic for a list of available themes.
black - Identifies dark themes: Office2016Black, Office2010Black, MetropolisDark and TouchlineDark.
white - Identifies all other themes.
touch - Identifies touch-aware themes. These theme names end with ";Touch". The theme list includes: "Office2013;Touch", "Office2013LightGray;Touch" and "Office2013DarkGray;Touch".
mouse - Identifies all other themes, which are not touch-aware.
Qualifiers of DPI Settings
The following table covers DPI setting qualifiers.
A value that specifies the target DPI setting, in percentage. Supported DPI values include, but are not limited to: 80, 100, 120, 125, 140, 150, 160, 175, 180, 200, 225, 250, etc.
Qualifiers of Application Language (Culture)
The following table covers application language (culture) qualifiers.
A unique name identifying a target culture, based on RFC 4646. Here is the description from the CultureInfo Class topic in MSDN.
"The name is a combination of an ISO 639 two-letter lowercase culture code associated with a language and an ISO 3166 two-letter uppercase subculture code associated with a country or region. In addition, for apps that target .NET Framework 4 or later and are running under Windows 10 or later, culture names that correspond to valid BCP-47 language tags are supported."
|
OPCFW_CODE
|
Starting another activity using Android Intent
I am new to Android development, I was just going through the training here: http://developer.android.com/training/basics/firstapp/starting-activity.html
I have just wrote the code in Eclipse as is on the page; it's supposed to initiate another activity and should display the message which I typed in the text box of the current activity.
But I am getting an error while running the installed App on the AVD. The error message is
"Unfortunately my app is stopped"
LogCat last 10 lines are as below :
06-06 15:14:22.958: W/ActivityManager(1226): Unbind failed: could not find connection for android.os.BinderProxy@b346b948
06-06 15:14:22.978: D/dalvikvm(1535): GC_CONCURRENT freed 459K, 19% free 2445K/3012K, paused 29ms+5ms, total 177ms
06-06 15:14:23.509: W/Trace(1226): Unexpected value from nativeGetEnabledTags: 0
06-06 15:14:23.509: W/Trace(1226): Unexpected value from nativeGetEnabledTags: 0
06-06 15:14:26.278: W/Trace(1226): Unexpected value from nativeGetEnabledTags: 0
06-06 15:14:26.298: W/Trace(1226): Unexpected value from nativeGetEnabledTags: 0
06-06 15:14:32.988: W/Trace(1451): Unexpected value from nativeGetEnabledTags: 0
06-06 15:14:33.004: W/Trace(1451): Unexpected value from nativeGetEnabledTags: 0
There are lot more entries in LogCat which I don't think I can copy here. Can anyone please let me know how to figure out the exceptions or errors from this file?
I do not know how to find where the code has gone wrong or throwing exception.
Any suggestions are greatly appreciated.
Thanks
please post your logcat output
Please post your error message it would be better to solve.
You should read up on how to use the android DDMS(Dalvik Debug Monitor Server) <- http://developer.android.com/tools/debugging/ddms.html -> This will show you any Log messages and a strack trace for any crashes. In eclipse you can find the DDMS by going to Window/Open Perspective/Other/DDMS and you can add the Logcat window to your workspace by going to Window/Show View/Other/Android/Logcat.
Probably you didn't add your second activity to AndroidManifest.xml of your project.
First do this in AndroidManifest.xml:
<activity name=".ActivityB" />
Then in first Activity:
Intent intent = new Intent(this, ActivityB.class);
startActivity(intent);
As per the documentation, there are two activities,MainAcivity DisplayMessageActivity.
So declare the MainActivity in manifest
<application>
.......
<activity
android:name=".MainActivity"
android:noHistory="true"
android:label="@string/app_name">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
android:name=".DisplayMessageActivity"
......//documentation
Try out this code in your class file:
Intent intent = new Intent(Activity.this, Activity.class);
startActivity(intent);
Register your activity in AndroidManifest.xml:
<activity android:name="<Package>.Activity" >
</activity>
Thank you all for the suggestions,But since I am using Eclipse these two activities are already exists in AndroidManifest.xml .
|
STACK_EXCHANGE
|
- Toy Story 3 Card 10 - Prickle Pants
Junkyard Heroes: Part 2
Switch to Woody, and start shimmying your way toward the right. When you've reached the end of the pipe, jump sideways onto the ventilation shaft, and run away from the screen to grab Toy Story 3 Card 10 - Prickle Pants. Afterward, travel to the right until Woody reaches a rapidly spinning fan. He can't pass it yet, so it's time to play as Jessie. Run the cowgirl rightward until she lands on a lever, and watch as it opens an overhead hatch. That's going to help Buzz, so switch to Lightyear, and start moving quickly. You'll soon reach a set of parallel steel sheets. Wall jump your way to the top.
Now, grab the right block, and walk toward the fan in front of it. Throw the compacted trash at the fan to destroy it, then do the same thing to the fan on the left. Lastly, pick up another block, and walk off the ledge at your right. Buzz will drop into a lower area. Face the rear of the junkyard, and throw the trash. It'll drop to Jessie's area. Afterward, take control of Woody, and leap along the broken fans, then climb the ducting. When you reach a ledge (right beside another electrical switch), change to Jessie.
Hop and Land on the lever again, then—when it stops—leap over to the pipes. Run up to the block Buzz provided you with, and bound onto it. Jump to the ledge at your left, and pop open the Token capsules before charging on toward the right. When you reach a dead end, switch to Woody, and use your pull string to swing across the gap.
Change to Buzz, and run over the platform Woody lowered for you. Dash 'round the corner, and you'll be faced with the flattener. It moves quickly, but there's no other way—you have to run through it. Stand next to the first one, and the moment it drops, hurriedly rush through. When you reach the other side, double-jump to the catwalk above, and start pushing the first compressed block toward the flattener. It'll jam up the first one, but you'll have to sprint past the second to reach another bundle of trash.
Before you try running by, move toward the screen and press up against the bannister. You should now be able to dash leftward without running into the block. This time, wait until the functional flattener is pulled up, then hurry through. On the other side, pull the block away from the wall, then run around to the left side, and push it into the machine to completely gum it up.
|
OPCFW_CODE
|
How can I display numbers with different arithmetic operator
I'm trying to display different expressions with different arithmetic operator for example 2+6=, 7-1=, 9*2=, 10/5=. From the code I have at the moment only integers with the "/" operator are working, other operators don't. My code is:
fnum0 = (int) ((double) ((Math.random() * 10)));
snum0 = (int) ((double) ((Math.random() * 10)));
display.setText(fnum0+"+"+ snum0+"= ");
fnum1 = (int) ((double) ((Math.random() * 10)));
snum1 = (int) ((double) ((Math.random() * 10)));
display.setText(fnum1+"-"+ snum1+"= ");
fnum2 = (int) ((double) ((Math.random() * 10)));
snum2= (int) ((double) ((Math.random() * 10)));
display.setText(fnum2+"*"+ snum2+"= ");
fnum3= (int) ((double) ((Math.random() * 10)));
snum3= (int) ((double) ((Math.random() * 10)));
display.setText(fnum3+"/"+ snum3+"= ");
It displays the '/' operation (last one) only because you overwrite the previous operation string each time you call setText(). You need to call that method with your finally formed string. Therefore you can use the append() function to concatenate the strings and represent them all in your display view.
Try the following:
StringBuilder str = new StringBuilder(1000);
fnum0 = (int) ((double) ((Math.random() * 10)));
snum0 = (int) ((double) ((Math.random() * 10)));
str.append(fnum0+" + "+ snum0+"= \n");
fnum0 = (int) ((double) ((Math.random() * 10)));
snum0 = (int) ((double) ((Math.random() * 10)));
str.append(fnum0+" - "+ snum0+"= \n");
fnum0 = (int) ((double) ((Math.random() * 10)));
snum0 = (int) ((double) ((Math.random() * 10)));
str.append(fnum0+" * "+ snum0+"= \n");
fnum0 = (int) ((double) ((Math.random() * 10)));
snum0 = (int) ((double) ((Math.random() * 10)));
str.append(fnum0+" / "+ snum0+"= \n");
display.setText(str);
will that code display each expression with a differen operator?
Yes it will display each expression with four different arithmatic operators (+-/*) in each line. That's what you wanted I suppose, right?
yes, it was kind of right but i wanted to display each operator everytime a button is pressed
The problem is you are setting the text 4 times effectively overwriting the text for each subsequent call.
If you want all 4 of the expressions in the text you have to concatenate all the strings like this:
display.setText(fname0 + "+" + snum0+"=\n"+fname1 + "-" + snum1+"=\n"+fname2 + "*" + snum2+"=\n"+fname3 + "/" + snum3+"=\n");
I would also recommend that you make it a little more readable by creating a string and appending it.
String str = fname0 + "+" + snum0+"=\n";
str += fname1 + "-" + snum1+"=\n";
str += fname2 + "*" + snum2+"=\n";
str += fname3 + "/" + snum3+"=\n";
display.setText(str);
To do a random operation try this.
fnum0 = (int) ((double) ((Math.random() * 10)));
snum0 = (int) ((double) ((Math.random() * 10)));
String str = "";
int operation = (int) ((double) ((Math.random() * 4)));
if(operation == 0)
str = fnum0 + "+" + snum0;
else if(operation == 1)
str = fnum0 + "-" + snum0;
else if(operation == 2)
str = fnum0 + "*" + snum0;
else
str = fnum0 + "/" + snum0;
display.setText(str);
Basically this generates a random number from 0 to 3 and uses that to determine which operation is displayed
Will the code that you have writtem give me all for operators in one expression?
It should with a line break char "\n" in between them all if you want to display one of the 4 lines you will have to wrap the code you have in an if statement that determines which one you display.
i want it to display randomely with different operators. IS there anyway i can do that?
how can do i remove the restriction from the random number like it can be any random number but 2 integers but consist of ten questions
Math.random() generates a number from 0.0 to 1.0 multiplying this by a number (N) generates a random number between 0 and N-1. If you want any random number you can make this Integer.Max_Value. If you then want 10 of them what I would do is make the above code a method (except for the display.setText(str); and call it ten times to create 10 strings and concat those strings and then set the text to the concatenated string.
so can i use the above code to display ten question with random operation?
yeah basically. Create a method that does the above and returns the string of the equation. Then whenever you want a random equation call the function and you'll get one. You can call the function any number of times to get the string.
|
STACK_EXCHANGE
|
No link to the 10k moderation tools when my account has a review suspension
I'm missing a convenient way to access the 10k moderation tools as described on SO privilege help, because it shows I'm suspended from review. I can still access them by directly navigating to /tools or via the help center, however.
Is this a bug? Or is it intentionally inconvenient? Or should I have been prevented from accessing moderator tools as well with the review suspension?
As you see above, the "tools" link is missing. Can we please get it back?
Moderator tool access shouldn't be related to bans from the queues, AFAIK.
What way are you using to access the mod tools? Is the "tools" link in the top-right of the review queue dropdown missing when you're review banned? Or are you used to navigating to the full /reviews page and clicking the "Tools" button in the top-right?
@Spevacus, the former one. The message about review suspension is the only dropdown content. I don't see /reviews page, assuming you meant /review page, the tools are accessible from there.
Gotcha. That sounds annoying. Maybe you want to tag this with [tag:feature-request] and include a screenshot that shows that the "tools" link isn't available in that dropdown due to being review-suspended, and ask for this functionality to be changed? I'd certainly support that change. The mod tools don't really have much actionable content, it's pretty much just informational. There's no reason for them to be more difficult to get to due to a review suspension, IMO.
Moderator tools are something that only moderators can use. 10k users tools page title is "Moderation Tools". Also, as mentioned in above comment, this better be feature request, I've edited everything in.
@ShadowWizardHatesOmicron Is this a feature request or a support request? It certainly doesn't seem like expected behavior to me? It's unclear whether it's a bug or by design, so it's not really a request for new functionality.
@ColleenV feature request, and it's even under review now.
@ShadowWizardHatesOmicron So we know this is an intentional design and not an oversight? It seems weird that we have to ask SE to not remove a link we still have access to just because we're review banned and call it a "feature" instead of a "bug fix".
@ShadowWizardHatesOmicron I agree with ColleenV, and have retagged it as a bug, because it's inconsistent: either the link should be there, or if review suspensions are supposed to also ban the user from accessing moderation tools, accessing them directly should also be prevented.
@Sonic well, still think they removed the link on purpose, but as I can't know, it doesn't really matter.
@ShadowWizardHatesOmicron I think if the intention was to block access to the 10K tools when someone is review-banned, Alex wouldn't be able to to access the tools by typing the url in. Most likely this is just a bug that was missed because it's rare that 10k users get banned from reviews and those that have in the past didn't try to access the 10k tools from that link during their ban.
The link to Tools in the Reviews topbar dropdown will now show for users who have enough rep to see it, regardless of review suspension status.
did something not good
on review queues…it's ok
you can still see tools
|
STACK_EXCHANGE
|
Hello, everyone. I'm Instructor Gerry Roberts, and this is introduction to a PT Groups. This video we're gonna talk about how to look for information about a PT groups,
and then we're gonna talk a little bit about how to implement monitoring in your organization.
First of all, looking for information finding information on a PT groups can be difficult.
There are a lot of different
ah, pieces of misinformation, partial information,
and sometimes you just can't find the information
to some places to look.
First of all, some organizations public regular reports
on a burger the basis with information about the different groups.
So a good spot toe look is actually looking for those regular reports.
Now, some antivirus companies like
actually, dude on a quarterly basis, other organizations due to a yearly basis just depends on the organization. But you wanna look and find those reports. They're very useful. They actually are chock full of information about some of the different groups and the different attacks that are common. So you can actually
put things in place to help prevent those attacks.
Instead of just knowing Hey, there's a group out there. You can actually take action
Some vendor, such a Cisco
and other vendors publish information on their websites and also have products that will scan your system for known threats.
as not only can you get information, but you might be able to find a product to help you defend against such attacks.
Now, if you don't know specifically what you're looking for,
you kind of know group name or you might know a group number or something like that. Use your favorite search engine. Yes, I did just tell you to Google that
One of the best things you can do to prevent a PT groups from attacking your organization is monitoring. Remember, these guys work on secrecy,
so if you're monitoring, you're more likely to find them and more likely to be able to boot them out.
So a couple of things keep yourself up to date so you know what to look for.
you may not have controls in place,
but you may find something weird like certain computers air using a lot of processing power or things like that. Knowing what to look for helps you identify an issue
next some vendors after software.
It's usually updated regularly once they find out about attacks, and they can put information in their software to look for them.
These usually work like anti virus looks for specific items, traces specific things
to be ableto find possible attacks.
You can also customize thes. So look for certain things like spear, phishing emails
or certain types of attacks.
Keep everything up to date. Guys.
I shouldn't have to say this, but you'd be surprised how many times I've gone into an organization and things are not up to date,
and something that would have been fixed by a patch or a service pack
Vendors like Microsoft, other vendors like Cisco Lennox, all that do have updates on a regular basis to protect against these types of attacks.
Once these types of attacks have been identified, these vendors are able to put something together to prevent those phone or abilities from being exploited, and in some cases they can even close up of vulnerability.
The next thing implement threat hunting and pan tastic.
It is extremely valuable.
These activities can actually help you locate vulnerabilities
and sometimes even help you locate attacks in progress.
If you're doing pen testing,
you can actually stumble
upon somebody to an attack,
and I've seen it happen before.
So these items, when implemented, can also help you monitor
potential A P T. Attacks.
All right, that's it for this video. So let's take our post assessment question.
How can you find information about a PT groups?
Would you look at published reports? Would you use Google for information?
Would you look at the vendor's Web sites
or would you do all of the above?
I'll give you a moment
You can pause if you want to, and then we'll come back for an answer.
It's D all of the above. All of these items can be great. Resource is for information about a PT groups and a P T attacks.
|
OPCFW_CODE
|
We have already introduced our new box system that will ship with WoltLab Suite Core 3.0. There, we have mentioned the system box type that allows dynamic box contents like a list of threads from a specific forum. In this post, we will show developers how to easily incorporate such a box controller into their packages.
The PHP class that is of interest for us in this case is wcf\system\box\AbstractDatabaseObjectListBoxController which already provides some common features of boxes displaying objects fetched via a wcf\data\DatabaseObjectList object. Otherwise, there is also the more general class wcf\system\box\AbstractBoxController which only provides a default implementation of the necessary wcf\system\box\IBoxController interface.
In the following, we will explain the different built-in possibilities of the AbstractDatabaseObjectListBoxController class.
Providing the Object List
One of two abstract methods of the AbstractDatabaseObjectListBoxController class is AbstractDatabaseObjectListBoxController::getObjectList() which needs to return a DatabaseObjectList object (whose objects have not already been read). If you want to display app\data\foo\Foo objects, you might want to create the following class:
(Please note that we will not show any imports or comments to keep the sample code short and focused on the main parts.)
Showing the Box Content
The second method you have to implement is AbstractDatabaseObjectListBoxController::getTemplate() which returns the box contents:
In this example, we have used some additional code to use a different template if the box is shown in a sidebar rather than elsewhere. To indicate which positions your box controller supports, you have to set the static property AbstractBoxController::$supportedPositions:
The available positions have already been shown in the introduction of the box system.
If you have a list of objects, you might want the administrator to set how they are ordered when fetched from database, i.e. the administrator should be able to set the sort field and sort order. There are basically only two things to need to do: Specify an array of possible sort fields in AbstractDatabaseObjectListBoxController::$validSortFields and set AbstractDatabaseObjectListBoxController::$sortFieldLanguageItemPrefix which is used for showing proper texts in the sort field selection.
Our example now looks like this:
Here you would allows sorting by the database table column subject, username and time and due to the value of AbstractDatabaseObjectListBoxController::$sortFieldLanguageItemPrefix, the system expects the language items app.foo.sortField.subject, app.foo.sortField.username and app.foo.sortField.time.
Number of Objects
In many cases, you do not want to display all objects that are saved in the database table as that would result in a list of hundreds or thousands of list items but rather set a limit. You can easily enable this option by setting the value of AbstractDatabaseObjectListBoxController::$defaultLimit to a positive value. When creating a new box, this value will be shown to the administrator, as the name of the property implies, as the default value. Additionally, you may also specify a minimum and maximum number of objects that the administrator can set by setting the values of the properties AbstractDatabaseObjectListBoxController::$minimumLimit and AbstractDatabaseObjectListBoxController::$maximumLimit.
Let us continue our example from above and assume that we want the default limit to be 5 and only allow the administrator to set a maximum number of 25:
Since WCF 2.1, we provide a powerful condition system that can be used in multiple locations and can work with DatabaseObjectList objects so that it is also perfectly suited to be used for boxes. To tell the system that the box supports conditions, you only need to set the value AbstractDatabaseObjectListBoxController::$conditionDefinition to the name of the condition object type definition for the box. (Please note that the condition object type processors need to implement wcf\system\condition\IObjectListCondition).
If you have already implemented conditions for your objects and implemented IObjectListCondition, you only create object types for you box condition object type definition and you are done.
If you want to deliver default boxes with preset conditions, you can use the wcf\system\box\BoxHandler::createBoxCondition() method in your package installation script to quickly set up these conditions.
|
OPCFW_CODE
|
So glad to be part of this group!
At present I'm persuing Data science course, i wanted to know what is the expected CTC for a fresher on average .
I currently work at Fidelity Investments in the call center. I want to move my career towards in person planning at a branch and there isn’t a lot of opportunity here at Fidelity to do that in my area. What are some other good firms to work for? I was looking into Merrill Lynch, Morgan Stanley, Edward Jones, and some others. Thoughts?
Anyone have any advice breaking into LIS work? I’m looking into getting away from the bench with stable hours. If you have any other suggestions let me know. I’ve looked into clinical research, but half the time they want a nursing degree.
Moving to Detroit for a new position from a bigger market - anybody have any insights on just about anything there!? I’ve only visited twice in my life 😬
Can anyone suggest a good data science bootcamp course within 30k budget
Best firm to apply for to get an international Salesforce consulting position? (US citizen looking for an opp in Australia/ Europe)
I have received an offer from Wipro and there is a component WBP (Wipro Benifit Plan) which is of 2.5 L.
Will this be divided and paid monthly or will there be any deductions?
Thanks in advance.
I am so over morning zoom meetings.
My probation was supposed to be of 6 months (i.e until June 2022), but today I received a notification in Workday that I have passed my probation.
How is this possible?
I was planning to resign within probation for 30days notice period. But now this happened. Will this change my notice period to 90days??
Look for a job where hiring entry level medical coding jobs in Sacramento ca or in greater Sacramento area?
Need insights about
Machine Learning Engineer role at Sutherland
How would be the work life balance, learning and pay?
Looking to transition out of public with about 5 years focusing heavily on partnership taxation. What are some good roles and industries to be looking for?
@EY fishes I can see some EY Parthenon openings on LinkedIn for which I have relevant experience. Can anyone please refer me? Thanks a lot.
I have just started to learn Python, have a Bachelor in Literature and I have no idea of what to expect, looking for some support and positive vibes 🙏✨
Additional Posts in The Matzah Bowl
Have a Chag sameach, everyone
Hey Jews, how are we feeling right now? Safe space to vent. Let’s get through this together
Happy New Year!
It’s almost Purim!
Who else is having Chinese food tonight ?
My company: DEI is important we want everyone to feel comfortable and included
Me: this happy hour has no kosher food…
Hope everyone had a good Christmas.
Holidays are coming up. Need to make sure I put in for the days! And then remind all the gentiles fifty times not to schedule important meetings those days
My firm has me on a reduced schedule where I only work 4 days a week. Although it comes with reduce pay, I’ve never had more time to help with shabbous prep
How’s your holidays so far? Dreading seeing the folks, falling behind on work, or praying to g-d for success this year?
It’s almost Purim!
Was just informed that I am being furloughed. If anyone knows of any opening in their company any opening in general please let me know.
Happy Secular Rosh Hashana Gefilte Fishies.
New to Fishbowl?
unlock all discussions on Fishbowl.
Has anyone seen my etrog?
@acd1, I lulav that comment.
|
OPCFW_CODE
|
Update schema files in OPTIMADE main repository before releasing v1.0.0-rc.2
All PRs that we said to merge before release of v1.0.0-rc.2 have now been merged, and I am ready to make that release.
However, just now I realize that some of the changes we've made should have affected the schema files:
schemas/index_openapi_schema.json
schemas/openapi_schema.json
which, as far as I know, are not yet updated?
Do we not need to update them before we can release?
Pinging @CasperWA, @ml-evs who I know are active in the optimade-python-tools that presently is used to generate the schemas.
Pinging @CasperWA, @ml-evs who I know are active in the optimade-python-tools that presently is used to generate the schemas.
We have a few open issues in optimade-python-tools for the models to be updated; I'll assign myself and start chipping away at them tomorrow, with some help it shouldn't take long. Would you be against releasing v1.0.0-rc2 now and then moving the tag back once we've caught up?
Would you be against releasing v1.0.0-rc2 now and then moving the tag back once we've caught up?
I am against making a release with incorrect schemas for that release since that is a bad look..., but I realize this may further put off -rc.2 for some time. (But I am not sure what "moving the tag back" means? Altering the git tag for the release after the release? That we shouldn't do.)
While we are on the topic of schemas and the v1.0.0 release, I notice that both #25 (Adoption of json schemas and continous integration) and #37 (API response should include reference to schema, to be completed first) are still open and have been marked for milestone v1.0.
I've proposed to close #25, because I think it is done.
For #37, I suppose we do not presently link schemas from our default JSON API-based response? Should we quick-fix that before v1.0.0? At least an optional way to link schemas may be appreciated for v1.0.0 and isn't a big change to the spec.
I am against making a release with incorrect schemas for that release since that is a bad look..., but I realize this may further put off -rc.2 for some time. (But I am not sure what "moving the tag back" means? Altering the git tag for the release after the release? That we shouldn't do.)
Yes, it would be a bit cheeky, and relies on the fact that providers probably aren't using the generated OpenAPI spec at the moment. I guess the problem is that all other PRs would need to be put on hold until the schemas are merged.
We can certainly try our best to expedite the process in optimade-python-tools!
* For #37, I suppose we do not presently link schemas from our default JSON API-based response? Should we quick-fix that before v1.0.0? At least an optional way to link schemas may be appreciated for v1.0.0 and isn't a big change to the spec.
This sounds good to me, I would like to be able to link to my extended OpenAPI OPTIMADE schema that includes the extra fields for my implementation (which itself will link to the OPTIMADE schema).
@ml-evs, @CasperWA
I see you actively working on it in optimade-python-tools, how much more work is left? Are there clear work tasks others can help you with to get this completed?
@ml-evs, @CasperWA
I see you actively working on it in optimade-python-tools, how much more work is left? Are there clear work tasks others can help you with to get this completed?
Hi @rartino, just one more PR to be reviewed at our end, https://github.com/Materials-Consortia/optimade-python-tools/pull/351, unfortunately its a tedious one as it is just the update of all our descriptions to match the spec (and also converted to markdown so our docs work). As soon as this is merged, I can prepare the PR for adding the generated OpenAPI spec to this repo.
Closed by #291.
|
GITHUB_ARCHIVE
|
Developers have to cope with pretty hard choices every day. This is a collection of the most complex and funny choices. What will be yours?
How to add a question to choiceof.dev
As it is a project by the developers for the developers, we want to make the act to contribute a cool and fun way to learn how to contribute to open source project. To add a question, you must clone the project locally, built it, updating one file and adding your two images.
Just a quick reminder, the project is supposed to be funny. Therefore, questions must be... funny. If you are creating a question that is not extremely funny in the choice, at least, try to find funny images to illustrate it.
The file to edit is:
You will find there a list of questions, with a slug, a title, a description, the right and left choices and their respective images.
You can add a question at the end of the list, or in the middle, it doesn't matter.
Then you have to add two images in the folder
apps/devchoices-next/public/assets/img with the name you have precise in the other file.
Then you can run the project locally and check if everything is working fine. You can test your question directly by going to the following url:
If you like the result, it's time to generate the preview of this question for social networks. To do this run the command:
pnpm ts-node scripts/preview-generator.ts
If for some reason you want to regenerate all the previews, you can run the command:
pnpm ts-node scripts/preview-generator.ts --override
Now you are ready to submit your PR. We will review it and if everything is fine, and if the joke is fun, we will merge it and your question will be available on the website
How to contribute to the core project and adding features
In the issues of the repository, you will see many open tickets. You can take one of them if you fancy and propose your solution on a PR. You can also create issues by yourself if you experienced a bug or if you have an idea for a new feature.
The project is modernised copy of his little brother choixdemerde.fr It is built with:
There are also many community plugins you could add.
Run the project locally
Clone the repository, install the dependencies and run the project:
pnpm install npx nx serve
You should see this:
And going on
localhost:4200 you should see the project running.
Run the storybook
If you want to work on components, on design system, or on a dedicated environment without bore effects of the app, you can run the storybook:
nx run shared-ui:storybook
You should see this in your terminal
localhost:4400 you should see the storybook running.
This open source project and this website have been created by Benjamin Code to celebrate his 100k subscribers on YouTube. The project is inspired by choixdemerde.fr which is a project also created by Benjamin Code and that costed him a lot of money back in the day... If you want to learn more about this story and how a big buzz on your funny side project can ruins you, you will find this article on Medium
"The story of choix de merde is terrible. It costed me a lot of time and money and never brought me anything. But the stories about this catastrophic development made me starts a YouTube channel and it has been so far the best experience of my life. For the 100k subscribers I wanted to bring back some light on this story and complete the loop" – Benjamin Code
|
OPCFW_CODE
|
RTRlib - The RPKI RTR Client C Library
The RTRlib is an open-source C implementation of the RPKI/Router Protocol client. The library allows to fetch and store validated prefix origin data from a RTR-cache and performs origin verification of prefixes. It supports different types of transport sessions (e.g., SSH, unprotected TCP) and is easily extendable.
The RTRlib is useful for developers of routing software but also for network operators. Developers can integrate the RTRlib into the BGP daemon to extend their implementation towards RPKI. Network operators may use the RTRlib to develop monitoring tools (e.g., to check the proper operation of caches or to evaluate their performance).
If you use RTRlib in a scientific context, please use the following citation.
December 28, 2014
- RTRlib Version 0.3 released (Download)
- Added support for IETF draft draft-ietf-sidr-rpki-rtr-rfc6810-bis-02
- Source address for RTR connection can be configured
- Minor changes of the library API (see doxygen documentation)
- We are migrating to Github. For new tickets, please use https://github.com/rtrlib/rtrlib/issues
October 16, 2013
- RTRlib Version 0.2.3 released (Download)
August 4, 2013
- We will present RTRlib at the 6th Workshop on Cyber Security Experimentation and Test (CSET '13), which will be held in conjunction with USENIX Security 2013.
June 13, 2013
June 13, 2013
- We moved the Git repository to GitHub: https://github.com/rtrlib/RTRlib.git
February 26, 2013
- Short talk about RTRlib at NDSS 2013
February 25, 2013
- Firefox Add-on online, which performs prefix origin validation of the requested web server's IP prefix: https://addons.mozilla.org/addon/rpki-validator/ In the backend we use RTRlib ;).
June 14, 2012
- RTRlib Version 0.2.2 released (Download)
- Fixed a bug in IPv6 address operations that caused that some IPv6 records couldn't be added to the pfx_table
February 19, 2012
- RTRlib Version 0.2.1 released (Download)
- Nonce variable renamed to session_id to conform with draft-ietf-sidr-rpki-rtr-26
- Warning message added if the Zero field of a prefix PDU doesn't contain 0
- pfx_validate_r function added, returns list of prefixes which affected the validation state BGP route
- Fixed bug in lpfst_remove that could cause that an pfx_record in the pfx_table could not be found.
- Added state rollback to the prefix synchronization function to assure that the last correct state is recovered if an error occurs during synchronization
- Few smaller bugfixes and debug formatting corrections
January 8, 2012
- Internet Draft RPKI Router Implementation Report online
December 21, 2011
- Short note on RIPE Labs about preliminary measurements with our RTRlib
November 28, 2011
- RTRlib Version 0.2 released
RTRlib 0.2 is available via git or as tar.gz archive in our Download section.
- Support of RTR-Server failover mechanisms (RTR manager component implemented)
- Automatic reconnect of rtr_socket in case of errors
- Implements current RTR drafts draft-ietf-sidr-rpki-rtr-19 and draft-ietf-sidr-pfx-validate-03
- Many bug fixes
- New documentation: RTRlib Usage.
- Publicly accessible RTR-Server online.
- The service is for testing purposes and reachable via TCP and SSH. For details see Usage.
September 7, 2011
- Short note on RIPE Labs about our beta release
August 31, 2011
- The first version of the RTRlib has been released!
You can download RTRlib 0.1 here.
July 24, 2011
- Website online & API documentation available for discussion
|
OPCFW_CODE
|
class PoisonAffliction extends Affliction {
private static DEFAULT_PERCENT = 5; // default poison is 5% of HP every turn
private static MAX_STACK_NUM = 2; // maximum number that poison can stack
private static MAX_DAMAGE = 99999; // maximum poison damage is 99999 every turn
percent: number;
constructor() {
super(ENUM.AfflictionType.POISON);
this.percent = 0;
this.finished = false;
}
canAttack(): boolean {
return true;
}
update(card: Card): void {
let damage: number = Math.floor(card.originalStats.hp * this.percent / 100);
if (damage > PoisonAffliction.MAX_DAMAGE) {
damage = PoisonAffliction.MAX_DAMAGE;
}
// damage the card
BattleModel.getInstance().damageToTargetDirectly(card, damage, "poison");
}
add(option: AfflictOptParam): void {
let percent = option.percent;
if (!percent) {
percent = PoisonAffliction.DEFAULT_PERCENT;
}
this.percent += percent;
// there's a bug in here. Not my fault though
let maxPercent = percent * PoisonAffliction.MAX_STACK_NUM;
if (this.percent > maxPercent) {
this.percent = maxPercent;
}
}
}
|
STACK_EDU
|
The Julibrot fractal type uses a general-purpose renderer for
visualizing three dimensional solid fractals. Originally Mark Peterson
developed this rendering mechanism to view a 3-D sections of a 4-D
structure he called a "Julibrot". This structure, also called "layered
Julia set" in the fractal literature, hinges on the relationship between
the Mandelbrot and Julia sets. Each Julia set is created using a fixed
value c in the iterated formula z^2 + c. The Julibrot is created by
layering Julia sets in the x-y plane and continuously varying c,
creating new Julia sets as z is incremented. The solid shape thus
created is rendered by shading the surface using a brightness inversely
proportional to the virtual viewer's eye.
Starting with Fractint version 18, the Julibrot engine can be used with
other Julia formulas besides the classic z^2 + c. The first field on the
Julibrot parameter screen lets you select which orbit formula to use.
You can also use the Julibrot renderer to visualize 3D cross sections of
true four dimensional Quaternion and Hypercomplex fractals.
The Julibrot Parameter Screens
- Orbit Algorithm -
- select the orbit algorithm to use. The available
possibilities include 2-D Julia and both mandelbrot and Julia
variants of the 4-D Quaternion and Hypercomplex fractals.
- Orbit parameters -
- the next screen lets you fill in any parameters
belonging to the orbit algorithm. This list of parameters is not
necessarily the same as the list normally presented for the orbit
algorithm, because some of these parameters are used in the Julibrot
- From/To Parameters -
- These parameters allow you to specify the
"Mandelbrot" values used to generate the layered Julias. The
parameter c in the Julia formulas will be incremented in steps
ranging from the "from" x and y values to the "to" x and y values. If
the orbit formula is one of the "true" four dimensional fractal types
quat, quatj, hypercomplex, or hypercomplexj, then these numbers are
used with the 3rd and 4th dimensional values.
The "from/to" variables are different for the different kinds of
2D Julia sets - complex number formula z' = f(z) + c
The "from/to" parameters change the values of c.
4D Julia sets - Quaternion or Hypercomplex formula z' = f(z) + c
The four dimensions of c are set by the orbit parameters.
The first two dimensions of z are determined by the corners values.
The third and fourth dimensions of z are the "to/from" variables.
4D Mandelbrot sets - Quaternion or Hypercomplex formula z' = f(z) + c
The first two dimensions of c are determined by the corners values.
The third and fourth dimensions of c are the "to/from" variables.
- Distance between the eyes -
- set this to 2.5 if you want a red/blue
anaglyph image, 0 for a normal greyscale image.
- Number of z pixels -
- this sets how many layers are rendered in the
screen z-axis. Use a higher value with higher resolution video modes.
The remainder of the parameters are needed to construct the red/blue
picture so that the fractal appears with the desired depth and proper
'z' location. With the origin set to 8 inches beyond the screen plane
and the depth of the fractal at 8 inches the default fractal will appear
to start at 4 inches beyond the screen and extend to 12 inches if your
eyeballs are 2.5 inches apart and located at a distance of 24 inches
from the screen. The screen dimensions provide the reference frame.
The Fractint Home Page.
or back to
The Fractint Index Page.
This page maintained by
|
OPCFW_CODE
|
Hello, a strange thing happened today after installing a Windows 10 update. PlayIt Live launches normally, but within a few seconds of trying any operations with the mouse, the display locks up. The system continues playing audio just fine, and other programs running (like Firefox) have no trouble with mouse actions. The only way to recover is to kill PIL in Task Manager and relaunch it. All is well again until I try clicking on something, then shortly after the first click the PIL display locks again. I read on the Microsoft site that update KB4586781 includes: "Updates to improve security when using input devices such as a mouse, keyboard, or pen." I wonder if this is related?
I tried uninstalling the Windows update but my PC reports an error and is unable to uninstall it. I also updated PIL to build 2829 to no effect. Anyone have ideas?
UPDATE: I've solved the mouse problem by doing a Windows 10 Reset (reinstall). I installed PlayIt Live from scratch and copied over the C:\ProgramData\PlayIt Live\Playitlive.db database file and the audio files. All seems well. Also I temporarily paused Windows Updates, just in case....
UPDATE to the update. The mouse lockup problem has returned. I had created a Windows restore point while everything was working, and reverting back to that point has not helped. So I'm looking for help again.
I recently re-installed Windows on my desktop (on Saturday) and it has this update installed and I cannot reproduce this problem. Please could you explain in greater detail what is happening when the display locks up. Maybe a video showing this might help show it better. You can try sending me diagnostic logs which might help point to the problem using this tool (be sure to select PlayIt Live): https://downloads.playitsoftware.com/DiagnosticsTool/PlayItDiagnosticsTool.exe
Jason, thanks for the response. I ran the Diagnostic Tool as directed. I also captured a movie of the issue and have uploaded the file to http://swwi.tv/video/InSpiritRadio_mouse_issue.mp4
A few things I've tried in the meantime. The machine was also running MalwareBytes, which I've uninstalled, thinking it may have quarantined an essential file by mistake. I uninstalled PlayIt Live 2.06.2.2796 and started a clean install. From there I imported a few tracks and successfully dragged them into the playlist and was able to move them around. I closed the program and copied over the backup c:\programdata\PlayIt Live files. Upon restarting I experienced the same mouse lockup when clicking on an item. Perhaps I have a corrupted database or other file in that group that I brought over.
Tuesday evening update. I restored a month-old database file, and all is stable. Although the Now Playing plugin is no longer available, not sure how to restore that. It would be preferable if I could get running on the most recent database so a month's worth of work wouldn't be lost, but I'd rather be stable than current.
This does not appear to be caused by a Windows update. The reason why this is grinding to a halt is because it is struggling to schedule any more songs - it is only scheduling Sweepers which, at 3 seconds each, take a long time to fill an hour. You have a Track Group called Main which has 440 songs, many of which by the same artist. You have a playout policy set up to prevent the repeat of tracks for 36 hours and not to repeat the same artist for 3 hours.
Experimenting with this, if you want a 3 hour separation of artists you would need to reduce the track separation to 14 hours:
14h tracks - 3h artists
18h tracks - 2h artists
22h tracks - 1h artists
26h tracks - 0h artists
There are 440 tracks and 131 unique artists. This works out around 26 hours of content. For every hour of artist separation you want, you would need to reduce the track separation by 4 (the average number of tracks per artist).
I see you have replied with another update. I would suggest restoring your old database and adjusting the playout policy for Main down.
To restore the Now Playing, go to Plugins > Plugin Manager > Plugin Gallery and redownload the plugin. You won't have to pay again as it will pick up that you have already purchased when you log in.
Jason, thanks so much for looking into my issue. I was able to copy across the most recent database files, launch PIL, immediately turn off Log Scheduler before I lost control, and adjusted the separation values lower as you suggested. It appears all is running smoothly now. And the plugin restored itself.
|
OPCFW_CODE
|
How pincodes used as an authentication mechanism?
I'm designing a login system for my mobile app where users only need to provide a pincode to authenticate into the app after they sign-up.
I'm more wondering how the backend would work in such a system in general. To begin with, is the pincode saved on the device or server? If it's on the server, then how it's passed to the server? Or how it's associated with the username-password tuple?
Thanks in advance.
This is very vague. Can you include more about your architecture? What programming environment/s are you using for this? Is the login web-based or custom application?
This question is related to web based programming and does not belong in this forum. Also, the answer to such questions could be found easily by searching on the web. Please do the homework before asking such question.
@bashCypher it's vague to me as well :). I'm developing this for Android in Java/Kotlin. It's mobile app with bunch of backend capabilities including the authentication. Hope it's more clear.
How pin codes are used, like passwords, are numerous and varied.
Sadly, there are still a lot of implementations that simply keep the pin/pass in clear text in a table (DB or Flat) correlated with the User ID.
Better, is a HASH of the pin/pass is kept correlated with the User ID. That way if the DB is compromised, they don't get clear pin/pass information. Thieves of the server table would need to break the hash, but widely available rainbow tables make that very viable.
Better-Still, is a HASH of the pin/pass + SALT. SALT is a random addition that eliminates the use of pre-computed hash/rainbow tables. Now a table thief has to spend time attempting a brute force attack. Note that with a small pin, this is quite viable as well.
Better-Still-More, is to add "round counts" of hashing the hash of the hash of the hash a few thousand to million times. This requires a thief to know the round count and if the count is large enough to require 2-3 seconds per attempt, brute force becomes even more difficult. Note that a small pin is still vulnerable.
The Bigger Concern and most common mistake made again and again is failure to prevent direct online brute force attacks. People make mistakes and need to be allowed multiple attempts, but a realistic limitation along the lines of say, 5 attempts within a 3 minute window, accounts for humans but hundreds of attempts at impossible typing speeds is an attack, don't allow that! The Linux utility "Fail2Ban" is handy for blocking these kinds of online attacks. Too many attempts within the window, block the IP for 30 minutes. A real person can wait it out to try again, but an automated high speed attack will generally give up and move on.
Ok it makes sense but one more thing. I need to first register/sign-up to set the pincode (so I can change it or remove it later on by using my password). That means it has to be related with my account/credentials in the server, right?
There are multiple ways to do this. It will depend on what language/s you are using, what your server/client architecture it, are you limited to un-encrypted transmission protocols, etc.
Here is an almost duplicate question (if you can give us your technology stack it might not be):
https://stackoverflow.com/questions/35100181/how-to-make-login-authentication-page-in-html-or-javascript
More Reading:
https://sushi2k.gitbooks.io/the-owasp-mobile-security-testing-guide/content/0x04e-Testing-Authentication-and-Session-Management.html
https://techbeacon.com/5-essential-steps-securing-enterprise-mobile-apps
Thank you. But I'm not asking for a general idea. I'm looking for how pincodes are used during the authentication as a safe login mechanism. None of those pages referring the pincode usage.
|
STACK_EXCHANGE
|