This one costs nearly 80 yuan per kilogram, this one nearly 100, this one nearly 200, this one nearly 4,000, and this one nearly 7,000 yuan per kilogram. Yes, you heard that right: SKF high-temperature grease at 7,000 yuan a kilogram, which works out to seven million yuan per tonne. Why is it so expensive? Is there some secret, precious ingredient inside? LGET 2 is a grease thickened with PTFE and built on a synthetic fluorinated base oil, with a service temperature of up to 260 °C. PTFE is polytetrafluoroethylene, also known as Teflon; yes, the coating on your non-stick pan contains it. Its strengths are resistance to high temperature, low temperature, corrosion, and oxidation, plus non-stick behavior. The synthetic fluorinated base oil is perfluoropolyether (PFPE), which is also extremely expensive and is known as the "aristocrat of oils". If your work involves industrial baking equipment, kiln-car wheel hubs, high-temperature motors, vacuum pumps, textile dryers, and the like, it may be worth a try.

Which PE line is good? Hello everyone, I'm Lao Lu. Today let's talk about PE (braided) line. What you see here are the lines I keep on hand and use regularly. The opinions below are only my own hands-on impressions of these products, for your reference.

First, let's look at the breaking-strength ratings printed on each brand's packaging, using size #2 line for a side-by-side comparison. YGK: old packaging, #2 at 40 lb; new packaging, #2 at 40 lb, so no change between old and new. Daiwa 8-strand, #2 at 30 lb. Sufix 832, #2 at 39 lb. Sufix 131, #2 at 30 lb. 魔力根 blue line, #2 at 42 lb. Shimano 12-strand, #2 at 43.7 lb. Shimano 8+, #2 at 43 lb. Shimano 8-strand, #2 at 42.8 lb. So in the #2 comparison, the strongest rating is Shimano's 12-strand at 43.7 lb, and the weakest ratings are two: Daiwa's 8-strand at 30 lb and Sufix 131, also at 30 lb.

That's the strength comparison; now let's open the packages and look at the spools and the line itself. YGK: between the old line and the new line I felt no real difference in actual use, but the new one costs ten yuan more. Daiwa's 8-strand, okay. Sufix 832. Sufix 131: not cheap; this line is expensive. 魔力根's blue line, the brand's flagship and the best PE line in its range. Shimano's 12-strand, currently one of Shimano's best sellers, very slick. Shimano's 8+, a newer line that comes in pink and multicolor. And Shimano's older, ordinary 8-strand, in green and blue.

Now, based on my own experience, let's briefly go over these lines. First, size swelling. PE line is braided, so swelling up a size with use is unavoidable; it's only a question of how much. All of these are top-tier products in the industry, so swelling is well controlled. You don't need to worry that the #1 you bought turns into a #2 after a few outings.

Second, smoothness. Personally I think Shimano's 12-strand is the smoothest; I mostly use it on my baitcasters. Next is YGK: I mostly use its #1 or #0.8 on my spinning reels, and it's also very good.

Third, coating and fading. The best here, I think, is 魔力根's blue line: after several sessions in the water the coating shows no obvious peeling and the color no obvious fading. The worst, in my view, is Daiwa's 8-strand; its fading and coating loss are relatively severe. That said, I mostly use it in #4 and #6 on snakehead reels and heavy setups, and I don't fish those rigs that often anyway.

Fourth, line diameter and abrasion resistance. The thinnest, I think, is YGK: thin and slick, but it fuzzes slightly, and once it fuzzes the strength loss is noticeable and break-offs on the cast become easy. You can't have the fish and the bear's paw both, so be extra careful when using it. Also, beware of fakes. Close behind is Shimano's 8-strand, which I also think is well made: thin diameter and consistently good casting feel. The thickest? The 832, but it is also the most abrasion-resistant. Take a look at this line: springy and very stiff. If you want long casts, don't use it; it's not built for distance, it's built for rough work. Light fuzzing barely affects its strength, so for snakehead or other heavy-duty fishing this line is good.

Some anglers may ask: doesn't the 131 have any standout advantage? Of course it does: it's the most expensive, great for showing off, haha. But seriously, I don't think this line matches its price; the value for money isn't great. I also spooled #1 131 on my 16 DC to try distance casting, but gave up after half a day; maybe I just couldn't get used to this line. Overall, though, Sufix lines all have good abrasion resistance.

Having said all this, I think the mainstream market is still these Japanese products, like YGK, Shimano, and 魔力根. You can't go wrong with any of them; all are excellent products. Then there's the very cost-effective Daiwa 8-strand, under a hundred yuan, outstanding value.

If you're a beginner worried that snags, break-offs, and wear will waste these imported PE lines, you can consider a domestic line for practice. This is a domestic Dyneema line, #1.5, roughly as thick as an imported #2, with no real coating and nothing to say about performance; the good news is it's cheap, and it's plenty for practicing. One important note: never fill the spool too full, because this line swells a size the moment it hits the water.

There is no perfect PE line; no single product can meet all our needs. That's why we match different lines to different fishing conditions and different tackle, and why so many PE lines with different characteristics have appeared for us to choose from. The above is only my personal opinion, for your reference. Feel free to discuss in the comments. Thanks, everyone, and follow so you don't get lost!

Hello, everyone! I'm Wang Yuhong, spreading financial literacy and passing on happiness. Today I'd like to share my understanding of P/E, P/B, and PEG.

The P/E ratio tells you how many years it would take to earn back your investment at the company's current rate of earnings. It is the price per share divided by earnings per share, that is, the price you pay for each unit of net profit. It suits large companies with stable earnings, not companies that are unprofitable or growing extremely fast.

P/B, the price-to-book ratio, is the price per share divided by net assets (book value) per share, that is, the price you pay for each unit of net assets. Its advantage is that book value per share is more stable than earnings; its weakness is that it does not suit companies with a small asset base. This is also one of the financial metrics old Mr. Buffett especially likes when evaluating a company.

PEG, the price/earnings-to-growth ratio, is the P/E ratio divided by the EPS growth rate times one hundred; it gauges whether a company is undervalued. It suits high-uncertainty, emerging-market companies. Its weakness is that a company's future growth rate is hard to estimate accurately. If PEG is greater than 1, the company looks overvalued; if PEG is less than 1, the company may be undervalued. This is an important financial metric to check when making equity investments.

That's all for today's share. Bye-bye!
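The three ratios above can be written out as a short sketch. The numbers and function names below are illustrative only, not real company data:

```python
# Illustrative numbers only, not real company data; function names are my own.

def pe_ratio(price_per_share, eps):
    """P/E: price paid per yuan of earnings (years to earn back the
    price at the current rate of profit)."""
    return price_per_share / eps

def pb_ratio(price_per_share, book_value_per_share):
    """P/B: price paid per yuan of net assets (book value)."""
    return price_per_share / book_value_per_share

def peg_ratio(pe, eps_growth_rate):
    """PEG: P/E divided by the EPS growth rate expressed in percent.
    Above 1 suggests overvalued; below 1, possibly undervalued."""
    return pe / (eps_growth_rate * 100)

pe = pe_ratio(30.0, 2.0)    # a 30-yuan stock earning 2 yuan/share -> P/E = 15
peg = peg_ratio(pe, 0.20)   # 20% expected EPS growth -> PEG = 15 / 20 = 0.75
print(pe, peg)
```

So a stock priced at 30 yuan with EPS of 2 yuan trades at a P/E of 15; if EPS is expected to grow 20% a year, PEG = 0.75 < 1, which by the rule above hints at possible undervaluation.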



Today, let's talk about imperfect multicollinearity (非完全 or 不完全多重共线性). In this video I'll first compare perfect and imperfect multicollinearity, then use a regression with only one regressor to show how the variance of X affects the standard error of the coefficient on X, and then use a regression with two regressors to show how imperfect multicollinearity affects the variation in X1 and X2, and how that in turn affects the standard errors (variances) of b1 and b2. Okay, without further ado, let's get into the class.

Last time we talked about perfect multicollinearity, and also the dummy variable trap, which is just one example of perfect multicollinearity. Perfect and imperfect multicollinearity are quite different despite the similarity of the names, so let me give you a comparison, using the regression with two regressors as the example. The definition says perfect multicollinearity occurs when one regressor is an exact linear function of the other regressors. Here we have only two regressors, so you can write it this way: X1 is an exact linear function of X2. If this is the case, we say X1 and X2 are perfectly collinear. The intuition behind it: when X2 is held constant, nothing changes in X1, X1 is also constant, so you cannot estimate the effect of a change in X1 on Y when X2 is held constant. The data do not contain any information about what happens when X2 changes but X1 doesn't, or vice versa.

For imperfect multicollinearity, X1 is still a linear function of X2, but plus an error term, so the collinearity is not perfect: X1 and X2 are not perfectly collinear. When X2 is held constant, the linear part of X1 is constant, but X1 does not only include that component; it also has the error component. So X2 explains some of the variation in X1, and the rest of the variation in X1 is explained by the error term. In other words, when X1 and X2 are highly correlated, we say there is an imperfect multicollinearity problem: imperfect multicollinearity occurs when two regressors are very highly correlated.

The intuition is much the same. The coefficient on X1, measured by b1, should give us the effect of a change in X1 on Y when X2 is held constant. But if X1 and X2 are highly correlated, there is very little variation in X1 when X2 is held constant. Under perfect multicollinearity there is no change in X1 when X2 is held constant; under imperfect multicollinearity there is very little change. In other words, the data don't contain much information about what happens when X1 changes but X2 doesn't. Once you understand this intuition, you can pause the video and think about what the consequence of imperfect multicollinearity will be for the OLS estimators, especially b1 and b2.

Before I give you the answer, let me review this idea for a regression with only one regressor, where we are estimating a regression line. Picture a scatter plot with Y on the vertical axis and X on the horizontal axis, with n = 100 observations, and compare two situations with the same number of observations. In the first, there is a lot of variation in X: the hundred points spread out along X, and you draw the OLS regression line through them. In the second, you see very little variation in X, and you draw a regression line based on those hundred tightly clustered observations. Which case gives you a better, more precise estimate of beta1? Beta1 is the true effect of a change in X on Y, the coefficient of the population regression, and b1 is a point estimate of beta1; b1 follows a normal distribution around beta1. In the first case b1 has a small variation around beta1. In the second case the variation is very large: almost any slope through the cluster fits, so different samples give very different values of b1. So in the first case the variance (standard deviation) of b1 is small, and in the second case it is large. Which one is better? Of course the first. The magnitude of b1 matters, but the standard error of b1 is equally important: we use it to construct confidence intervals for beta1 and to test hypotheses such as whether beta1 equals zero. In the first case we say beta1 is precisely estimated; in the second, beta1 is imprecisely estimated. So in order for the OLS estimator b1 to have a small standard error (a small variance), we need a large variation in the independent variable X.

You can also see this relationship from the mathematical expression for the standard error of b1. With one regressor,

var(b1) = (1/n) · var[(Xi − μX) ui] / [var(Xi)]².

When the variance of X is large, the denominator is large; dividing by a large value makes var(b1) smaller, and taking the square root, the standard deviation of b1 becomes smaller. So the larger the variance of X, the smaller the standard error of b1. If the variation in X is small, if the range or any other measure of dispersion of X is small, the denominator is small; a smaller denominator means a larger var(b1), and beta1 is imprecisely estimated. If X has a large variance, the denominator is large, var(b1) is small, and beta1 is precisely estimated. That is the good case.

Bear this in mind and now look at the regression with two regressors: Y = b0 + b1·X1 + b2·X2 + u. If there is imperfect multicollinearity, in other words X1 and X2 are very highly correlated, then when one variable is held constant there is very little variation in the other. Very little variation in X1 results in a large standard error for b1. Same thing the other way: when X1 is held constant there is very little change in X2, so the standard error of b2 is also large. That is the consequence of imperfect multicollinearity: it makes the standard errors of the coefficients b1 and b2 increase. Again, imperfect multicollinearity results in large standard errors for one or more of the OLS coefficients, and when the standard error is large we say beta1 or beta2 is imprecisely estimated.

Notice that imperfect multicollinearity has nothing to do with the magnitude of b0, b1, b2. We know there are four least squares assumptions for causal inference: the conditional mean of the error is zero, E[u | X1, X2] = 0; the observations are i.i.d.; there is no perfect multicollinearity; and there are no extreme outliers. As long as these four assumptions are satisfied, b1 is unbiased, E[b1] = beta1. If there's no correlation between X2 and the error once X1 is held constant, there is no bias in b2 either; b2 is also unbiased and has a causal interpretation. If X1 is our variable of interest and these conditions are satisfied, b1 is unbiased. So you see, the four least squares assumptions for causal inference do not require the absence of imperfect multicollinearity. That means imperfect multicollinearity does not affect the magnitude, the value, of b1 and b2; however, it does affect their standard errors. In practice, if you run a regression with two regressors, find that b1 and b2 are significant, then add another regressor X3 and all of a sudden all the coefficients become insignificant, that is a sign of imperfect multicollinearity.

One more thing I want to add to this video. The variance formula I gave you applies only to a regression with one regressor. If you have a regression with two regressors, how do you calculate the variance or standard error of b1? Let me give you that expression, just for your reference:

var(b1) = (1/n) · [1 / (1 − ρ²(X1, X2))] · σ²(u) / σ²(X1),

where ρ(X1, X2) is the correlation coefficient between X1 and X2, σ²(u) is the variance of the error (a single error variance here indicates the errors are homoskedastic), and σ²(X1) is the variance of X1. When the correlation between X1 and X2 is large, 1 minus a large value makes the denominator smaller, and a smaller denominator enlarges the variance of b1. Again: when X2 is held constant and X1, X2 are very highly correlated, there is very little change in X1; in other words, the usable variance of X1 is very small, which makes the standard error of b1 large, and beta1 is imprecisely estimated. You use the standard error of b1 to construct the confidence interval for beta1 and to test the hypothesis that beta1 equals some value, so your coefficient can easily come out statistically insignificant because of this imperfect multicollinearity.

I hope you find this video helpful. To keep the video short, I'll talk about the remedy for this problem in the next video: what we can do to mitigate the negative effect of imperfect multicollinearity on the regression model. So stay tuned, and see you next time. Bye.
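The variance inflation factor 1/(1 − ρ²) described in the lecture can be checked numerically. Here is a minimal sketch, assuming NumPy is available (the variable names are mine, not from the lecture): it fits the same two-regressor model twice, once with nearly uncorrelated regressors and once with highly correlated ones, and compares the homoskedastic standard error of b1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def ols_se_b1(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 by OLS and return the homoskedastic
    standard error of b1."""
    X = np.column_stack([np.ones(len(y)), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])  # estimate of var(u)
    cov = sigma2 * np.linalg.inv(X.T @ X)           # covariance matrix of b
    return float(np.sqrt(cov[1, 1]))

x1 = rng.normal(size=n)
u = rng.normal(size=n)

x2_low = rng.normal(size=n)               # nearly uncorrelated with x1
x2_high = x1 + 0.05 * rng.normal(size=n)  # rho(x1, x2) is about 0.999

se_low = ols_se_b1(x1, x2_low, 1 + 2 * x1 + 3 * x2_low + u)
se_high = ols_se_b1(x1, x2_high, 1 + 2 * x1 + 3 * x2_high + u)
print(se_low, se_high)  # se_high is many times larger than se_low
```

With ρ ≈ 0.999 here, 1/(1 − ρ²) ≈ 400, so the standard error of b1 is inflated by roughly a factor of √400 = 20, even though b1 itself remains an unbiased estimate of the true coefficient 2, exactly as the lecture claims.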



