Kazem Haghnejad Azar,
Volume 13, Issue 2 (7-2013)
Abstract

In this paper, we study the Arens regularity properties of module actions and extend some propositions of Baker, Dales, Lau and others to general situations. We establish some relationships between the topological centers of module actions and their factorization properties, with some results in group algebras. In 1951, Arens showed that the second dual of a Banach algebra, endowed with either Arens multiplication, is a Banach algebra; see [1]. The constructions of the two Arens multiplications lead to the definition of topological centers with respect to both Arens multiplications. The topological centers of Banach algebras and module actions, and their applications, were introduced and discussed in [3, 5, 6, 9, 15, 16, 17, 18, 19, 24, 25]. In this paper, we extend some problems from [3, 5, 6, 16, 22] to a general criterion on module actions, with some applications in group algebras. Baker, Lau and Pym in [3] proved that, for a Banach algebra with a bounded right approximate identity, is an ideal of right annihilators in and . In the following, for a Banach , we study the similar discussion for module actions, and for a Banach , we show that
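For reference, the two Arens products on the second dual $A^{**}$ of a Banach algebra $A$ can be written via iterated weak-* limits (standard definitions, not quoted from the paper): for $m, n \in A^{**}$ and bounded nets $(a_\alpha), (b_\beta)$ in $A$ with $a_\alpha \to m$ and $b_\beta \to n$ weak-*,

```latex
m \,\square\, n = \lim_{\alpha} \lim_{\beta} a_\alpha b_\beta ,
\qquad
m \,\lozenge\, n = \lim_{\beta} \lim_{\alpha} a_\alpha b_\beta
\quad (\text{weak-* limits}),
```

and the first topological center is $Z_1(A^{**}) = \{\, m \in A^{**} : m \,\square\, n = m \,\lozenge\, n \ \text{for all } n \in A^{**} \,\}$; the algebra is Arens regular when the two products coincide.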
Abdoljavad Taherizadeh, Akram Kianejad, A Tehranian,
Volume 13, Issue 2 (7-2013)
Abstract

Let (R, m) be a Noetherian local ring, a an ideal of R and M a finitely generated R-module. We investigate some properties of formal local cohomology modules with respect to a Serre subcategory, and we provide a common language to indicate some properties of formal local cohomology modules.
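For context, the formal local cohomology modules in question are usually defined (following Schenzel's standard definition, not quoted from the paper) as projective limits of ordinary local cohomology of the quotients by powers of the ideal:

```latex
\mathfrak{F}^{i}_{\mathfrak{a}}(M) \;=\; \varprojlim_{n}\, H^{i}_{\mathfrak{m}}\!\left(M/\mathfrak{a}^{n}M\right).
```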
Yadollah Ordokhani, Haneh Dehestani,
Volume 13, Issue 2 (7-2013)
Abstract

In this paper, a collocation method based on Bessel polynomials is used to solve nonlinear Fredholm-Volterra-Hammerstein integro-differential equations (FVHIDEs) under mixed conditions. This method of estimating the solution transforms the nonlinear FVHIDEs into matrix equations with the help of Bessel polynomials of the first kind and collocation points. The matrix equations correspond to a system of nonlinear algebraic equations with unknown Bessel coefficients. The results and comparisons presented demonstrate that the estimate has a good degree of accuracy and that the method compares favorably with other methods.
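As a much-simplified illustration of the collocation idea (not the paper's Bessel-basis method): a linear Fredholm integral equation with a degenerate kernel is reduced to a small algebraic system by expanding the unknown in a polynomial basis and enforcing the equation at collocation points. The kernel, right-hand side, basis and collocation points below are hypothetical choices with known exact solution u(x) = x.

```python
# Collocation sketch: solve u(x) - \int_0^1 x*t*u(t) dt = (2/3) x
# with the ansatz u(x) = a0 + a1*x and two collocation points.
# The integral is evaluated exactly:
#   \int_0^1 x*t*(a0 + a1*t) dt = x*(a0/2 + a1/3).

def residual_row(x):
    # coefficients of (a0, a1) in the collocation equation at point x,
    # plus the right-hand side value
    c0 = 1.0 - x * 0.5          # from a0 - x*a0/2
    c1 = x - x / 3.0            # from a1*x - x*a1/3
    return (c0, c1, 2.0 * x / 3.0)

x1, x2 = 0.25, 0.75             # collocation points (arbitrary, distinct)
(A00, A01, b0) = residual_row(x1)
(A10, A11, b1) = residual_row(x2)

# Solve the resulting 2x2 algebraic system by Cramer's rule
det = A00 * A11 - A01 * A10
a0 = (b0 * A11 - A01 * b1) / det
a1 = (A00 * b1 - b0 * A10) / det
print(a0, a1)   # a0 ~ 0, a1 ~ 1: recovers u(x) = x up to rounding
```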
Alimardan Shahrezaee,
Volume 13, Issue 2 (7-2013)
Abstract

Inverse time-dependent heat source problems play an important role in many branches of science and technology. The aim of this paper is to solve this class of problems using a variational iteration method (VIM). The method applied does not require discretization of the region, as in classical methods based on the finite difference method, the boundary element method or other methods. Applying this method, we obtain a stable approximation to an unknown, purely time-dependent source term in an inverse heat equation from over-specified data. Some numerical examples using this approach are presented and discussed.
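As a toy illustration of the variational iteration idea (on a simple ODE, not the paper's inverse heat problem): for u' = u, u(0) = 1, the correction functional with Lagrange multiplier λ = -1 reads u_{n+1}(t) = u_n(t) - ∫₀ᵗ (u_n'(s) - u_n(s)) ds, and each iterate adds one more term of the exponential series. Polynomials are stored as coefficient lists.

```python
# VIM iterates for u' = u, u(0) = 1, with polynomials as coefficient
# lists: p[k] is the coefficient of t**k.

def deriv(p):
    return [k * p[k] for k in range(1, len(p))] or [0.0]

def integ(p):
    # \int_0^t p(s) ds
    return [0.0] + [p[k] / (k + 1) for k in range(len(p))]

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p)); q = q + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

u = [1.0]                       # u_0(t) = 1, the initial guess
for _ in range(5):
    # correction functional: u_{n+1} = u_n - \int_0^t (u_n' - u_n) ds
    u = sub(u, integ(sub(deriv(u), u)))

print(u)   # coefficients approach 1/k!: the Taylor series of e^t
```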
Mehdi Omidi, Mohsen Mohammadzadeh Darrodi,
Volume 13, Issue 3 (11-2013)
Abstract

Copula functions are powerful tools for constructing the multivariate distribution of dependent variables in terms of their marginal distributions. Each of these functions provides a model which represents all properties of the dependency of the variables. For spatial data analysis, the dependence structure of the data should be determined using the multivariate distribution of the random field. In the analysis of spatio-temporal data it is also necessary to identify the relations between the spatial and temporal structure of the data in terms of a spatio-temporal covariance function. A separable spatio-temporal covariance function is sometimes used for ease of application, but in some applications this property is not realistic, and a non-separable spatio-temporal covariance function is required. In this paper the role of copula functions in determining the joint distribution of a random field is considered, and the properties of a valid spatial copula function are determined. Then a new valid spatial copula family is introduced. Finally, some spatial and non-separable spatio-temporal covariance functions are constructed using these copula functions.
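A minimal sketch of the general copula idea (a bivariate Gaussian copula, not the spatial family introduced in the paper): correlated normals are transformed through the standard normal CDF, giving uniform margins that carry the chosen dependence. The correlation value ρ = 0.8 is an arbitrary choice.

```python
import math, random

def gaussian_copula_pair(rho, rng):
    # correlated standard normals via a 2x2 Cholesky factor
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    # map to uniform margins through the standard normal CDF
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi(z1), phi(z2)

rng = random.Random(42)
sample = [gaussian_copula_pair(0.8, rng) for _ in range(2000)]

# both coordinates are Uniform(0,1) marginally but strongly dependent
u_bar = sum(u for u, _ in sample) / len(sample)
cov_uv = sum((u - 0.5) * (v - 0.5) for u, v in sample) / len(sample)
print(round(u_bar, 3))          # close to 0.5
print(cov_uv > 0.0)             # positive dependence survives the transform
```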
Fatemeh Hosseini, Omid Karimi, Mohsen Mohammadzadeh,
Volume 13, Issue 3 (11-2013)
Abstract

Non-Gaussian spatial responses are usually modeled using spatial generalized linear mixed models (SGLMMs), in which the spatial correlation of the data is introduced via normal latent variables. The main interests in an SGLMM are estimation of the model parameters and prediction of the latent variables at unsampled locations, based on estimates of the latent variables at sampled locations. In these models, because of the latent variables and the non-Gaussian spatial responses, the likelihood function usually cannot be given in closed form, and maximum likelihood estimation may be computationally prohibitive. In this paper, a new algorithm is introduced for maximum likelihood estimation of the model parameters and for prediction, which is faster than the former method. The algorithm combines the pseudo maximum likelihood method, the expectation-maximization gradient algorithm and an approximate method. The performance and accuracy of the proposed model are illustrated through a simulation study. Finally, the model and the algorithm are applied to a case study on rainfall data observed at the weather stations of Semnan in 2012.
Alireza Sarakhsi, Mohammad Jahanshahi,
Volume 13, Issue 3 (11-2013)
Abstract

In this paper, we present a method for the formation and recognition of boundary layers in singular perturbation problems. The method involves four steps for localizing non-local boundary conditions to the local case. For the given problem, some necessary and sufficient conditions are given for the formation and non-formation of boundary layers. Since the existence of boundary layers and their locations have a direct relation with the structure of approximate and uniform solutions, the main purpose of this paper is the recognition and formation of boundary layers in singular perturbation problems with non-local boundary conditions. This is done by using the fundamental solution of the adjoint of the given differential equation together with the necessary conditions. In fact, by using these necessary conditions and the given boundary conditions, we form an algebraic system; solving this system by Cramer's rule, we obtain the boundary values of the unknown function, and these values constitute the local boundary conditions. The mathematical model for this kind of problem is usually in the form of either ordinary differential equations (ODEs) or partial differential equations (PDEs) in which the highest derivative is multiplied by some power of a small positive parameter.
Atefe Mokhtari Hasanabadi, Manouchehr Kheradmandnia,
Volume 13, Issue 3 (11-2013)
Abstract

Monitoring the process mean and variance simultaneously in a single control chart simplifies process monitoring. If, in addition, a simultaneous control chart is capable of recognizing the source of contamination, this capability leads to further simplicity. These are the reasons why simultaneous control charts have attracted many researchers and manufacturers.

Recently, some control charts based on the idea of Bayesian predictive density have been introduced in the statistical process control literature. Charts of this type not only take into account the uncertainty concerning the estimation of unknown parameters, but also do not need extensive simulations for the computation of control limits. Such charts have been introduced for the mean and variance in both univariate and multivariate settings.

Up to now, no simultaneous control chart has been introduced based on Bayesian predictive density. In this paper, using this idea, we introduce a new simultaneous control chart for monitoring a univariate mean and variance, and we illustrate its important capabilities through simulated data. The new chart is applicable when the parameters are unknown; in other words, it takes into account the uncertainty concerning them. It is able to recognize the source of contamination and is sensitive to small changes in the mean and variance. Its control limits can be obtained simply from a normal table, with no need for simulation.
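The Bayesian predictive idea can be sketched in a toy form (a single-observation chart under a vague prior; the paper's simultaneous mean-variance chart is more elaborate). With a normal Phase I sample, the posterior predictive distribution of the next observation is a scaled Student-t; for a large reference sample its quantiles are approximated here by normal ones, consistent with the "normal table" remark above. All numerical settings are hypothetical.

```python
import math, random, statistics

def predictive_limits(ref, alpha=0.0027):
    # Posterior predictive of the next observation given the Phase I
    # sample `ref` is a scaled Student-t under a vague prior; for large
    # len(ref) we approximate its quantiles by normal quantiles.
    n = len(ref)
    m = statistics.fmean(ref)
    s = statistics.stdev(ref)
    scale = s * math.sqrt(1.0 + 1.0 / n)     # predictive standard deviation
    z = statistics.NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return m - z * scale, m + z * scale

rng = random.Random(7)
phase1 = [rng.gauss(10.0, 2.0) for _ in range(200)]   # reference sample
lo, hi = predictive_limits(phase1)

phase2 = [rng.gauss(10.0, 2.0) for _ in range(100)]   # in-control stream
signals = sum(1 for x in phase2 if not (lo <= x <= hi))
print(signals)    # expected: almost always 0 at alpha = 0.0027
```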
Mm Maghami, Nasrollah Iranpanah,
Volume 13, Issue 3 (11-2013)
Abstract

There are several methods for goodness-of-fit testing of the skew-normal distribution. This work focuses on the method of Meintanis [8], which is based on the empirical moment generating function. The test is discussed for both known and unknown shape parameter. Meintanis [8] claimed that the power of his test is higher than that of the Kolmogorov-Smirnov test, but this claim is true only for a known shape parameter. In this paper, we provide a new, more efficient way of computing his test statistic. Moreover, Meintanis [8] did not determine the size of his test for the known shape parameter; in this paper we determine it.
Nader Nematollahi, Azadeh Kiapour,
Volume 13, Issue 3 (11-2013)
Abstract

In the Bayesian framework, robust Bayesian methods concern the estimation of unknown parameters, or the prediction of future observations, by specifying a class of priors instead of a single prior. Robust Bayesian methods have been used extensively in the actuarial sciences for the estimation of premiums and the prediction of future claim sizes. In this paper we consider robust Bayes estimation of the premium and prediction of the future claim size under two classes of prior distributions and under the scale-invariant squared error loss function. Finally, by a simulation study and using prequential analysis, we compare the obtained robust Bayes estimators of future claim size.
Nasrollah Iranpanah, Samaneh Noori Emamzadeh,
Volume 14, Issue 2 (7-2014)
Abstract

Traditional methods for testing the equality of means are based on the normality of observations in each treatment, but parametric bootstrap methods offer a test statistic whose P-value is estimated by resampling. In this article, first, the Fisher, Cochran, Welch, James, Brown-Forsythe, approximate F, Weerahandi, adjusted Welch and parametric bootstrap tests for the hypothesis of equality of means are defined. Then the type I error and power of these tests are compared with each other in a simulation study for various sample and treatment sizes. Finally, the sizes of these tests are calculated for real data from the Esfahan Cement factory.
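A minimal sketch of a parametric bootstrap test for equality of means under unequal variances (a generic version; the paper compares many specific statistics). The Welch-type statistic below is a hypothetical choice; its null distribution is estimated by resampling from normal fits with a common mean, which suffices here because the statistic is location-invariant.

```python
import random, statistics

def welch_stat(groups):
    # Welch-type statistic: variance-weighted squared deviations of the
    # group means from the weighted grand mean (a hypothetical choice)
    w = [len(g) / statistics.variance(g) for g in groups]
    means = [statistics.fmean(g) for g in groups]
    grand = sum(wi * mi for wi, mi in zip(w, means)) / sum(w)
    return sum(wi * (mi - grand) ** 2 for wi, mi in zip(w, means))

def bootstrap_p(groups, B=500, seed=1):
    rng = random.Random(seed)
    t_obs = welch_stat(groups)
    sds = [statistics.stdev(g) for g in groups]
    count = 0
    for _ in range(B):
        # resample under H0: equal means (a common mean of 0 suffices,
        # since the statistic is location-invariant), group variances kept
        boot = [[rng.gauss(0.0, sd) for _ in g] for g, sd in zip(groups, sds)]
        if welch_stat(boot) >= t_obs:
            count += 1
    return count / B

rng = random.Random(3)
a = [rng.gauss(5.0, 1.0) for _ in range(15)]
b = [rng.gauss(5.0, 3.0) for _ in range(20)]
c = [rng.gauss(5.0, 2.0) for _ in range(10)]
print(bootstrap_p([a, b, c]))   # equal true means: p-value typically large
```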
Mahmoud Lotfi Honyandari, S. Mohammad Hosseini,
Volume 14, Issue 2 (7-2014)
Abstract

In recent decades, optimal control problems with partial differential equation constraints have been studied extensively. These problems are very complex, and their numerical solution is of great importance. In this article we discuss the solution of an elliptic optimal control problem. First, by using the finite element method, we obtain the discrete form of the problem. The resulting discrete problem is a large-scale constrained optimization problem. Solving it with traditional methods is difficult and requires a lot of CPU time and memory, but the split Bregman method converts the constrained problem into an unconstrained one and hence saves time and memory. We then use split Bregman iterative methods for solving this problem, and examples show their speed and accuracy for this type of problem. We also solve the problem with the SQP method and compare it with the split Bregman method.
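The split Bregman mechanics (splitting variable, shrinkage step, Bregman update) can be illustrated on a standard small instance, 1-D total-variation denoising, rather than the paper's PDE-constrained problem; the parameters and signal below are hypothetical.

```python
import random

def shrink(x, t):
    # soft-thresholding, the closed-form solution of the d-subproblem
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def D(u):                         # forward differences
    return [u[i + 1] - u[i] for i in range(len(u) - 1)]

def Dt(v):                        # transpose of D
    n = len(v) + 1
    out = [0.0] * n
    out[0] = -v[0]
    for i in range(1, n - 1):
        out[i] = v[i - 1] - v[i]
    out[n - 1] = v[n - 2]
    return out

def split_bregman_tv(f, mu=1.0, lam=2.0, iters=60):
    # min_u (mu/2)||u - f||^2 + ||Du||_1  via the splitting  d = Du
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)
    b = [0.0] * (n - 1)
    for _ in range(iters):
        # u-subproblem: (mu*I + lam*D^T D) u = mu*f + lam*D^T(d - b),
        # solved approximately by a few Gauss-Seidel sweeps
        g = Dt([di - bi for di, bi in zip(d, b)])
        rhs = [mu * fi + lam * gi for fi, gi in zip(f, g)]
        for _ in range(5):
            u[0] = (rhs[0] + lam * u[1]) / (mu + lam)
            for i in range(1, n - 1):
                u[i] = (rhs[i] + lam * (u[i - 1] + u[i + 1])) / (mu + 2 * lam)
            u[n - 1] = (rhs[n - 1] + lam * u[n - 2]) / (mu + lam)
        du = D(u)
        d = [shrink(dui + bi, 1.0 / lam) for dui, bi in zip(du, b)]
        b = [bi + dui - di for bi, dui, di in zip(b, du, d)]
    return u

rng = random.Random(0)
clean = [0.0] * 25 + [5.0] * 25                 # piecewise-constant signal
noisy = [c + rng.gauss(0.0, 0.4) for c in clean]
u = split_bregman_tv(noisy)
tv = lambda v: sum(abs(x) for x in D(v))
print(round(tv(noisy), 2), round(tv(u), 2))     # total variation drops
```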
Behzad Mahmoudian, Mohsen Mohammadzadeh Darrodi, ,
Volume 14, Issue 2 (7-2014)
Abstract

In this article a spatial model is presented for extreme values with marginal generalized extreme value (GEV) distributions. The model is able to capture multi-scale spatial dependencies: small-scale dependence is modeled by means of a copula function, and then, in a hierarchical manner, a random field is related to the location parameters of the marginal GEV distributions to account for large-scale dependence. Bayesian inference for the presented model is accomplished by the proposed Markov chain Monte Carlo (MCMC) design, which consists of a Gibbs sampler, random-walk Metropolis-Hastings and adaptive independence sampler algorithms. In the proposed MCMC design, the vector of location parameters is updated simultaneously from a devised multivariate proposal distribution. We also obtain Bayesian spatial prediction by approximating the predictive distribution. Finally, the estimation of the model parameters and the possibility of capturing and separating multi-scale spatial dependencies are investigated in a simulation example and an analysis of wind speed extremes.
Darius Behmardi, Fatemeh Heydari, Farid Behroozi,
Volume 14, Issue 3 (10-2014)
Abstract

The concept of rotundity is not far from differentiability, and several papers have investigated the relation between rotundity and smoothness. In this paper we explain some new relations between rotundity and very smoothness.
A Banach space is rotund if the midpoint of every two distinct points of the unit sphere lies in the open unit ball. A Banach space is smooth if its norm is Gateaux differentiable at every non-zero point of the space, and it is very smooth if the norm is very Gateaux differentiable, that is, if the norms of the Banach space and of its second dual are Gateaux differentiable at every non-zero point of the Banach space.
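The two defining conditions above can be stated symbolically (standard formulations):

```latex
% Rotund (strictly convex):
\|x\| = \|y\| = 1,\ x \neq y
\;\Longrightarrow\;
\Bigl\| \tfrac{x + y}{2} \Bigr\| < 1 .

% Smooth: the norm is Gateaux differentiable at every x \neq 0, i.e.
\lim_{t \to 0} \frac{\|x + t h\| - \|x\|}{t}
\quad \text{exists for every direction } h .
```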
Yadollah Ordokhani, Neda Rahimi,
Volume 14, Issue 3 (10-2014)
Abstract

In this paper the rationalized Haar (RH) functions method is applied to approximate the numerical solution of fractional Volterra integro-differential equations (FVIDEs). The fractional derivatives are described in the Caputo sense. The properties of RH functions are presented, and the operational matrix of fractional integration, together with the product operational matrix, is used to reduce the computation of FVIDEs to a system of algebraic equations. With this technique, the time and computational cost of solving FVIDEs are small. Numerical examples are given to demonstrate the application of the presented method with the RH function basis.
Nasrollah Iranpanah, Parisa Mikelani,
Volume 17, Issue 40 (9-2015)
Abstract

One of the main goals in studying time series is the estimation of a prediction interval based on an observed sample path of the process. In recent years, different semiparametric bootstrap methods have been proposed to find prediction intervals without any assumption on the error distribution. In semiparametric (sieve) bootstrap methods, a linear process is approximated by an autoregressive process, and the bootstrap samples are generated by resampling from the residuals.

In this paper, these sieve bootstrap methods are first defined; then, in a simulation study, sieve bootstrap prediction intervals are compared with the standard Gaussian prediction interval. Finally, these methods are used to find prediction intervals for weather data of Isfahan.
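A minimal sketch of the sieve bootstrap for a one-step-ahead prediction interval (an AR(1) approximation fitted by least squares; the order, series and interval level are illustrative choices, not the paper's setup):

```python
import random

def fit_ar1(x):
    # least-squares estimate of x_t = phi * x_{t-1} + e_t (mean removed)
    m = sum(x) / len(x)
    xc = [v - m for v in x]
    num = sum(xc[t] * xc[t - 1] for t in range(1, len(xc)))
    den = sum(v * v for v in xc[:-1])
    phi = num / den
    resid = [xc[t] - phi * xc[t - 1] for t in range(1, len(xc))]
    return m, phi, resid

def sieve_bootstrap_pi(x, level=0.90, B=1000, seed=5):
    m, phi, resid = fit_ar1(x)
    rbar = sum(resid) / len(resid)
    centered = [r - rbar for r in resid]
    rng = random.Random(seed)
    preds = []
    for _ in range(B):
        # one-step-ahead bootstrap prediction: plug-in forecast plus a
        # residual resampled from the centered AR residuals
        preds.append(m + phi * (x[-1] - m) + rng.choice(centered))
    preds.sort()
    lo = preds[int((1 - level) / 2 * B)]
    hi = preds[int((1 + level) / 2 * B)]
    return lo, hi

rng = random.Random(11)
x, prev = [], 0.0
for _ in range(300):                 # simulate an AR(1) path with phi = 0.6
    prev = 0.6 * prev + rng.gauss(0.0, 1.0)
    x.append(prev)
lo, hi = sieve_bootstrap_pi(x)
print(round(lo, 2), round(hi, 2))    # interval around the plug-in forecast
```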
Behrooz Khadem, Amir Daneshgar, Fahimeh Mohebbipoor,
Volume 17, Issue 40 (9-2015)
Abstract

In this paper we introduce a word-based stream cipher consisting of a chaotic part operating as a chaotic permutation and a linear part, both designed over a finite field. We show that this system can operate in both synchronized and self-synchronized modes. In particular, we show that in the self-synchronized mode the stream cipher has a receiver operating as an unknown-input observer.
In addition, we evaluate the statistical uniformity of the output, and we show that the system in the self-synchronized mode is much faster and lighter to implement than similar self-synchronized systems with equal key size.
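The self-synchronization property itself can be demonstrated with a toy ciphertext-feedback construction (a generic textbook scheme, unrelated to the paper's chaotic design): each keystream byte depends only on the key and the last n ciphertext bytes, so the receiver resynchronizes after any transmission error once n correct ciphertext bytes have passed.

```python
import hashlib

N = 4  # keystream depends on the last N ciphertext bytes

def keystream_byte(key, window):
    # toy keyed function of the ciphertext window (not the paper's design)
    return hashlib.sha256(key + bytes(window)).digest()[0]

def encrypt(key, plaintext):
    window, out = [0] * N, []
    for p in plaintext:
        c = p ^ keystream_byte(key, window)
        out.append(c)
        window = window[1:] + [c]
    return bytes(out)

def decrypt(key, ciphertext):
    window, out = [0] * N, []
    for c in ciphertext:
        out.append(c ^ keystream_byte(key, window))
        window = window[1:] + [c]
    return bytes(out)

key = b"demo-key"
msg = b"self-synchronizing stream cipher demo"
ct = bytearray(encrypt(key, msg))
ct[5] ^= 0xFF                       # corrupt one ciphertext byte in transit
pt = decrypt(key, bytes(ct))
# bytes 5..5+N are garbled, but everything after index 5+N is recovered
print(pt[5 + N + 1:] == msg[5 + N + 1:])   # True
```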
Milad Rahimi, Mousa Golalizadeh,
Volume 17, Issue 40 (9-2015)
Abstract

Diffusion processes such as Brownian motions and Ornstein-Uhlenbeck processes are classes of stochastic processes that have been considered by researchers in various scientific disciplines, including the biological sciences. It is usually assumed that the outcomes of these processes lie in Euclidean spaces. However, some data that appear in physical, chemical and biological phenomena cannot be considered as observations in Euclidean spaces, owing to features such as the periodicity of the data. Hence, we cannot analyze them using the common mathematical methods available in Euclidean spaces, and studying them with common linear statistics is likewise not possible. A typical example is the dihedral angles that are used to identify, model and predict protein backbones. Because these angles represent points on the surface of a torus, proper statistical modeling of diffusion processes on the torus could be of great help in dynamic molecular simulations for predicting protein backbones. In this article, using the Riemannian distance on the torus, the stochastic differential equations describing Brownian motions and Ornstein-Uhlenbeck processes on this geometric object are derived. Then, in order to evaluate the proposed models, statistical simulations are performed using the equilibrium distributions of the aforementioned stochastic processes. Moreover, the link between the obtained results and the available concepts of non-linear statistics is highlighted.
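On the flat torus, the two angular coordinates of a Brownian motion can be simulated simply by wrapping a Euclidean Euler-Maruyama scheme modulo 2π (a minimal sketch under the flat-metric assumption, not the paper's Riemannian construction):

```python
import math, random

def brownian_on_torus(n_steps, dt=0.01, sigma=1.0, seed=2):
    # Euler-Maruyama for two independent angular components, wrapped
    # into [0, 2*pi): a Brownian motion on the flat torus
    rng = random.Random(seed)
    phi, psi = 0.0, 0.0
    path = [(phi, psi)]
    for _ in range(n_steps):
        phi = (phi + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)) % (2 * math.pi)
        psi = (psi + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)) % (2 * math.pi)
        path.append((phi, psi))
    return path

path = brownian_on_torus(1000)
print(len(path))        # 1001 states, including the starting angles
# every state stays on the torus, i.e. both angles lie in [0, 2*pi)
print(all(0 <= a < 2 * math.pi and 0 <= b < 2 * math.pi for a, b in path))
```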
Alireza Keshvari, Sm Hosseni,
Volume 17, Issue 40 (9-2015)
Abstract

A new technique is presented for finding the optimal regularization parameter in the TSVD regularization method, based on a curve drawn against the residual norm [5]. Since TSVD regularization is a method with a discrete regularization parameter, the above-mentioned curve is also discrete. In this paper we present a mathematical analysis of this curve, showing that it has an L-shaped path very similar to that of the classical L-curve, and that its corner point represents the optimal regularization parameter very well. To find the corner point of the L-curve (the optimal parameter), two methods are applied: pruning and triangle. Numerical results show that in the considered test problems the new curve performs better than the classical L-curve.
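The triangle idea for locating a discrete L-curve corner can be sketched as follows (a generic version on synthetic log-log points; the pruning algorithm and the paper's new curve are not reproduced): the corner is the interior vertex where the angle formed with its neighbours is sharpest.

```python
import math

def corner_index(points):
    # points: list of (log residual norm, log solution norm) pairs;
    # return the interior vertex with the sharpest (smallest) angle
    best, best_angle = None, math.pi
    for i in range(1, len(points) - 1):
        ax, ay = points[i - 1][0] - points[i][0], points[i - 1][1] - points[i][1]
        bx, by = points[i + 1][0] - points[i][0], points[i + 1][1] - points[i][1]
        dot = ax * bx + ay * by
        na, nb = math.hypot(ax, ay), math.hypot(bx, by)
        angle = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
        if angle < best_angle:
            best, best_angle = i, angle
    return best

# synthetic L-shaped data: steep vertical branch, then a flat horizontal one
vertical = [(0.0, 5.0 - k) for k in range(5)]        # (0,5) .. (0,1)
horizontal = [(1.0 + k, 0.0) for k in range(5)]      # (1,0) .. (5,0)
pts = vertical + [(0.0, 0.0)] + horizontal           # corner placed at (0,0)
print(corner_index(pts))                             # 5, the corner's index
```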
Nasrin Mahdianfard, Mohsen Mohammadzadeh,
Volume 17, Issue 40 (9-2015)
Abstract

The linking of geographic information systems with decision-making approaches has driven the invention and development of spatial data melding methods. Data melding methods combine data to achieve a better result; their aim is to extract the information available in the data set, in order to enhance the interpretability of the data and increase the accuracy of the analysis. In this paper, the Bayesian melding method is studied for combining measurements, the outputs of deterministic models, and kriging methods. Using spatial Bayesian melding and kriging, spatial prediction of ozone data in Tehran is attempted, and the results are validated and compared using the mean squared error criterion.
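The kriging component can be sketched in its simplest (zero-mean, simple kriging) form with an exponential covariance; the locations, range and observations below are hypothetical, and the Bayesian melding layer is not shown. Note that without a nugget effect, kriging interpolates exactly at the data locations.

```python
import math

def cov(h, sill=1.0, rng=2.0):
    # exponential covariance function C(h) = sill * exp(-h / range)
    return sill * math.exp(-h / rng)

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def simple_krige(sites, z, s0):
    # weights w = K^{-1} k; prediction z_hat(s0) = w . z (known mean 0)
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    K = [[cov(dist(si, sj)) for sj in sites] for si in sites]
    k = [cov(dist(si, s0)) for si in sites]
    w = solve(K, k)
    return sum(wi * zi for wi, zi in zip(w, z))

sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]   # hypothetical
z = [1.2, 0.7, 0.9, -0.3]
print(round(simple_krige(sites, z, (0.5, 0.5)), 3))   # interpolated value
print(round(simple_krige(sites, z, sites[0]), 3))     # exact at a datum: 1.2
```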

© 2024 CC BY-NC 4.0 | Quarterly Journal of Science Kharazmi University
